1. Introduction
This section is not normative.
Incremental Font Transfer (IFT) is a technology to improve the latency of remote fonts (or "web fonts") on the web. Without this technology, a browser needs to download every last byte of a font before it can render any characters using that font. IFT allows the browser to download only some of the bytes in the file, thereby decreasing the perceived latency between the time when a browser realizes it needs a font and when the necessary text can be rendered with that font. Unlike traditional font subsetting approaches, Incremental Font Transfer retains the encoding of layout rules between segments (Progressive Font Enrichment: Evaluation Report § fail-subset).
The success of WebFonts is unevenly distributed. This specification allows WebFonts to be used where slow networks, very large fonts, or complex subsetting requirements currently preclude their use. For example, even using WOFF 2 [WOFF2], fonts for CJK languages can be too large to be practical.
1.1. Technical Motivation: Evaluation Report
See the Progressive Font Enrichment: Evaluation Report [PFE-report] for the investigation which led to this specification.
1.2. Overview
An incremental font is a regular OpenType font that is reformatted to include incremental functionality, chiefly by means of two additional tables. Using these new tables the font can be augmented (e.g. to cover more code points) by loading and applying patches to it.
The IFT technology has four main pieces:
-
§ 4 Extending a Font Subset: provides the algorithm that is used by a client to select and apply patches.
-
§ 5 Extensions to the Font Format: defines the new tables which contain a list of patches that are available to be applied to a font.
-
§ 6 Font Patch Formats: defines three different types of patches that can be used. Two are "generic" binary patches, one is specific to the font’s format for storing glyph data.
-
§ 7 Encoding: creates the font and associated patches that form an incremental font.
At a high level an incremental font is used like this:
-
The client downloads an initial font file, which contains some initial subset of data from the full version of the font along with embedded data describing the set of patches which can be used to extend the font.
-
Based on the content to be rendered, the client selects, downloads, and applies patches to extend the font to cover additional characters, layout features, and/or variation space. This step is repeated each time there is new content.
1.3. Creating an Incremental Font
It is expected that the most common way to produce an incremental font will be to convert an existing font to use the incremental encoding defined in this specification. At a high level converting an existing font to be incremental will look like this:
-
Choose the content of the initial subset. This will be included in the initial font file the client loads and usually consists of any data from the original font that is expected to always be needed.
-
Choose the patch type or types. Different patch types have different qualities, so different use cases can call for different patch types, or in some cases a mix of two types.
-
Choose a segmentation of the font. Individual segments will be added to the base subset by the client using patches. Choosing an appropriate segmentation is one of the most important parts of producing an efficient encoding.
-
Based on these choices, generate a set of patches, where each patch adds the data for a particular segment relative to either the initial font file or a previous patch.
-
Generate the initial font file including the initial subset and a patch mapping. This mapping lists all of the available patch files, the URIs they reside at, and information on what data each patch will add to the font.
Note: this is a highly simplified description of creating an incremental font; a more in-depth discussion of generating an encoding and requirements on the encoding can be found in the § 7 Encoding section.
1.4. Performance Considerations and the use of Incremental Font Transfer
Using incremental transfer may not always be beneficial, depending on the characteristics of the font, the network, and the content being rendered. This section provides non-normative guidance to help decide when incremental transfer should be utilized.
It is common for an incremental font to trigger the loading of multiple patches in parallel. To maximize performance, it is therefore recommended that an incremental font be served by an HTTP server capable of multiplexing (such as [rfc9113] or [rfc9114]).
Incrementally loading a font has a fundamental performance trade off versus loading the whole font. Simplistically, under incremental transfer fewer bytes may be transferred at the potential cost of increasing the total number of network requests being made, and/or increased request processing latency. In general incremental font transfer will be beneficial where the reduction in latency from sending fewer bytes outweighs the additional latency introduced by the incremental transfer method.
The first factor to consider is the language of the content being rendered. The evaluation report contains the results of simulating incremental font transfer across three categories of languages (Progressive Font Enrichment: Evaluation Report § langtype). See its conclusions (Progressive Font Enrichment: Evaluation Report § conclusions) for a discussion of the anticipated performance of incremental font transfer across the language categories.
Next, how much of the font is expected to be needed? If it’s expected that most of the font will be needed to render the content, then incremental font transfer is unlikely to be beneficial. In many cases, however, only part of a font is expected to be needed. For example:
-
If the font contains support for several languages but a user is expected to only render content in a subset of those languages.
-
If the content being rendered uses a small subset of the total characters in a font. This is often the case for Chinese, Japanese, Korean, Emoji, and Icon fonts.
-
Only a small amount of text is being rendered, for example a font that is used only for a headline.
An alternative to incremental transfer is to break a font into distinct subsets (typically by script) and use the unicode range feature of @font-face to load only the subsets needed. However, this can alter the rendering of some content if there are layout rules between characters in different subsets (Progressive Font Enrichment: Evaluation Report § fail-subset). Incremental font transfer does not suffer from this issue as it can encompass the original font and all of its layout rules.
1.4.1. Reducing the Number of Network Requests
As discussed in the previous section, the most basic implementation of incremental font transfer will tend to increase the total number of requests made versus traditional font loading. Since each augmentation will typically require at least one network round trip, performance can be negatively impacted if too many requests are made. Depending on which patch types are available and how much information is provided in the patch mapping, a client might preemptively request patches for code points that are not currently needed, but are expected to be needed in the future. Intelligent use of this feature by an implementation can help reduce the total number of requests being made. The evaluation report explored this by testing the performance of a basic character-frequency-based code point prediction scheme and found it improved overall performance.
2. Opt-In Mechanism
Web pages can choose to opt-in to incremental transfer for a font via the use of a CSS font tech keyword (CSS Fonts 4 § 11.1 Font tech) inside the ''@font-face'' block. The keyword incremental is used to indicate the referenced font contains IFT data and should only be loaded by a user agent which supports incremental font transfer.
@font-face {
  font-family: "MyCoolWebFont";
  src: url("MyCoolWebFont-Incremental.otf") tech(incremental);
}
@font-face {
  font-family: "MyCoolWebFont";
  src: url("MyCoolWebFont-Incremental.otf") tech(incremental);
  unicode-range: U+0000-00FF;
}
As shown in the second example, unicode-range can be used in conjunction with an IFT font. The unicode ranges should be set to match the coverage of the fully extended font. This will allow clients to avoid trying to load the IFT font if the font does not support any code points which are needed.
An alternative approach is to use the CSS supports mechanism which can make selections based on font tech:
@when font-tech(incremental) {
  @font-face {
    font-family: "MyCoolWebFont";
    src: url("MyCoolWebFont-Incremental.otf");
  }
} @else {
  @font-face {
    font-family: "MyCoolWebFont";
    src: url("MyCoolWebFont.otf");
  }
}
Note: Each individual @font-face block may or may not opt-in to IFT. This is due to the variety of ways fonts are used on web pages. Authors have control over which fonts they want to use this technology with, and which they do not.
Note: the IFT tech keyword can be used in conjunction with other font tech specifiers to perform font feature selection. For example a @font-face could include two URIs, one with tech(incremental, color-COLRv1) and the other with tech(incremental, color-COLRv0).
2.1. Offline Usage
In some cases a user agent may wish to save a web page for offline use. Saved pages may be viewed while there is no network connection, and thus it won’t be possible to request any additional patches referenced by an incremental font. Since it won’t be possible to extend incremental fonts if content changes (e.g. due to JavaScript execution), the page saving mechanism should fully expand the incremental font by invoking Fully Expand a Font Subset and replace references to the incremental font with the fully expanded one.
3. Definitions
3.1. Font Subset
A font subset is a modified version of a font file [iso14496-22] that contains only the data needed to render a subset of:
-
the code points,
-
the layout features, and
-
the design-variation space
supported by the original font. When a subsetted font is used to render text using any combination of the subset code points, layout features, or design-variation space it should render identically to the original font. This includes rendering with the use of any optional typographic features that a renderer may choose to use from the original font, such as hinting instructions. Design variation spaces are specified using the user-axis scales (OpenType Specification § otvaroverview#coordinate-scales-and-normalization).
A font subset definition describes the minimum data (code points, layout features, variation axis space) that a font subset should support.
Note: For convenience the remainder of this document links to the [open-type] specification which is a copy of [iso14496-22].
3.2. Font Patch
A font patch is a file which encodes changes to be made to an IFT-encoded font. Patches are used to extend an existing font subset and provide expanded coverage.
A patch format is a specified encoding of changes to be applied relative to a font subset. A set of changes encoded according to the format is a font patch. Each patch format has an associated patch application algorithm which takes a font subset and a font patch encoded in the patch format as input and outputs an extended font subset.
3.3. Patch Map
A patch map is an OpenType table which encodes a collection of mappings from font subset definitions to URIs which host patches that extend the incremental font. A patch map table encodes a list of patch map entries, where each entry has a key and value. The key is one or more font subset definitions and the value is a URI, the § 6 Font Patch Formats used by the data at the URI, and the compatibility ID of the patch map table. More details of the format of patch maps can be found in § 5 Extensions to the Font Format.
Patch Map Entry summary:
Key | Value |
---|---|
one or more font subset definitions | patch URI, patch format (§ 6 Font Patch Formats), compatibility ID |
3.4. Explanation of Data Types
Encoded data structures in the remainder of this specification are described in terms of the data types defined in OpenType Specification § otff#data-types. As with the rest of OpenType, all fields use "big-endian" byte ordering.
4. Extending a Font Subset
This section defines the algorithm that a client uses to extend an incremental font subset to cover additional code points, layout features and/or design space. It is an iterative algorithm which repeatedly:
-
parses the font subset’s patch mappings into a list of available patches.
-
checks if any available patches match the content to be rendered.
-
selects one available patch, loads it, and then applies it.
This process repeats until no more relevant patches remain. Since a patch application may alter the patch mappings embedded in the font file, on each iteration the patch map in the current version of the font subset is reparsed to see what patches remain. Thus, on each iteration, the font subset is the source of truth for what patches are available and fully encapsulates the current state of the augmentation process.
4.1. Patch Invalidations
The patch mappings embedded in a font subset encode an invalidation mode for each patch. The invalidation mode for a patch marks which other patches will no longer be valid after the application of that patch. This invalidation mode is used by the extension algorithm to determine which patches are compatible and influences the order of selection. Patch validity during patch application is enforced by the compatibility ID from the § 5.2 Patch Map Table. Every patch has a compatibility ID encoded within it which needs to match the compatibility ID from the § 5.2 Patch Map Table which lists that patch.
There are three invalidation modes:
-
Full Invalidation: when this patch is applied all other patches currently listed in the font subset are invalidated. The compatibility ID in both the 'IFT ' and 'IFTX' § 5.2 Patch Map Table will be changed.
-
Partial Invalidation: when this patch is applied all other patches in the same § 5.2 Patch Map Table will be invalidated. The compatibility ID of only the § 5.2 Patch Map Table which contains this patch will be changed.
-
No Invalidation: no other patches will be invalidated by the application of this patch. The compatibility ID of the 'IFT ' and 'IFTX' § 5.2 Patch Map Table will not change.
The invalidation mode of a specific patch is encoded in its format number, which can be found in § 6.1 Formats Summary.
4.2. Default Layout Features
Most text shapers have a set of layout features which are always enabled and thus always required in an incrementally loaded font. Appendix A: Default Feature Tags collects a list of features that at the time of writing are known to be required by default in common shaper implementations. When forming a font subset definition as input to the extension algorithm the client should typically include all features found in Appendix A: Default Feature Tags in the subset definition. However, in some cases the client might know that the specific shaper which will be used may not make use of some features in Appendix A: Default Feature Tags and may opt to exclude those unused features from the subset definition.
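As a non-normative illustration, a client might build such a subset definition as in the following Python-style sketch. SubsetDefinition, make_target_subset_definition and DEFAULT_FEATURE_TAGS are hypothetical client-side names; the authoritative list of default features is in Appendix A: Default Feature Tags.

from dataclasses import dataclass, field

# Illustrative only; the full list of default feature tags is given in Appendix A.
DEFAULT_FEATURE_TAGS = {"ccmp", "liga", "rlig"}

@dataclass
class SubsetDefinition:
    code_points: set = field(default_factory=set)
    feature_tags: set = field(default_factory=set)
    design_space: dict = field(default_factory=dict)   # axis tag -> list of (start, end)

def make_target_subset_definition(text, design_space=None):
    # Include the default layout features so shaping works once the patches are applied.
    return SubsetDefinition(
        code_points={ord(c) for c in text},
        feature_tags=set(DEFAULT_FEATURE_TAGS),
        design_space=design_space or {},
    )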
4.3. Incremental Font Extension Algorithm
The following algorithm is used by a client to extend an incremental font subset to cover additional code points, layout features and/or design space. This algorithm incrementally selects and applies patches to the font one at a time, loading them as needed. As an important optimization to minimize network round trips it permits loads to be started for all patches that will eventually be needed. § 4.1 Patch Invalidations is used to determine which patches loads can be started for. Any patches which match the target subset definition and will not be invalidated by the next patch to be applied according to § 4.1 Patch Invalidations can be expected to be needed in future iterations.
Extend an Incremental Font Subset
The inputs to this algorithm are:
-
font subset: an incremental font subset.
-
initial font subset URI: an absolute URI which identifies the location of the initial incremental font that font subset was derived from.
-
target subset definition: the font subset definition that the client wants to extend font subset to cover.
The algorithm outputs:
-
extended font subset: an extended version of font subset. May or may not be an incremental font.
The algorithm:
-
Set extended font subset to font subset.
-
Load the 'IFT ' and 'IFTX' (if present) mapping tables from extended font subset. Both tables are formatted as a § 5.2 Patch Map Table. Check that they are valid according to the requirements in § 5.2 Patch Map Table. If either table is not valid, invoke Handle errors. If extended font subset does not have an 'IFT ' table, then it is not an incremental font and cannot be extended, return extended font subset.
-
If the compatibility ID in 'IFT ' is equal to the compatibility ID in 'IFTX' this is an error, invoke Handle errors.
-
For each of tables 'IFT ' and 'IFTX' (if present): convert the table into a list of entries by invoking Interpret Format 1 Patch Map or Interpret Format 2 Patch Map. Concatenate the returned entry lists into a single list, entry list.
-
For each entry in entry list invoke Check entry intersection with entry and target subset definition as inputs, if it returns false remove entry from entry list. Additionally remove any entries in entry list which have a patch URI which was loaded and applied previously during the execution of this algorithm.
-
If entry list is empty, then the extension operation is finished, return extended font subset.
-
Group the entries from entry list into 3 lists based on the invalidation mode of the patch format of each entry: full invalidation entry list, partial invalidation entry list, and no invalidation entry list.
-
Pick one entry with the following procedure:
-
If full invalidation entry list is not empty then, select exactly one of the contained entries. Follow the criteria in § 4.4 Selecting Invalidating Patches to select the single entry.
-
Otherwise if partial invalidation entry list is not empty then, select exactly one of the contained entries. Follow the criteria in § 4.4 Selecting Invalidating Patches to select the single entry.
-
Otherwise select exactly one of the entries in no invalidation entry list. The criteria for selecting the single entry is left up to the implementation to decide.
-
-
Start a load of patch file (if not already previously started) by invoking Load patch file with the initial font subset URI as the initial font URI and the entry patch URI as the patch URI. Additionally the client may optionally start loads using the same procedure for any entries in entry list which will not be invalidated by entry. The total number of patches that a client can load and apply during a single execution of this algorithm is limited to:
-
At most 100 patches which are Partial Invalidation or Full Invalidation.
-
At most 2000 patches of any type.
If either count has been exceeded, this is an error; invoke Handle errors.
-
-
Once the load for patch file is finished, apply patch file to extended font subset using the appropriate application algorithm from § 6 Font Patch Formats (the one matching the patch format in entry), with the patch URI and the compatibility ID from entry as inputs.
-
Go to step 2.
Note: in step 9 the client may optionally start loads for patches which are not invalidated by the currently selected entry. This is an important optimization which significantly improves performance by eliminating network round trips where possible by initiating loads for patches that will be needed in later iterations.
Note: if the only remaining intersecting entries are no invalidation entries the intersection check in step 5 does not need to be repeated on each iteration since no invalidation patches won’t change the list of available patches upon application (other than to remove the just applied patch).
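The following non-normative Python-style sketch illustrates the overall shape of the extension loop described above. The helpers (parse_patch_maps, check_entry_intersection, select_invalidating_entry, load_patch_file, apply_patch) and the entry fields are illustrative stand-ins for the algorithms and data defined in this section, not a definitive implementation; patch count limits and error handling are omitted.

# Non-normative sketch of the "Extend an Incremental Font Subset" loop.
def extend_font_subset(font_subset, initial_font_uri, target_subset_definition):
    applied_uris = set()
    while True:
        # Steps 2-4: reparse the 'IFT ' / 'IFTX' patch maps on every iteration.
        entries = parse_patch_maps(font_subset)
        # Step 5: keep entries that intersect the target and were not applied already.
        entries = [e for e in entries
                   if check_entry_intersection(e, target_subset_definition)
                   and e.patch_uri not in applied_uris]
        if not entries:
            return font_subset            # Step 6: nothing left to apply.

        # Steps 7-8: group by invalidation mode and pick a single entry.
        full = [e for e in entries if e.invalidation == "full"]
        partial = [e for e in entries if e.invalidation == "partial"]
        if full:
            entry = select_invalidating_entry(full, target_subset_definition)
        elif partial:
            entry = select_invalidating_entry(partial, target_subset_definition)
        else:
            entry = entries[0]            # No-invalidation choice is implementation defined.

        # Steps 9-10: load and apply the selected patch, then loop back to step 2.
        patch = load_patch_file(entry.patch_uri, initial_font_uri)
        font_subset = apply_patch(font_subset, patch, entry)
        applied_uris.add(entry.patch_uri)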
Check entry intersection
The inputs to this algorithm are:
-
mapping entry: a patch map entry.
-
subset definition: a font subset definition.
The algorithm outputs:
-
intersects: true if subset definition intersects mapping entry, otherwise false.
The algorithm:
-
For each subset definition in mapping entry and each set in subset definition (code points, feature tags, design space) check if the set intersects the corresponding set from the mapping entry subset definition. A set intersects when:
 | subset definition set is empty | subset definition set is not empty |
---|---|---|
mapping entry set is empty | true | true |
mapping entry set is not empty | false | true if the two sets intersect |
When checking design space sets for intersection, they intersect if there is at least one pair of intersecting segments (tags are equal and the ranges intersect).
-
If all sets checked in step 1 intersect, then return true for intersects otherwise false.
mapping entry | subset definition | intersects? |
---|---|---|
subset definitions: [ { code points: {1, 2, 3}, feature tags: {}, design space: {} } ] | code points: {2}, feature tags: {}, design space: {} | true |
subset definitions: [ { code points: {1, 2, 3}, feature tags: {}, design space: {} } ] | code points: {5}, feature tags: {}, design space: {} | false |
subset definitions: [ { code points: {1, 2, 3}, feature tags: {}, design space: {} } ] | code points: {2}, feature tags: {smcp}, design space: {} | true |
subset definitions: [ { code points: {1, 2, 3}, feature tags: {}, design space: {} } ] | code points: {}, feature tags: {smcp}, design space: {} | false |
subset definitions: [ { code points: {1, 2, 3}, feature tags: {}, design space: {} }, { code points: {4, 5, 6}, feature tags: {}, design space: {} } ] | code points: {2}, feature tags: {}, design space: {} | false |
subset definitions: [ { code points: {1, 2, 3}, feature tags: {}, design space: {} }, { code points: {4, 5, 6}, feature tags: {}, design space: {} } ] | code points: {2, 6}, feature tags: {}, design space: {} | true |
Load patch file
The inputs to this algorithm are:
-
Patch URI: A URI Reference identifying the patch file to load. As a URI reference this may be a relative path.
-
Initial Font URI: An absolute URI which identifies the initial incremental font that the patch URI was derived from.
The algorithm outputs:
-
patch file: the content (bytes) identified by Patch URI.
The algorithm:
-
Perform reference resolution on Patch URI using Initial Font URI as the base URI to produce the target URI.
-
Retrieve the contents of target URI using the fetching capabilities of the implementing user agent. For web browsers, [fetch] should be used. When using [fetch] a request for patches should use the same CORS settings as the initial request for the IFT font. This means that for a font loaded via CSS the patch request would follow: CSS Fonts 4 § 4.8.2 Font fetching requirements.
-
Return the retrieved contents as patch file, or an error if the fetch resulted in an error.
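As a non-normative illustration, the reference resolution and retrieval steps could look roughly like the following outside of a browser context; a browser implementation would instead use [fetch] with the CORS settings described above.

# Non-normative sketch of "Load patch file" using Python's standard library.
from urllib.parse import urljoin
from urllib.request import urlopen

def load_patch_file(patch_uri, initial_font_uri):
    # Step 1: resolve the (possibly relative) patch URI against the initial font URI.
    target_uri = urljoin(initial_font_uri, patch_uri)
    # Step 2: retrieve the contents of the target URI.
    with urlopen(target_uri) as response:
        # Step 3: return the retrieved bytes as the patch file.
        return response.read()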
Handle errors
If the process of extending the font subset has failed with an error, then some of the data within the font may not be fully loaded; as a result, rendering content which relies on the missing data may produce incorrect renderings. The client may choose to continue using the font, but should only use it for the rendering of code points, features, and design space that are fully loaded according to § 4.6 Determining what Content a Font can Render. Rendering of all other content should fall back to a different font following normal client fallback logic.
If the error occurred during Load patch file, then the client may continue trying to extend the font subset if there are remaining patches available other than the one(s) that failed to load. In the case of all other errors the client must not attempt to further extend the font subset.
4.4. Selecting Invalidating Patches
During execution of the Extend an Incremental Font Subset algorithm it is in some cases necessary to select a single invalidating (full or partial) patch entry from a list of candidate entries. The selection criteria used have a direct impact on the total number of round trips that will be needed to perform the extension. Round trips are costly, so for maximum performance patches should be selected in a way that minimizes the total number of round trips needed.
The following selection criteria minimize round trips and must be used by the client when selecting a single partial or full invalidation patch in step 8 of Extend an Incremental Font Subset:
-
For each candidate entry compute the set intersection between each subset definition in the entry and the target subset definition. Union the resulting intersections together into a single subset definition.
-
Find an entry whose intersection subset definition from step 1 is not a strict subset of any other intersection subset definition.
-
Locate any additional entries that are in the same patch map and have the same intersection as the entry found in step 2. From this set of entries (including the step 2 pick) the final selection is the entry which is listed first in the patch map. For § 5.2.1 Patch Map Table: Format 1 this is the entry with the lowest entry index. For § 5.2.2 Patch Map Table: Format 2 this is the entry that appears first in the entries array.
Note: a fast and efficient way to find an entry which satisfies the criteria for step 2 is to sort the entries (descending) by the size of the code point set intersection, then the size of feature tag set intersection, and finally the size of the design space intersection. The first entry in this sorting is guaranteed to not be a strict subset of any other entries, since any strict super sets would have to be at least one item larger. This approach also has the added benefit that it selects the patch which will add the most data which the client is currently requesting.
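A non-normative sketch of the sorting approach described in this note is shown below; intersection_with and order_in_patch_map are hypothetical client-side helpers that would be computed while parsing the patch map.

# Non-normative sketch of selecting an invalidating patch entry by sorting candidates on
# the size of their intersection with the target subset definition (see the note above).
def select_invalidating_entry(candidates, target_subset_definition):
    def sort_key(entry):
        code_points, feature_tags, design_space = intersection_with(entry, target_subset_definition)
        # Larger intersections sort first; ties are broken by the entry's order in the patch map.
        return (-len(code_points), -len(feature_tags), -len(design_space), entry.order_in_patch_map)
    return min(candidates, key=sort_key)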
4.5. Target Subset Definition
The Extend an Incremental Font Subset algorithm takes as an input a target subset definition based on some content that the client wants to render. The client may choose to form one single subset definition for the content as a whole and run the extension algorithm once. Alternatively, the client may instead break the content up into smaller spans, form a subset definition for each span, and run the extension algorithm on each of the smaller subset definitions. Either approach will ultimately produce a font which equivalently renders the overall content as long as:
-
Each span of text which generates a subset definition is built from only one or more complete shaping units, where a shaping unit is a span of text which the client will process together as a single unit during text shaping.
4.6. Determining what Content a Font can Render
Given some incremental font (whether the initial font or one that has been partially extended) a client may wish to know what content that font can render in its current state. This is of particular importance where the client is looking to determine which portions of the text to use fallback fonts for.
During fallback processing a client would typically check the font’s cmap table to determine which code points are supported; however, in an IFT font, due to the way § 6.3 Glyph Keyed patches work, the cmap table may contain mappings for code points which do not yet have the corresponding glyph data loaded. As a result the client should not rely solely on the cmap table to determine code point presence. Instead the following procedure can be used by a client to check what parts of some content an incremental font can render:
-
Split the content up into the shaping units (see § 4.5 Target Subset Definition) on which the content will be processed during text shaping.
-
For each shaping unit there are two checks:
-
First, if for any code point in the shaping unit there is not a cmap entry for it, or the entry maps to glyph 0, then the incremental font does not fully support rendering the shaping unit.
-
Second, compute the corresponding font subset definition and execute the Extend an Incremental Font Subset algorithm, stopping at step 6. If the entry list is not empty then the incremental font does not fully support rendering the shaping unit.
-
-
Any shaping units that passed both checks can be rendered in their entirety with the font.
The client may also wish to know what the font can render at a more granular level than a shaping unit. The following pseudocode demonstrates a possible method for splitting a shaping unit which failed the above check into spans which can be rendered using the incremental font:
# Returns a list of spans, [start, end] inclusive, within shaping_unit that are
# supported by and can be safely rendered with ift_font.
#
# shaping_unit is an array where each item has a code point, associated list of
# layout features, and the design space point that code point is being rendered with.
def supported_spans(shaping_unit, ift_font):
    current_start = current_end = current_subset_def = None
    supported_spans = []
    i = 0
    while i < shaping_unit.length():
        if current_subset_def is None:
            current_subset_def = SubsetDefinition()
            current_start = i

        current_end = i
        current_subset_def.add(shaping_unit.codepoint_at(i),
                               shaping_unit.features_at(i),
                               shaping_unit.design_space_point_at(i))

        if supports_subset_def(ift_font, current_subset_def):
            i += 1
            continue

        if current_end > current_start:
            supported_spans.append(Span(current_start, current_end - 1))
            # i isn't incremented so the current code point can be checked on its own
            # in the next iteration.
        else:
            i += 1

        current_start = current_end = current_subset_def = None

    # Close out any span that is still open when the end of the shaping unit is reached.
    if current_subset_def is not None:
        supported_spans.append(Span(current_start, current_end))

    return supported_spans

# Returns true if ift_font has support for rendering content covered by subset_def.
def supports_subset_def(ift_font, subset_def):
    # Return true only if both of the following two checks are true:
    # - Each code point in subset_def is mapped to a glyph id other than '0' by ift_font's cmap table.
    # - After executing the "Extend an Incremental Font Subset" algorithm on ift_font with
    #   subset_def and stopping at step 6, the entry list is empty.
Any text from the shaping unit which is not covered by one of the returned spans is not supported by the incremental font and should be rendered with a fallback font. Each span should be shaped in isolation (i.e. each span becomes a new shaping unit). Because this method splits a shaping unit up, not all features of the original font, such as multi code point substitutions, may be present. If the client is correctly following the Extend an Incremental Font Subset algorithm with a subset definition formed according to § 4.5 Target Subset Definition then the missing data will be loaded and this case will only occur temporarily while the relevant patch is loading. Once the missing patch arrives and has been applied the rendering of the affected code points may change as a result of the substitution.
Note: The "supported_spans(...)" check above should not be used to drive incremental font extension. Target subset definitions for full executions of Extend an Incremental Font Subset should follow the guidelines in § 4.5 Target Subset Definition.
4.7. Fully Expanding a Font
This section defines an algorithm that can be used to transform an incremental font into a fully expanded, non-incremental font. This process loads all available data provided by the incremental font and produces a single static font file that contains no further patches to be applied.
Fully Expand a Font Subset
The inputs to this algorithm are:
-
font subset: an incremental font subset.
The algorithm outputs:
-
expanded font: an [open-type] font that is not incremental.
The algorithm:
-
Invoke Extend an Incremental Font Subset with font subset. The input target subset definition is a special one which is considered to intersect all entries in the Check entry intersection step. Return the resulting font subset as the expanded font.
4.8. Caching Extended Incremental Fonts
Incremental fonts that have been extended contain all of the state needed to perform any future extension operations according to the procedures in this section. So if an incremental font needs to be stored or cached for future use by a client it is sufficient to store only the font binary produced by the most recent application of the extension algorithm. It is not necessary to retain the initial font or any versions produced by prior extensions.
5. Extensions to the Font Format
An incremental font follows the existing OpenType format, but includes two new tables identified by the 4-byte tags 'IFT ' and 'IFTX'. These new tables are both patch maps. All incremental fonts must contain the 'IFT ' table. The 'IFTX' table is optional. When both tables are present, the mapping of the font as a whole is the union of the mappings of the two tables. The two new tables are used only in this specification and are not being added to the OpenType specification.
Note: splitting the mapping between two distinct tables allows an incremental font to more easily make use of multiple patch types. For example all patches of one type can be specified in the 'IFT ' table, and all patches of a second type in the 'IFTX' table. Those patches can make updates to only one of the mapping tables and avoid making conflicting updates.
5.1. Incremental Font Transfer and Font Compression Formats
It is common when using fonts on the web to compress them with a compression format such as [WOFF] or [WOFF2]. Formats such as these can be used to compress the initial font file used in an incremental font transfer encoding as long as:
-
The bytes of each table are unmodified by the process of encoding then decoding the font via the compression format.
-
Since the incremental font transfer extension algorithm (§ 4 Extending a Font Subset) operates specifically on the uncompressed font file, the compressed font needs to be decoded before attempting to extend it.
For [WOFF2] special care must be taken. If an incremental font will be encoded by WOFF2 for transfer:
-
If the WOFF2 encoding will include a transformed glyf and loca table (WOFF 2.0 § 5.1 Transformed glyf table format) then, the incremental font should not contain § 6.2 Table Keyed patches which modify either the glyf or loca table. The WOFF2 format does not guarantee the specific bytes that result from decoding a transformed glyf and loca table. § 6.3 Glyph Keyed patches may be used in conjunction with a transformed glyf and loca table.
-
The 'IFT ' and 'IFTX' tables can be processed and brotli encoded by a WOFF2 encoder following the standard process defined in WOFF 2.0 § 5 Compressed data format.
5.2. Patch Map Table
A patch map is encoded in one of two formats:
-
Format 1: a limited, but more compact encoding. It encodes a one-to-one mapping from glyph id to patch URIs. It does not support font subset definitions with design space or entries with overlapping subset definitions.
-
Format 2: can encode arbitrary mappings including ones with design space or overlapping subset definitions. However, it is typically less compact than format 1.
Each format defines an algorithm for interpreting bytes encoded with that format to produce the list of entries it represents. The Extend an Incremental Font Subset algorithm invokes the interpretation algorithms and operates on the resulting entry list. The encoded bytes are the source of truth at all times for the patch map. Patch application during subset extension will alter the encoded bytes of the patch map and as a result the entry list derived from the encoded bytes will change. The extension algorithm reinterprets the encoded bytes at the start of every iteration to pick up any changes made in the previous iteration.
5.2.1. Patch Map Table: Format 1
Format 1 Patch Map encoding:
Type | Name | Description |
---|---|---|
uint8 | format | Set to 1, identifies this as format 1. |
uint32 | reserved | Not used, set to 0. |
uint32 | compatibilityId[4] | Unique ID used to identify patches that are compatible with this font (see § 4.1 Patch Invalidations). The encoder chooses this value. The encoder should set it to a random value which has not previously been used while encoding the IFT font. |
uint16 | maxEntryIndex | The largest entry index encoded in this table. |
uint16 | maxGlyphMapEntryIndex | The largest Glyph Map entry index encoded in this table. Must be less than or equal to maxEntryIndex. |
uint24 | glyphCount | Number of glyphs that mappings are provided for. Must match the number of glyphs in the font file. Note: the number of glyphs in the font is encoded in the font file. At the time of writing, this value is listed in the maxp table; however, future font format extensions may use alternate tables to encode the value for number of glyphs. |
Offset32 | glyphMapOffset | Offset to a Glyph Map sub table. Offset is from the start of this table. |
Offset32 | featureMapOffset | Offset to a Feature Map sub table. Offset is from the start of this table. May be null (0). |
uint8 | appliedEntriesBitMap[(maxEntryIndex + 8)/8] | A bit map which tracks which entries have been applied. If bit i is set that indicates the patch for entry i has been applied to this font. Bit 0 is the least significant bit of appliedEntriesBitMap[0], while bit 7 is the most significant bit. Bit 8 is the least significant bit of appliedEntriesBitMap[1], and so on. |
uint16 | uriTemplateLength | Length of the uriTemplate string. |
uint8 | uriTemplate[uriTemplateLength] | A [UTF-8] encoded string. Contains a § 5.2.3 URI Templates which is used to produce URIs associated with each entry. Must be a valid [UTF-8] sequence. |
uint8 | patchFormat | Specifies the format of the patches linked to by uriTemplate. Must be set to one of the format numbers from the § 6.1 Formats Summary table. |
Note: glyphCount is designed to be compatible with the proposed future font format extension to allow for more than 65,535 glyphs.
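As a non-normative illustration, the fixed fields above could be read as in the following sketch; the offsets are derived from the field sizes in the table and the helper is a sketch rather than a validating parser.

# Non-normative sketch of reading the Format 1 patch map header fields described above.
import struct

def parse_format1_header(data: bytes) -> dict:
    fmt, reserved = struct.unpack_from(">BI", data, 0)            # format, reserved
    compatibility_id = struct.unpack_from(">4I", data, 5)         # compatibilityId[4]
    max_entry_index, max_glyph_map_entry_index = struct.unpack_from(">HH", data, 21)
    glyph_count = int.from_bytes(data[25:28], "big")               # uint24
    glyph_map_offset, feature_map_offset = struct.unpack_from(">II", data, 28)
    bitmap_length = (max_entry_index + 8) // 8
    applied_entries_bitmap = data[36:36 + bitmap_length]
    pos = 36 + bitmap_length
    (uri_template_length,) = struct.unpack_from(">H", data, pos)
    uri_template = data[pos + 2:pos + 2 + uri_template_length].decode("utf-8")
    patch_format = data[pos + 2 + uri_template_length]
    return {
        "format": fmt,
        "compatibilityId": compatibility_id,
        "maxEntryIndex": max_entry_index,
        "maxGlyphMapEntryIndex": max_glyph_map_entry_index,
        "glyphCount": glyph_count,
        "glyphMapOffset": glyph_map_offset,
        "featureMapOffset": feature_map_offset,
        "appliedEntriesBitMap": applied_entries_bitmap,
        "uriTemplate": uri_template,
        "patchFormat": patch_format,
    }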
Glyph Map encoding:
A glyph map table associates each glyph index in the font with an entry index.
Type | Name | Description |
---|---|---|
uint16 | firstMappedGlyph | All glyph indices less than firstMappedGlyph are implicitly mapped to entry index 0. |
uint8/uint16 | entryIndex[glyphCount - firstMappedGlyph] | The entry index for glyph i is stored in entryIndex[i - firstMappedGlyph]. Array members are uint8 if maxEntryIndex is less than 256, otherwise they are uint16. |
Feature Map encoding:
A feature map table associates combinations of feature tags and glyphs with an entry index.
Type | Name | Description |
---|---|---|
uint16 | featureCount | Number of featureRecords. |
FeatureRecord | featureRecords[featureCount] | Provides mappings for a specific feature tag. featureRecords are sorted by featureTag in ascending order with any feature tag occurring at most once. For sorting tag values are interpreted as a 4 byte big endian unsigned integer and sorted by the integer value. |
EntryMapRecord | entryMapRecords[variable] | Provides the key (entry index) for each feature mapping. The entryMapRecords array contains as many entries as the sum of the entryMapCount fields in the featureRecords array, with entryMapRecords[0] corresponding to the first entry of featureRecords[0], entryMapRecords[featureRecords[0].entryMapCount] corresponding to the first entry of featureRecords[1], entryMapRecords[featureRecords[0].entryMapCount + featureRecords[1].entryMapCount] corresponding to the first entry of featureRecords[2], and so on. |
FeatureRecord encoding:
Type | Name | Description |
---|---|---|
Tag | featureTag | The feature tag this mapping is for. |
uint8/uint16 | firstNewEntryIndex | uint8 if maxEntryIndex is less than 256, otherwise uint16. The first entry index this record maps to. |
uint8/uint16 | entryMapCount | uint8 if maxEntryIndex is less than 256, otherwise uint16. The number of EntryMapRecords associated with this feature. |
EntryMapRecord encoding:
Type | Name | Description |
---|---|---|
uint8/uint16 | firstEntryIndex | uint8 if maxEntryIndex is less than 256, otherwise uint16. firstEntryIndex and lastEntryIndex specify the set of Glyph Map entries which form the subset definitions for the entries created by this mapping. |
uint8/uint16 | lastEntryIndex | uint8 if maxEntryIndex is less than 256, otherwise uint16. |
An entry map record matches any entry indices that are greater than or equal to firstEntryIndex and less than or equal to lastEntryIndex.
5.2.1.1. Interpreting Format 1
This algorithm is used to convert a format 1 patch map into a list of patch map entries.
Interpret Format 1 Patch Map
The inputs to this algorithm are:
-
patch map: a Format 1 Patch Map encoded patch map.
-
font subset: the font subset which contains patch map.
The algorithm outputs:
-
entry list: a list of patch map entries.
The algorithm:
-
Check that the patch map data is complete and not truncated, has format equal to 1, and is valid according to the requirements in § 5.2.1 Patch Map Table: Format 1 (requirements are marked with a "must"). If it is not, return an error.
-
For each unique entry index in entryIndex:
-
If entry index is 0 then, this is a special entry used to mark glyphs which are already in the initial font. Skip this index and do not build an entry for it.
-
If the entry index is larger than maxGlyphMapEntryIndex this entry is invalid, skip this entry index.
-
If the bit for entry index in appliedEntriesBitMap is set to 1, skip this entry index.
-
Collect the set of glyph indices that map to the entry index.
-
Convert the set of glyph indices to a set of Unicode code points using the code point to glyph mapping in the cmap table of font subset. Ignore any glyph indices that are not mapped by cmap. Multiple code points may map to the same glyph id. All code points associated with a glyph should be included.
-
Convert entry index into a URI by applying uriTemplate following § 5.2.3 URI Templates.
-
If the Unicode code point set is empty then, skip this entry index.
-
Add an entry to entry list with one subset definition which contains only the Unicode code point set and maps to the generated URI, the patch format specified by patchFormat, and compatibilityId.
-
-
If featureMapOffset is not null then, for each FeatureRecord and associated EntryMapRecord in featureRecords and entryMapRecords:
-
Any FeatureRecords whose featureTag is less than or equal to the featureTag of any FeatureRecord which occurred earlier in the list are invalid. All associated EntryMapRecords are skipped. For ordering, tag values are interpreted as a 4 byte big endian unsigned integer and ordered by the integer value.
-
Compute mapped entry index: for the first EntryMapRecord associated with a FeatureRecord this is FeatureRecord::firstNewEntryIndex, for the second it is FeatureRecord::firstNewEntryIndex + 1, and so on. The last will be FeatureRecord::firstNewEntryIndex + entryMapCount - 1.
-
If the computed mapped entry index is less than or equal to maxGlyphMapEntryIndex or larger than maxEntryIndex this EntryMapRecord is invalid, skip it.
-
If EntryMapRecord::firstEntryIndex is greater than EntryMapRecord::lastEntryIndex this EntryMapRecord is invalid, skip it.
-
Convert mapped entry index into a URI by applying uriTemplate following § 5.2.3 URI Templates.
-
If the bit for mapped entry index in appliedEntriesBitMap is set to 1, skip this entry.
-
Construct a set of Unicode code points. For each entry index between EntryMapRecord::firstEntryIndex (inclusive) and EntryMapRecord::lastEntryIndex (inclusive):
-
If entry index is greater than maxGlyphMapEntryIndex, then this EntryMapRecord is invalid; skip it.
-
Add the set of Unicode code points associated with entry index that was computed in step 2 to the set. If the entry index was skipped in step 2 because it was 0 or its bit in appliedEntriesBitMap was set, compute the set of associated code points as if it had not been skipped.
-
-
If the constructed set of Unicode code points is empty, then this EntryMapRecord is invalid; skip it.
-
Add an entry to entry list which maps to the generated URI, the patch format specified by patchFormat, and compatibilityId; or if there is an existing entry in entry list which has the same patch URI as the generated URI then instead modify the existing entry. Add the constructed set of Unicode code points and featureTag to the new or existing entry’s single subset definition.
-
-
Return entry list.
Note: while an encoding is not required to include entries for all entry indices in [0, maxEntryIndex], it is recommended that it do so for maximum compactness.
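A non-normative sketch of the glyph-map portion of this interpretation (step 2) is shown below; glyph_to_entry, applied_entry_indices and cmap_reverse are hypothetical pre-parsed views of the font data, not structures defined by this specification.

# Non-normative sketch of step 2 of "Interpret Format 1 Patch Map": collect the glyphs
# mapped to each entry index and convert them to code points via the cmap table.
from collections import defaultdict

def format1_entry_code_points(glyph_to_entry, max_glyph_map_entry_index,
                              applied_entry_indices, cmap_reverse):
    glyphs_by_entry = defaultdict(set)
    for glyph_id, entry_index in glyph_to_entry.items():
        glyphs_by_entry[entry_index].add(glyph_id)

    code_points_by_entry = {}
    for entry_index, glyphs in glyphs_by_entry.items():
        if entry_index == 0:                          # glyphs already in the initial font
            continue
        if entry_index > max_glyph_map_entry_index:   # invalid entry index
            continue
        if entry_index in applied_entry_indices:      # patch already applied
            continue
        code_points = set()
        for glyph_id in glyphs:
            # cmap_reverse maps a glyph id to the set of code points that reference it.
            code_points |= cmap_reverse.get(glyph_id, set())
        if code_points:                               # entries with no code points are skipped
            code_points_by_entry[entry_index] = code_points
    return code_points_by_entry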
5.2.1.2. Remove Entries from Format 1
This algorithm is used to remove entries from a format 1 patch map. This removal modifies the bytes of the patch map but does not change the number of bytes.
Remove Entries from Format 1 Patch Map
The inputs to this algorithm are:
-
patch map: a Format 1 Patch Map encoded patch map. May be modified by this procedure.
-
patch URI: URI for a patch which identifies the entries to be removed.
The algorithm:
-
Check that the patch map has format equal to 1 and is valid according to the requirements in § 5.2.1 Patch Map Table: Format 1. If it is not, return an error.
-
For each unique entry index in entryIndex of patch map:
-
If the bit for entry index in appliedEntriesBitMap is set to 1, skip this entry index.
-
Convert entry index into a URI by applying uriTemplate following § 5.2.3 URI Templates.
-
If the generated URI is equal to patch URI then set the bit for entry index in appliedEntriesBitMap to 1.
-
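A non-normative sketch of the bit manipulation this implies is shown below; the bitmap is assumed to be held in a mutable bytearray.

# Non-normative sketch: mark an entry as applied by setting its bit in appliedEntriesBitMap.
# Bit 0 is the least significant bit of byte 0, bit 8 the least significant bit of byte 1, etc.
def mark_entry_applied(applied_entries_bitmap: bytearray, entry_index: int) -> None:
    applied_entries_bitmap[entry_index // 8] |= 1 << (entry_index % 8)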
5.2.2. Patch Map Table: Format 2
Format 2 Patch Map encoding:
Type | Name | Description |
---|---|---|
uint8 | format | Set to 2, identifies this as format 2. |
uint32 | reserved | Not used, set to 0. |
uint32 | compatibilityId[4] | Unique ID used to identify patches that are compatible with this font (see § 4.1 Patch Invalidations). The encoder chooses this value. The encoder should set it to a random value which has not previously been used while encoding the IFT font. |
uint8 | defaultPatchFormat | Specifies the format of the patches linked to by uriTemplate (unless overridden by an entry). Must be set to one of the format numbers from the § 6.1 Formats Summary table. |
uint24 | entryCount | Number of entries encoded in this table. |
Offset32 | entries | Offset to a Mapping Entries sub table. Offset is from the start of this table. |
Offset32 | entryIdStringData | Offset to a block of data containing the concatenation of all of the entry ID strings. May be null (0). Offset is from the start of this table. |
uint16 | uriTemplateLength | Length of the uriTemplate string. |
uint8 | uriTemplate[uriTemplateLength] | A [UTF-8] encoded string. Contains a § 5.2.3 URI Templates which is used to produce URIs associated with each entry. Must be a valid [UTF-8] sequence. |
Mapping Entries encoding:
Type | Name | Description |
---|---|---|
uint8 | entries[variable] | Byte array containing the encoded bytes of entryCount Mapping Entries. Each entry has a variable length, which is determined by following Interpret Format 2 Patch Map Entry. |
Mapping Entry encoding:
Type | Name | Description |
---|---|---|
uint8 | formatFlags | A bit field. Bit 0 (least significant bit) through bit 5 indicate the presence of optional fields. If bit 6 is set this entry is ignored. Bit 7 is reserved for future use and set to 0. |
uint8 | featureCount | Number of feature tags in the featureTags list. Only present if formatFlags bit 0 is set. |
Tag | featureTags[featureCount] | List of feature tags in the entry’s font subset definition. Only present if formatFlags bit 0 is set. |
uint16 | designSpaceCount | Number of elements in the design space list. Only present if formatFlags bit 0 is set. |
Design Space Segment | designSpaceSegments[designSpaceCount] | List of design space segments in the entry’s font subset definition. Only present if formatFlags bit 0 is set. |
uint8 | copyModeAndCount | The most significant bit is used to indicate the copy mode: if the bit is set the copy mode is "append", otherwise it is "union". The remaining 7 bits are interpreted as an unsigned integer and represent the number of entries in the copyIndices list. This field is only present if formatFlags bit 1 is set. |
uint24 | copyIndices[copyModeAndCount] | List of indices from the entries array whose font subset definition should be copied into this entry. May only reference entries that occurred prior to this Mapping Entry in entries. Only present if formatFlags bit 1 is set. |
int24 | entryIdDelta | Signed delta which is used to calculate the id for this entry. The id for this entry is the entry id of the previous Mapping Entry + 1 + entryIdDelta. Only present if formatFlags bit 2 is set and entryIdStringData is null (0). If not present delta is assumed to be 0. |
uint16 | entryIdStringLength | The number of bytes that the id string for this entry occupies in the entryIdStringData data block. Only present if formatFlags bit 2 is set and entryIdStringData is not null (0). If not present the length is assumed to be 0. |
uint8 | patchFormat | Specifies the format of the patch linked to by this entry. Uses the ID numbers from the § 6.1 Formats Summary table. Overrides defaultPatchFormat. Only present if formatFlags bit 3 is set. |
uint16/uint24 | bias | Bias value which is added to all code point values in the code points set. If format bit 4 is 0 and bit 5 is 1, then this is present and a uint16. If format bit 4 is 1 and bit 5 is 1, then this is present and a uint24. Otherwise it is not present. |
uint8 | codePoints[variable] | Set of code points for this mapping. Encoded as a Sparse Bit Set. Only present if formatFlags bit 4 and/or 5 is set. The length is determined by following the decoding procedures in § 5.2.2.3 Sparse Bit Set. |
If an encoder is producing patches that will be stored on a file system and then served, it is recommended that only numeric entry IDs be used (via entryIdDelta), as these will generally produce the smallest encoding of the format 2 patch map. String IDs are useful in cases where patches are not stored in advance and the ID strings can then be used to encode information about the patch being requested.
Design Space Segment encoding:
Type | Name | Description |
---|---|---|
Tag | tag | Axis tag value. |
Fixed | start | Start (inclusive) of the segment. This value uses the user axis scale: OpenType Specification § otvaroverview#coordinate-scales-and-normalization. |
Fixed | end | End (inclusive) of the segment. Must be greater than or equal to start. This value uses the user axis scale: OpenType Specification § otvaroverview#coordinate-scales-and-normalization. |
5.2.2.1. Interpreting Format 2
This algorithm is used to convert a format 2 patch map into a list of patch map entries.
Interpret Format 2 Patch Map
The inputs to this algorithm are:
-
patch map: a Format 2 Patch Map encoded patch map.
The algorithm outputs:
-
entry list: a list of patch map entries.
The algorithm:
-
Check that the patch map has format equal to 2 and is valid according to the requirements in § 5.2.2 Patch Map Table: Format 2 (requirements are marked with a "must"). If it is not, return an error.
-
If the entryIdStringData offset is 0 then initialize last entry id to 0. Otherwise initialize it to an empty byte string. Set current byte to 0, and current id string byte to 0.
-
Invoke Interpret Format 2 Patch Map Entry, entryCount times. For each invocation:
-
pass in the bytes from patch map starting from entries[current byte] to the end of patch map, the bytes from patch map starting from entryIdStringData[current id string byte] to the end of patch map if entryIdStringData is non zero, last entry id, defaultPatchFormat, and uriTemplate.
-
Set last entry id to the returned entry id.
-
Add the returned consumed byte count to current byte.
-
Add the returned consumed id string byte count to current id string byte.
-
If the returned value of ignored is false, then set the compatibility ID of the returned entry to compatibilityId and add the entry to entry list.
-
-
Return entry list.
Interpret Format 2 Patch Map Entry
The inputs to this algorithm are:
-
entry bytes: a byte array that contains an encoded Mapping Entry.
-
id string bytes (optional): a byte array that contains entry ID strings.
-
last entry id: the entry id of the entry preceding this one.
-
default patch format: the default patch format if one isn’t specified.
-
uri template: the URI template used to locate patches.
The algorithm outputs:
-
entry id: the numeric or string id of this entry.
-
entry: a single entry.
-
consumed bytes: the number of bytes used to encode the entry.
-
consumed id string bytes: the number of bytes used to encode the entry id string.
-
ignored: if true, then this entry should be ignored.
The algorithm:
-
For all of the following steps, whenever data is loaded from entry bytes, increment consumed bytes by the number of bytes read.
-
If id string bytes is not present then, set entry id = last entry id + 1. Otherwise set entry id = last entry id.
-
Set the patch format of entry to default patch format.
-
Add a single font subset definition to entry with all sets initialized to be empty.
-
Read formatFlags from entry bytes.
-
If formatFlags bit 0 is set, then the feature tag and design space lists are present:
-
Read the feature tag list specified by featureCount and featureTags from entry bytes and add the loaded tags to the first font subset definition in entry.
-
Read the design space segment list specified by designSpaceCount and designSpaceSegments from entry bytes and add the design space segments to the first font subset definition in entry. Each segment defines an interval from start to end inclusive for the axis identified by tag. If any segment has a start which is greater than end, then this encoding is invalid; return an error.
-
-
If formatFlags bit 1 is set, then the copy indices list is present:
-
Read the copy indices list specified by copyModeAndCount and copyIndices from entry bytes.
-
The copy indices refer to previously loaded entries: 0 is the first Mapping Entry in entries, 1 the second, and so on. For each index in copyIndices locate the previously loaded entry with a matching index. If the most significant bit of copyModeAndCount is set then append all font subset definitions from the previous entry to entry. Otherwise union all code points, feature tags, and design space segments from all font subset definitions in the previous entry into the first font subset definition in entry. If any value in copyIndices is greater than or equal to the index of this entry, then this encoding is invalid; return an error.
-
-
If formatFlags bit 2 is set, then an id delta or id string length is present:
-
If id string bytes is not present then, read the id delta specified by entryIdDelta from entry bytes and add the delta to entry id.
-
Otherwise if id string bytes is present then, read entryIdStringLength bytes from id string bytes and set entry id to the result.
-
-
If formatFlags bit 3 is set, then a patch format is present. Read the format specified by patchFormat from entry bytes and set the patch format of entry to the read value. If patchFormat is not one of the values in § 6.1 Formats Summary, then this encoding is invalid; return an error.
-
If one or both of formatFlags bit 4 and bit 5 are set, then a code point list is present:
-
If formatFlags bit 4 is 0 and bit 5 is 1, then read the 2 byte (uint16) bias value from entry bytes.
-
If formatFlags bit 4 is 1 and bit 5 is 1, then read the 3 byte (uint24) bias value from entry bytes.
-
Otherwise the bias is 0.
-
Read the sparse bit set codePoints from entry bytes with bias following § 5.2.2.3 Sparse Bit Set. Add the resulting code point set to the first font subset definition in entry. If the sparse bit set decoding failed, then this encoding is invalid; return an error.
-
-
If formatFlags bit 6 is set, then set ignored to true. Otherwise ignored is false.
-
If entry id is negative or greater than 4,294,967,295, then this encoding is invalid; return an error.
-
Convert entry id into a URI by applying uri template following § 5.2.3 URI Templates. Set the patch uri of entry to the generated URI.
-
Return entry id, entry, consumed bytes, entryIdStringLength as consumed id string bytes, and ignored.
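The following non-normative sketch illustrates how the formatFlags bits of a Mapping Entry drive which fields are read. The reader and id_string_reader cursors, and the Entry, SubsetDefinition, copy_previous_entry and decode_sparse_bit_set helpers, are illustrative assumptions rather than part of this specification; byte counting, the entry id range check, and URI generation are omitted.

# Non-normative sketch of the formatFlags handling in "Interpret Format 2 Patch Map Entry".
def interpret_format2_entry(reader, id_string_reader, last_entry_id, default_patch_format):
    entry = Entry(patch_format=default_patch_format, subset_definition=SubsetDefinition())
    entry_id = last_entry_id if id_string_reader else last_entry_id + 1   # step 2

    flags = reader.uint8()
    if flags & 0x01:                              # feature tags and design space present
        for _ in range(reader.uint8()):           # featureCount
            entry.subset_definition.feature_tags.add(reader.tag())
        for _ in range(reader.uint16()):          # designSpaceCount
            tag, start, end = reader.tag(), reader.fixed(), reader.fixed()
            entry.subset_definition.design_space.setdefault(tag, []).append((start, end))
    if flags & 0x02:                              # copy indices present
        mode_and_count = reader.uint8()
        append_mode = bool(mode_and_count & 0x80)
        for _ in range(mode_and_count & 0x7F):
            copy_previous_entry(entry, reader.uint24(), append=append_mode)
    if flags & 0x04:                              # entry id delta or id string length present
        if id_string_reader:
            entry_id = id_string_reader.read_bytes(reader.uint16())
        else:
            entry_id += reader.int24()
    if flags & 0x08:                              # explicit patch format
        entry.patch_format = reader.uint8()
    if flags & 0x30:                              # code point set present
        if (flags & 0x30) == 0x20:                # bit 5 only: uint16 bias
            bias = reader.uint16()
        elif (flags & 0x30) == 0x30:              # bits 4 and 5: uint24 bias
            bias = reader.uint24()
        else:                                     # bit 4 only: no bias field
            bias = 0
        entry.subset_definition.code_points |= decode_sparse_bit_set(reader, bias)
    ignored = bool(flags & 0x40)                  # bit 6: entry is ignored
    return entry_id, entry, ignored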
5.2.2.2. Remove Entries from Format 2
This algorithm is used to remove entries from a format 2 patch map. This removal modifies the bytes of the patch map but does not change the number of bytes.
Remove Entries from Format 2 Patch Map
The inputs to this algorithm are:
-
patch map: a Format 2 Patch Map encoded patch map. May be modified by this procedure.
-
patch URI: URI for a patch which identifies the entries to be removed.
This algorithm is a modified version of Interpret Format 2 Patch Map, invoke Interpret Format 2 Patch Map with patch map as an input but with the following changes:
-
After step 11 of Interpret Format 2 Patch Map Entry: compare the URI generated in step 11 to patch URI. If they are equal, then set bit 6 of formatFlags to 1.
-
The return value of Interpret Format 2 Patch Map is not used.
5.2.2.3. Sparse Bit Set
A sparse bit set is a data structure which compactly stores a set of distinct unsigned integers. The set is represented as a tree where each node has a fixed number of children that recursively sub-divides an interval into equal partitions. A tree of height H with branching factor B can store set membership for integers in the interval [0, B^H - 1] inclusive. The tree is encoded into an array of bytes for transport.
In the context of a Format 2 Patch Map a sparse bit set is used to store a set of Unicode code points. As such, integer values stored in a sparse bit set are restricted to Unicode code point values in the range 0 to 0x10FFFF.
Sparse Bit Set encoding:
Type | Name | Description |
---|---|---|
uint8 | header | Bits 0 (least significant) and 1 encode the tree's branch factor B via Branch Factor Encoding. Bits 2 through 6 are a 5-bit unsigned integer which encodes the value of H. Bit 7 is set to 0 and reserved for future use. |
uint8 | treeData[variable] | Binary encoding of the tree. |
The exact length of treeData is initially unknown; its length is determined by executing the decoding algorithm. When using branch factors of 2 or 4 the last node may only partially consume the bits in a byte. In that case all remaining bits are unused and ignored.
Branch Factor Encoding:
Bit 1 | Bit 0 | Branch Factor (B) | Maximum Height (H) |
---|---|---|---|
0 | 0 | 2 | 31 |
0 | 1 | 4 | 16 |
1 | 0 | 8 | 11 |
1 | 1 | 32 | 7 |
Sparse bit sets that have an encoded height (H) which is larger than the maximum height for the encoded branch factor (B) in the above table are invalid.
Decoding sparse bit set treeData
The inputs to this algorithm are:
-
treeData: array of bytes to be decoded.
-
bias: unsigned integer value added to each decoded set member.
The algorithm outputs:
-
S: a set of unsigned integers.
The algorithm, using a FIFO (first in first out) queue Q:
-
Remove the first byte from treeData. This is the header byte. Determine H the tree height and B the branch factor following Sparse Bit Set.
-
If H is greater than the "Maximum Height" in the Branch Factor Encoding table in the row for B then, the encoding is invalid, return an error.
-
If H is equal to 0, then this is an empty set and no further bytes of treeData are consumed. Return an empty set.
-
Insert the tuple (0, 1) into Q.
-
Initialize S to an empty set.
-
treeData is interpreted as a string of bits where the least significant bit of treeData[0] is the first bit in the string, the most significant bit of treeData[0] is the 8th bit, and so on.
-
If in the following steps a value is added to S which is larger than the maximum Unicode code point value (0x10FFFF), then ignore the value and do not add it to S.
-
If Q is empty return S.
-
Extract the next tuple t from Q. The first value in t is start and the second value is depth.
-
Remove the next B bits from the treeData bit string. The first removed bit is v1, the second is v2, and so on until the last removed bit which is vB. If prior to removal there were fewer than B bits left in treeData, then treeData is malformed, return an error.
-
If all bits v1 through vB are 0, then insert all integers in the interval [start + bias, start + bias + B^(H - depth + 1)) into S. Go to step 8.
-
For each vi which is equal to 1 in v1 through vB: If depth is equal to H add integer start + bias + i - 1 to S. Otherwise, insert the tuple (start + (i - 1) * B^(H - depth), depth + 1) into Q.
-
Go to step 8.
Note: when encoding sparse bit sets the encoder can use any of the possible branching factors, but it is recommended to use 4 as that has been shown to give the smallest encodings for most Unicode code point sets typically encountered.
bit string:
|-- header --|- lvl 0 -|---- level 1 ----|------- level 2 -----------|
|  B=8  H=3  |   n0    |   n1       n2   |   n3       n4       n5    |
[ 01 11000 0  10000100  10001000 10000000 00100000 01000000 00010000 ]
Which then becomes the byte string:
[ 0b00001110, 0b00100001, 0b00010001, 0b00000001, 0b00000100, 0b00000010, 0b00001000 ]
bit string:
|-- header -- |
|  B=2  H=0   |
[ 00 00000 0  ]
Which then becomes the byte string:
[ 0b00000000 ]
bit string:
|-- header --|  l0  |- lvl 1 -|  l2  |
|  B=4  H=3  |  n0  |  n1  n2 |  n3  |
[ 10 11000 0   1100   0000 1000  1100 ]
byte string:
[ 0b00001101, 0b00000011, 0b00110001 ]
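The decoding algorithm above can be expressed compactly in code. The following Python sketch is non-normative and written for this document (the function name and structure are illustrative assumptions, not part of the specification). It decodes a sparse bit set byte string into a set of integers and, following the algorithm above, reproduces the first example, which decodes to the set {2, 33, 323}.

from collections import deque

def decode_sparse_bit_set(data: bytes, bias: int = 0) -> set[int]:
    # Header byte: bits 0-1 select the branch factor, bits 2-6 encode the height H.
    BRANCH = {0b00: 2, 0b01: 4, 0b10: 8, 0b11: 32}
    MAX_HEIGHT = {2: 31, 4: 16, 8: 11, 32: 7}
    MAX_CODE_POINT = 0x10FFFF

    header = data[0]
    b = BRANCH[header & 0b11]
    h = (header >> 2) & 0b11111
    if h > MAX_HEIGHT[b]:
        raise ValueError("encoded height exceeds the maximum for this branch factor")
    if h == 0:
        return set()

    # treeData as a bit string, least significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in data[1:] for i in range(8)]

    result: set[int] = set()
    queue = deque([(0, 1)])  # FIFO of (start, depth) tuples
    pos = 0
    while queue:
        start, depth = queue.popleft()
        if pos + b > len(bits):
            raise ValueError("malformed treeData: not enough bits")
        node = bits[pos:pos + b]
        pos += b
        if not any(node):
            # All-zero node: every value in the node's whole interval is a member.
            for v in range(start, start + b ** (h - depth + 1)):
                if v + bias <= MAX_CODE_POINT:
                    result.add(v + bias)
            continue
        for i, bit in enumerate(node, start=1):
            if not bit:
                continue
            if depth == h:
                if start + bias + i - 1 <= MAX_CODE_POINT:
                    result.add(start + bias + i - 1)
            else:
                queue.append((start + (i - 1) * b ** (h - depth), depth + 1))
    return result

# First example above (B=8, H=3): decodes to {2, 33, 323}.
example = bytes([0b00001110, 0b00100001, 0b00010001, 0b00000001,
                 0b00000100, 0b00000010, 0b00001000])
assert decode_sparse_bit_set(example) == {2, 33, 323}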
5.2.3. URI Templates
URI templates [rfc6570] are used to convert numeric or string IDs into URIs where patch files are located. A string ID is a sequence of bytes. Several variables are defined which are used to produce the expansion of the template:
Variable | Value |
---|---|
id | The input id encoded as a base32hex string (using the digits 0-9, A-V) with padding omitted. When the id is an unsigned integer it must first be converted to a big endian 32 bit unsigned integer, but then all leading bytes that are equal to 0 are removed before encoding. (For example, when the integer is less than 256 only one byte is encoded.) When the input id is a string the raw bytes are encoded as base32hex. |
d1 | The last character of the string in the id variable. If the id variable is empty, then the value is the character _ (U+005F). |
d2 | The second to last character of the string in the id variable. If the id variable has fewer than 2 characters, then the value is the character _ (U+005F). |
d3 | The third to last character of the string in the id variable. If the id variable has fewer than 3 characters, then the value is the character _ (U+005F). |
d4 | The fourth to last character of the string in the id variable. If the id variable has fewer than 4 characters, then the value is the character _ (U+005F). |
id64 | The input id encoded as a base64url string (using the digits A-Z, a-z, 0-9, - (minus) and _ (underline)) with padding included. Because the padding character is '=', it must be URL-encoded as "%3D". When the id is an unsigned integer it must first be converted to a big endian 32 bit unsigned integer, but then all leading bytes that are equal to 0 are removed before encoding. (For example, when the integer is less than 256 only one byte is encoded.) When the input id is a string its raw bytes are encoded as base64url. |
Some example inputs and the corresponding expansions:
Template | Input ID | Expansion |
---|---|---|
//foo.bar/{id} | 123 | //foo.bar/FC |
//foo.bar{/d1,d2,id} | 478 | //foo.bar/0/F/07F0 |
//foo.bar{/d1,d2,d3,id} | 123 | //foo.bar/C/F/_/FC |
//foo.bar{/d1,d2,d3,id} | baz | //foo.bar/K/N/G/C9GNK |
//foo.bar{/d1,d2,d3,id} | z | //foo.bar/8/F/_/F8 |
//foo.bar{/d1,d2,d3,id} | àbc | //foo.bar/O/O/4/OEG64OO |
//foo.bar{/id64} | 14,000,000 | //foo.bar/1Z-A |
//foo.bar{/id64} | 17,000,000 | //foo.bar/AQNmQA%3D%3D |
//foo.bar{/id64} | àbc | //foo.bar/w6BiYw%3D%3D |
//foo.bar/{+id64} | àbcd | //foo.bar/w6BiY2Q= |
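The derivation of the template variables can be sketched in a few lines. This non-normative Python example (the helper name is an assumption) computes the id, d1 through d4, and id64 values for an input ID; full RFC 6570 template expansion, including percent-encoding of the '=' padding character as %3D, is assumed to be handled by a separate template processing step and is not shown. The assertions cross-check two rows of the expansion table above.

import base64

def ift_template_variables(patch_id) -> dict[str, str]:
    # patch_id is either an unsigned integer or a byte string id.
    if isinstance(patch_id, int):
        # Big endian 32-bit integer with leading zero bytes removed.
        raw = patch_id.to_bytes(4, "big").lstrip(b"\x00") or b"\x00"  # keeping one byte for id 0 is an assumption
    else:
        raw = bytes(patch_id)

    id32 = base64.b32hexencode(raw).decode("ascii").rstrip("=")  # base32hex, padding omitted (Python 3.10+)
    id64 = base64.urlsafe_b64encode(raw).decode("ascii")         # base64url, padding included

    values = {"id": id32, "id64": id64}
    for n in (1, 2, 3, 4):
        values[f"d{n}"] = id32[-n] if len(id32) >= n else "_"
    return values

# Cross-check against the example table above.
v = ift_template_variables(123)
assert (v["id"], v["d1"], v["d2"], v["d3"]) == ("FC", "C", "F", "_")
v = ift_template_variables("àbc".encode("utf-8"))
assert v["id"] == "OEG64OO" and v["id64"] == "w6BiYw=="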
6. Font Patch Formats
In incremental font transfer font subsets are extended by applying patches. This specification defines two patch formats, each appropriate to its own set of augmentation scenarios. A single encoding can make use of more than one patch format.
6.1. Formats Summary
The following patch formats are defined by this specification:
-
§ 6.2 Table Keyed: a collection of brotli encoded binary diffs that use tables from a font subset as bases.
-
§ 6.3 Glyph Keyed: a collection of opaque binary blobs, each associated with a glyph id and table.
More detailed descriptions of each format can be found in the following sections.
The following format numbers are used to identify the patch format and invalidation mode in the § 5.2 Patch Map Table:
Format Number | Name | Invalidation |
---|---|---|
1 | § 6.2 Table Keyed | Full Invalidation |
2 | § 6.2 Table Keyed | Partial Invalidation |
3 | § 6.3 Glyph Keyed | No Invalidation |
6.2. Table Keyed
A table keyed patch contains a collection of patches which are applied to the individual font tables in the input font file. Each table patch is encoded with brotli compression using the corresponding table from the input font file as a shared LZ77 dictionary. A table keyed encoded patch consists of a short header followed by one or more brotli encoded patches. In addition to patching tables, patches may also replace (existing table data is not used) or remove tables in a font subset.
Table keyed patch encoding:
Type | Name | Description |
---|---|---|
Tag | format | Identifies the format as table keyed, must be set to 'iftk' |
uint32 | reserved | Reserved for future use, set to 0. |
uint32 | compatibilityId[4] | The id of the font subset which this patch can be applied to. See § 4.1 Patch Invalidations. |
uint16 | patchesCount | The number of TablePatch objects in the patch. Note that the patches offset array contains patchesCount + 1 entries. |
Offset32 | patches[patchesCount+1] | Each entry is an offset from the start of this table to a TablePatch. Offsets must be sorted in ascending order. |
The difference between two consecutive offsets in the patches array gives the size of that TablePatch.
TablePatch encoding:
Type | Name | Description |
---|---|---|
Tag | tag | The tag that identifies the font table this patch applies to. |
uint8 | flags | Bit-field. If bit 0 (least significant bit) is set this patch replaces the existing table. If bit 1 is set this table is removed. |
uint32 | maxUncompressedLength | The maximum uncompressed length of brotliStream. |
uint8 | brotliStream[variable] | Brotli encoded byte stream. |
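As a non-normative illustration of the layout above, the following Python sketch (field offsets are derived from the two tables in this section; the function name is an assumption) parses a table keyed patch header and uses the patchesCount + 1 offsets to slice out each TablePatch.

import struct

def parse_table_keyed_patch(data: bytes):
    # Header layout per the "Table keyed patch encoding" table above (big endian).
    fmt_tag, reserved = struct.unpack_from(">4sI", data, 0)
    if fmt_tag != b"iftk":
        raise ValueError("not a table keyed patch")
    compat_id = struct.unpack_from(">4I", data, 8)          # compatibilityId[4]
    (patches_count,) = struct.unpack_from(">H", data, 24)

    # patchesCount + 1 offsets; consecutive offsets delimit each TablePatch.
    offsets = struct.unpack_from(f">{patches_count + 1}I", data, 26)

    table_patches = []
    for start, end in zip(offsets, offsets[1:]):
        tag, flags, max_len = struct.unpack_from(">4sBI", data, start)
        brotli_stream = data[start + 9:end]   # tag(4) + flags(1) + maxUncompressedLength(4)
        table_patches.append((tag.decode("ascii"), flags, max_len, brotli_stream))
    return compat_id, table_patches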
6.2.1. Applying Table Keyed Patches
This patch application algorithm is used to apply a table keyed patch to extend a font subset to cover additional code points, features, and/or design-variation space.
Apply table keyed patch
The inputs to this algorithm are:
-
base font subset: a font subset which is to be extended.
-
patch: a table keyed patch to be applied to base font subset.
-
compatibility id: The ID number from the 'IFT ' or 'IFTX' table of base font subset which listed this patch.
The algorithm outputs:
-
extended font subset: a font subset that has been extended by the patch.
The algorithm:
-
Initialize extended font subset to be an empty font with no tables.
-
Check that the patch is valid according to the requirements in § 6.2 Table Keyed (requirements are marked with a "must") and that all TablePatches are contained within patch. Otherwise, return an error.
-
Check that the compatibilityId field in patch is equal to compatibility id. If there is no match, or base font subset does not have either an 'IFT ' or 'IFTX' table, then patch application has failed, return an error.
-
In the following steps, adding a table to extended font subset consists of adding the table’s data to the font and inserting a new entry into the table directory according to the requirements of the OpenType specification. That entry includes a checksum for the table data. When an existing table is copied unmodified, the client can re-use the checksum from the entry in the source font. Otherwise a new checksum will need to be computed.
-
For each entry in patches, with index i:
-
Find the TablePatch associated with index i. The object starts at the offset patches[i] (inclusive) and ends at the offset patches[i+1] (exclusive). Both offsets are relative to the start of the patch.
-
If an entry in patches was previously applied that has the same tag as this entry, then ignore this entry and continue the iteration to the next one. Entries are processed in the same order as they are listed in the patches array.
-
If bit 1 of flags is set, then do not copy or add a table to extended font subset identified by tag. Continue to the next entry.
-
If bit 0 (least significant bit) of flags is set, then decode brotliStream following Brotli Compressed Data Format § section-10. No shared dictionary is used. If the decoded data is larger than maxUncompressedLength return an error. If there is any data in brotliStream which was not used by the decoding process return an error. Add a table to extended font subset identified by tag with its contents set to the decoded brotliStream. Continue to the next entry.
-
Otherwise, decode brotliStream following Brotli Compressed Data Format § section-10 and using the table identified by tag in base font subset as a shared LZ77 dictionary. If no such table exists return an error. If the decoded data is larger than maxUncompressedLength return an error. If there is any data in brotliStream which was not used by the decoding process return an error. Add a table to extended font subset identified by tag with its contents set to the decoded brotliStream.
-
-
For each table in base font subset which has a tag that was not found in any of the entries processed in step 5, add a copy of that table to extended font subset.
6.3. Glyph Keyed
A glyph keyed patch contains a collection of data chunks that are each associated with a glyph index and a font table. The encoded data replaces any existing data for that glyph index in the referenced table. Glyph keyed patches can encode data for glyf/loca, gvar, CFF, and CFF2 tables.
Glyph keyed patch encoding:
Type | Name | Description |
---|---|---|
Tag | format | Identifies the format as glyph keyed, must be set to 'ifgk' |
uint32 | reserved | Reserved for future use, set to 0. |
uint8 | flags | Bit-field. If bit 0 (least significant bit) is set then glyphIds uses uint24’s, otherwise it uses uint16’s. |
uint32 | compatibilityId[4] | The compatibility id of the font subset which this patch can be applied to. See § 4.1 Patch Invalidations. |
uint32 | maxUncompressedLength | The maximum uncompressed length of brotliStream. |
uint8 | brotliStream[variable] | Brotli encoded GlyphPatches table. |
GlyphPatches encoding:
Type | Name | Description |
---|---|---|
uint32 | glyphCount | The number of glyphs encoded in the patch. |
uint8 | tableCount | The number of tables the patch has data for. |
uint16/uint24 | glyphIds[glyphCount] | An array of glyph indices included in the patch. Elements are uint24’s if bit 0 (least significant bit) of flags is set, otherwise elements are uint16’s. Must be in ascending sorted order and must not contain any duplicate values. |
Tag | tables[tableCount] | An array of tables (by tag) included in the patch. Must be in ascending sorted order and must not contain any duplicate values. For sorting tag values are interpreted as a 4 byte big endian unsigned integer and sorted by the integer value. |
Offset32 | glyphDataOffsets[glyphCount * tableCount + 1] | An array of offsets to glyph data for each table. The first glyphCount offsets correspond to tables[0], the next glyphCount offsets (if present) correspond to tables[1], and so on. All offsets are from the start of the GlyphPatches table. Offsets must be sorted in ascending order. |
uint8 | glyphData[variable] | The actual glyph data picked out by the offsets. |
The difference between two consecutive offsets in the glyphDataOffsets array gives the size of that glyph data.
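To make the offset arithmetic concrete, this non-normative Python sketch (names are assumptions) locates the replacement data for a single glyph in a decoded GlyphPatches table, using the glyphDataOffsets indexing described in § 6.3.1 Applying Glyph Keyed Patches below.

def glyph_patch_data(glyph_ids, glyph_data_offsets, glyph_data, table_index, glyph_id):
    """Return the replacement data for glyph_id in tables[table_index], or None.

    glyph_ids:          the GlyphPatches glyphIds array (ascending, no duplicates)
    glyph_data_offsets: the glyphCount * tableCount + 1 offsets, relative to the
                        start of the GlyphPatches table
    glyph_data:         the full decoded GlyphPatches table bytes (the offsets
                        index into this buffer)
    """
    glyph_count = len(glyph_ids)
    try:
        j = glyph_ids.index(glyph_id)  # a binary search also works since glyphIds is sorted
    except ValueError:
        return None  # this patch carries no data for the glyph
    start = glyph_data_offsets[table_index * glyph_count + j]
    end = glyph_data_offsets[table_index * glyph_count + j + 1]
    return glyph_data[start:end]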
6.3.1. Applying Glyph Keyed Patches
This patch application algorithm is used to apply a glyph keyed patch to extend a font subset to cover additional code points, features, and/or design-variation space.
Apply glyph keyed patch
The inputs to this algorithm are:
-
base font subset: a font subset which is to be extended.
-
patch: a glyph keyed patch to be applied to base font subset.
-
patch uri: the URI where the patch data is located.
-
compatibility id: The compatibility ID from the 'IFT ' or 'IFTX' table of base font subset which listed this patch.
The algorithm outputs:
-
extended font subset: a font subset that has been extended by the patch.
The algorithm:
-
Check that the patch is valid according to the requirements in § 6.3 Glyph Keyed (requirements are marked with a "must"). Otherwise, return an error.
-
Check that the compatibilityId field in patch is equal to compatibility id. If there is no match, or base font subset does not have either an 'IFT ' or 'IFTX' table, then patch application has failed, return an error.
-
Decode the brotli encoded data in brotliStream following Brotli Compressed Data Format § section-10. The decoded data is a GlyphPatches table. If the decoded data is larger than maxUncompressedLength, return an error.
-
For each font table listed in tables, with index i:
-
Using the corresponding table in base font subset, synthesize a new table where the data for each glyph is replaced with the data corresponding to that glyph index from glyphData if present, otherwise copied from the corresponding table in base font subset for that glyph index.
-
The patch glyph data for a glyph index is located by finding glyphIds[j] equal to the glyph index. The offset to the associated glyph data is glyphDataOffsets[i * glyphCount + j]. The length of the associated glyph data is glyphDataOffsets[i * glyphCount + j + 1] minus glyphDataOffsets[i * glyphCount + j].
-
The specific process for synthesizing the new table depends on the specified format of the table. Any non-glyph associated data should be copied from the table in base font subset. Tables of the type glyf, gvar, CFF, or CFF2 are supported. Entries for tables of any other types must be ignored. When updating glyf the loca table must be updated as well. No other tables in the font can be modified as a result of this step. Notably this means that a patch cannot add glyphs with indices beyond the numGlyphs specified in maxp.
-
If base font subset does not have a matching table, return an error.
-
Insert the synthesized table into extended font subset.
-
-
Locate the § 5.2 Patch Map Table which has the same compatibilityId as compatibility id. If it is a format 1 patch map then, invoke Remove Entries from Format 1 Patch Map with the patch map table and patch uri as inputs. Otherwise if it is a format 2 patch map then, invoke Remove Entries from Format 2 Patch Map with the patch map table and patch uri as inputs. Copy the modified patch map table into extended font subset.
-
For each table in base font subset which has a tag that was not found in any of the entries processed in step 4 or 5, add a copy of that table to extended font subset.
-
If the contents of any font table were modified during the previous steps then, for each modified table: update the checksums in the font’s table directory to match the table’s new contents.
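Recomputing a table checksum follows the standard OpenType rule: the table data is zero-padded to a four byte boundary and summed as big endian uint32 values modulo 2^32 (the 'head' table's checkSumAdjustment field has additional rules not covered here). A non-normative Python sketch:

import struct

def opentype_table_checksum(table_data: bytes) -> int:
    # Pad to a 4-byte boundary with zeros, then sum 32-bit big-endian words mod 2^32.
    padded = table_data + b"\x00" * (-len(table_data) % 4)
    checksum = 0
    for (word,) in struct.iter_unpack(">I", padded):
        checksum = (checksum + word) & 0xFFFFFFFF
    return checksum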
7. Encoding
An encoder is a tool which produces an incremental font and set of associated patches. "Encoding" refers to the process of using an encoder, including whatever parameters an encoder requires or allows to influence the result in a particular case. The incremental font and associated patches produced by a compliant encoder:
-
Must meet all of the requirements in § 5 Extensions to the Font Format and § 6 Font Patch Formats.
-
Must be consistent, that is: for any possible font subset definition the result of invoking Extend an Incremental Font Subset with that subset definition and the incremental font must always be the same regardless of the particular order of patch selection chosen in step 8 of Extend an Incremental Font Subset.
-
Must respect patch invalidation criteria. Any patch which is part of an IFT encoding when applied to a compatible font subset must only make changes to the patch map compatibility IDs which are compliant with the § 4.1 Patch Invalidations criteria for the invalidation mode declared by the associated patch map entries.
-
When an encoder is used to transform an existing font into an incremental font the associated fully expanded font should be equivalent to the existing font. An equivalent fully expanded font should have all of the same tables as the existing font (excluding the incremental IFT/IFTX tables) and each of those tables should be functionally equivalent to the corresponding table in the existing font. Note: the fully expanded font may not always be an exact binary match with the existing font.
-
Should preserve the functionality of the fully expanded font throughout the augmentation process, that is: given the fully expanded font derived from the incremental font and any content, then the font subset produced by invoking Extend an Incremental Font Subset with the incremental font and the minimal subset definition covering that content should render identically to the fully expanded font for that content.
When an encoder is used to transform an existing font file into an incremental font and a client is implemented according to the other sections of this document, the intent of the IFT specification is that the appearance and behavior of the font in the client will be the same as if the entire file were transferred to the client. A primary goal of the IFT specification is that the IFT format and protocol can serve as a neutral medium for font transfer, comparable to WOFF2. If an encoder produces an encoding from a source font which meets all of the above requirements (1. through 5.), then the encoding will preserve all of the functionality of the original font. Requirement 4 above ensures that all of the functionality in the original font can be reached. This works in conjunction with requirement 5, which requires that partial versions of an IFT font have functionality equivalent to the full version (the original font here) for content which is a subset of the subset definition used to derive the partial font.
This may be important for cases where a foundry or other rights-owner of a font wants to be confident that the encoding and transfer of that font using IFT will not change its behavior and therefore the intent of the font’s creators. Licenses or contracts might then include requirements about IFT conformance, and situations in which re-encoding a font in WOFF2 format is de facto permissible due to its content-neutrality might also permit IFT encoding of that font.
However, nothing about these requirements on encoding conformance is meant to rule out or deprecate the possibility and practical use of encodings that do not preserve all of the functionality of a source font. Any encoding meeting the minimum requirements (1., 2. and 3. above) is valid and may have an appropriate use. Under some circumstances it might be desirable for an encoded font to omit support for some functionality/data from all of its patch files even if those were included in the original font file. In other cases a font might be directly encoded into the IFT format from font authoring source files. In cases where an encoder chooses not to meet requirement 4 above it is still strongly encouraged to meet 5, which ensures consistent behavior of the font throughout the augmentation process.
7.1. Encoding Considerations
This section is not normative.
The details of the encoding process may differ by encoder and are beyond the scope of this document. However, this section provides guidance that encoder implementations may want to consider, and that can be important to reproducing the appearance and behavior of an existing font file when producing an incremental version of that font. The guidance provided in this section is based on the experience of building an encoder implementation during development of this specification. It represents the best understanding (at the time of writing) of how to generate a high performance encoding which meets requirements 1 through 4 of § 7 Encoding and thus preserves all functionality/behavior of the original font being encoded.
About § 6.2 Table Keyed patches
A § 6.2 Table Keyed patch can change the contents of some font tables and not others. Each table patch typically needs to be applied against specific contents of that table, but places no constraints on the contents of other tables. Therefore as long as a § 6.2 Table Keyed patch does not alter the tables containing glyph data it can be compatible with § 6.3 Glyph Keyed patches and therefore be only Partially Invalidating (in that it will invalidate other § 6.2 Table Keyed patches but not § 6.3 Glyph Keyed patches). Additionally two sets of § 6.2 Table Keyed patches can be independent of each other if they do not modify any of the same tables. For example, one could use § 6.2 Table Keyed patches for all content other than the glyph tables but then use another set of § 6.2 Table Keyed patches for those tables rather than § 6.3 Glyph Keyed patches. Each of these sets could in theory be Partially Invalidating, with patches in each set invalidating one another but not patches in the other set.
An application of a § 6.2 Table Keyed patch will typically alter the IFT or IFTX table it was listed in to add a new set of patches to further extend the font. This means that the total set of § 6.2 Table Keyed patches forms a graph, in which each font subset in the segmentation is a node and each patch is an edge. This also means that patches of this type are typically downloaded and applied in series, which has implications for the performance of this patch type relative to latency.
About § 6.3 Glyph Keyed patches
§ 6.3 Glyph Keyed patches are quite distinct from the other patch types. First, § 6.3 Glyph Keyed patches can only modify tables containing glyph outline data, and therefore an incremental font that only uses § 6.3 Glyph Keyed patches must include all other font table data in the initial font file. Second, § 6.3 Glyph Keyed patches are not Invalidating, and can therefore be downloaded and applied independently. This independence means multiple patches can be downloaded in parallel, which can significantly reduce the number of round trips needed relative to the invalidating patch types.
Choosing patch formats for an encoding
All encodings must choose one or more patch types to use. § 6.2 Table Keyed patches allow all types of data in the font to be patched, but because this type is at least Partially Invalidating, the total number of patches needed increases exponentially with the number of segments rather than linearly. § 6.3 Glyph Keyed patches are limited to updating outline and variation delta data but the number needed scales linearly with the number of segments.
In addition to the number of patches, the encoder should also consider the number of network round trips that will be needed to get patches for typical content. For invalidating patch types it is necessary to make patch requests in series. This means that if some content requires multiple segments then, multiple network round trips may be needed. Glyph keyed patches on the other hand are not invalidating and the patches can be fetched in parallel, needing only a single round trip.
At the extremes of the two types, § 6.2 Table Keyed patches are most appropriate for fonts with sizable non-outline data that only require a small number of patches. § 6.3 Glyph Keyed patches are most appropriate for fonts where the vast majority of data consists of glyph outlines, which is true of many existing CJK fonts.
For fonts that are in-between, or in cases where fine-grained segmentation of glyph data is desired but segmentation of data in other tables is still needed, it can be desirable to mix the § 6.2 Table Keyed and § 6.3 Glyph Keyed patch types in this way:
-
Keep all table keyed patch entries in one mapping table and all glyph keyed entries in the other mapping table.
-
Use table keyed patches to update all tables except for the tables touched by the glyph keyed patches (outline, variation deltas, and the glyph keyed patch mapping table). These patches should use a small number of large segments to keep the patch count reasonable.
-
Because glyph keyed patches reference the specific glyph IDs that are updated, the table keyed patches must not change the glyph to glyph ID assignments used in the original font; otherwise, the glyph IDs listed in the glyph keyed patches may become incorrect. In font subsetters this is often available as an option called "retain glyph IDs".
-
Lastly, use glyph keyed patches to update the remaining tables; here much smaller fine-grained segments can be utilized without requiring too many patches.
The mixed patch type encoding is expected to be a commonly used approach since many fonts will fall somewhere in between the two extremes.
Reducing round trips with invalidating patches
One way to reduce the number of round trips needed with a segmentation that uses one of the invalidating patch types is to provide patches that add multiple segments at once (in addition to patches for single segments). For example consider a font which has 4 segments: A, B, C, and D. The patch table could list patches that add: A, B, C, D, A + B, A + C, A + D, B + C, B + D, and C + D. This would allow any two segments to be added in a single round trip. The downside to this approach is that it further increases the number of unique patches needed.
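As a non-normative sketch of this idea (names are assumptions), an encoder might enumerate the single-segment entries plus all multi-segment combinations up to a chosen group size; with the four segments above and a group size of 2 this yields exactly the ten patches listed.

from itertools import combinations

def invalidating_patch_entries(segments, max_group_size=2):
    """Enumerate segment groups to expose as invalidating patch entries.

    Returns every single segment plus every combination of up to
    max_group_size segments, trading patch storage for fewer round trips.
    """
    entries = []
    for size in range(1, max_group_size + 1):
        entries.extend(combinations(segments, size))
    return entries

# ['A', 'B', 'C', 'D'] -> A, B, C, D, A+B, A+C, A+D, B+C, B+D, C+D (10 entries)
print(invalidating_patch_entries(["A", "B", "C", "D"]))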
Entry order in encoded patch mappings
In § 4.4 Selecting Invalidating Patches the client uses the order of an entry in the patch map to break ties when selecting which patches to load and apply. Clients are aiming to reduce total transfer sizes, so when multiple entries are tied in intersection size it’s generally in the client’s interest to choose the patch which has the smallest total transfer size. As a result encoders should place entries in the patch map ordered from smallest to largest size in bytes. This ensures that when clients follow § 4.4 Selecting Invalidating Patches they are biased towards smaller patches, maximizing performance.
Managing the number of patches
Using § 6.2 Table Keyed patches alongside a large number of segments can result in a very large number of patches being needed, which can have two negative effects. First, the storage space needed for all of the pre-generated patches could be undesirably large. Second, more patches will generally mean lower CDN cache performance, because a higher number of patches represents a higher number of possible paths from one subset to another, with different paths being taken by different users depending on the content they access. There are some techniques which can be used to reduce the total number of pre-generated patches:
-
Use a maximum depth for the patch graph: once this limit is reached the font is patched to add all remaining parts of the full original font. This will cause the whole remaining font to be loaded after a certain number of segments have been added. Limiting the depth of the graph will reduce the combinatorial explosion of patches at the lower depths.
-
Alternatively, at lower depths the encoder could begin combining multiple segments into single patches to reduce the fan out at each level.
Choosing segmentations
One of the most important and complex decisions an encoder needs to make is how to segment the data in the encoded font. The discussion above focused on the number of segments, but the performance of an incremental font depends much more on the grouping of data within segments. To maximize efficiency an encoder needs to group data (eg. code points) that are commonly used together into the same segments. This will reduce the amount of unneeded data loaded by clients when extending the font. The encoder must also decide the size of segments. Smaller segments will produce more patches, and thus incur more overhead by requiring more network requests, but will typically result in less unneeded data in each segment compared to larger segments. When segmenting code points, data on code point usage frequency can be helpful to guide segmentation.
Some code points may have self-evident segmentations, or at least there may be little question as to what code points to group together. For example, upper and lowercase letters of the Latin alphabet form a natural group. Other cases may be more complicated. For example, Chinese, Japanese, and Korean share some code points, but a code point that is high-frequency in Japanese may be lower-frequency in Chinese. In some cases one might choose to optimize an encoding for a single language. Another approach is to produce a compromise encoding. For example, when segmenting an encoder could put code points that are frequent in Japanese, Chinese, and Korean into one segment, and then those that are frequent only in Japanese and Chinese into another segment, and so on. Then the code points that are frequent in only one language can be handled in the usual way. This will result in less even segment sizes, but means that loading high-frequency patches for any one of the languages will not pull in lower-frequency glyphs.
Include default layout features
Appendix A: Default Feature Tags collects a list of layout features which are commonly used by default. Since the features in this list will typically always be used by shapers, for best performance encoders should typically not make any of these features optional in the encoding of a font.
Maintaining Functional Equivalence
As discussed in § 7 Encoding an encoder should preserve the functionality of the original font. Fonts are complex and often contain interactions between code points, so maintaining functional equivalence with a partial copy of the font can be tricky. The next two subsections discuss maintaining functional equivalence using the different patch types.
Table keyed patches
When preparing § 6.2 Table Keyed patches, one means of achieving functional equivalence is to leverage an existing font subsetter implementation to produce font subsets that retain the functionality of the original font. The IFT patches can then be derived from these subsets.
A font subsetter produces a font subset from an input font based on a desired font subset definition. The practice of reliably subsetting a font is well understood and has multiple open-source implementations (a full formal description is beyond the scope of this document). It typically involves a reachability analysis, where the data in tables is examined relative to the font subset definition to see which portions can be reached by any possible content covered by the subset definition. Any reachable data is retained in the generated font subset, while any unreachable data may be removed.
In the following example pseudo code a font subsetter is used to generate an IFT encoded font that utilizes only table keyed patches:
# Encode a font (full_font) into an incremental font that starts at base_subset_def
# and can incrementally add any of subset_definitions. Returns the IFT encoded font
# and set of associated patches.
encode_as_ift(full_font, base_subset_def, subset_definitions):
  base_font = subset(full_font, base_subset_def)
  base_font, patches = encode_node(full_font, base_font, base_subset_def, subset_definitions)
  return base_font, patches

# Update base_font to add all of the ift patch mappings to reach any of
# subset_definitions and produces the associated patches.
encode_node(full_font, base_font, cur_def, subset_definitions):
  patches = []
  next_fonts = []
  for each subset_def in subset_definitions not fully covered by cur_def:
    next_def = subset_def union cur_def
    next_font = subset(full_font, next_def)
    let patch_url be a new unique url
    add a mapping from (subset_def - cur_def) to patch_url into base_font
    next_font, patches += encode_node(full_font, next_font, next_def, subset_definitions)
    next_fonts += (next_font, next_def, patch_url)

  for each (next_font, next_def, patch_url) in next_fonts:
    patch = table_keyed_patch_diff(base_font, next_font)
    patches += (patch, patch_url)

  return base_font, patches
In this example implementation, if the union of the input base subset definition and the list of subset definitions fully covers the input full font, and the subsetter implementation used correctly retains all functionality, then the above implementation should meet the requirements in § 7 Encoding to be a neutral encoding. This basic encoder implementation is for demonstration purposes and not meant to be representative of all possible encoder implementations. Notably it does not make use of nor demonstrate utilizing glyph keyed patches. Most encoders will likely be more complex and need to consider additional factors, some of which are discussed in the remaining sections.
§ 6.3 Glyph Keyed patches
Specifically because they are parameterized by code points and feature tags but can be applied independently of one another, § 6.3 Glyph Keyed patches have additional requirements and cannot be directly derived by using a subsetter implementation. However, such an implementation can help clarify what an encoder needs to do to maintain functional equivalence when producing this type of patch. Consider the result of producing the subset of a font relative to a given font subset definition. We can define the glyph closure of that font subset definition as the full set of glyphs included in the subset, which the subsetter has determined is needed to render any combination of the described code points and layout features.
Using that definition, the glyph closure requirement on the full set § 6.3 Glyph Keyed patches is:
-
The set of glyphs contained in the patches loaded for a font subset definition through the patch map tables must be a superset of those in the glyph closure of the font subset definition.
Assuming the subsetter does its job accurately, the glyph closure requirement is a consequence of the requirement for equivalent behavior: Suppose there is a font subset definition such that the subsetter includes glyph i in its subset, but an encoder that produces § 6.3 Glyph Keyed patches omits glyph i from the set of patches that correspond to that definition. If the subsetter is right, that glyph must be present to retain equivalent behavior when rendering some combination of the code points and features in the definition, which means that the incremental font will not behave equivalently when rendering that combination.
Therefore, when generating an encoding utilizing glyph-keyed patches the encoder must determine how to distribute glyphs between all of the patches in a way that meets the glyph closure requirement. This is primarily a matter of looking at the code points assigned to a segment and determining what other glyphs must be included in the patch that corresponds to it, as when a glyph variant can be substituted for a glyph included in the segment by code point. In some cases a glyph might only be needed when multiple segments have been loaded, in which case that glyph can be added to the patch corresponding to any of those segments. (This can be true of a ligature or a pre-composed accented character.) Finally, after the initial analysis of segments the same glyph might be needed when loading the patches for two or more segments. There are five main strategies for dealing with that situation:
-
Two or more segments can be combined to be contained within a single patch. This avoids duplicating the common glyphs, but increases the segment’s size.
-
The common glyphs can be placed in their own patch and then mapping entries set up to trigger the load of that common patch alongside any of the segments that will need it. For example if 'c' is a common segment needed by segments 'a' and 'b' then, you could have the following mapping entries (via a format 2 mapping table):
-
subset definition a → segment a
-
subset definition b → segment b
-
subset definition a union subset definition b → segment c
-
-
In some cases, such as with Unicode variation selectors, there will be a modifier code point which triggers a glyph substitution when paired with many other code points. Given the large number of alternate glyphs it’s desirable to keep them in their own patches which are only loaded when both the modifier code point and appropriate base code point(s) are present. This can be achieved by using a Format 2 Patch Map and multiple subset definitions per entry via copyModeAndCount. For the entry one subset definition should contain the modifier code point and a second one has the base code point(s).
-
Alternatively, the glyph can be included in more than one of the patches that correspond to those segments at the cost of duplicating the glyph’s data into multiple patches.
-
Lastly, the common glyph can be moved into the initial font. This avoids increasing segment sizes and duplicating the glyph data, but will increase the size of the initial font. It also means that the glyph’s data is always loaded, even when not needed. This can be useful for glyphs that are required in many segments or are otherwise complicating the segmentation.
Pre-loading data in the Initial Font
In some cases it might be desirable to avoid the overhead of applying patches to an initial file. For example, it could be desirable that the font loaded on a company home page already be able to render the content on that page. The main benefit of such a file is reduced rendering latency: the content can be rendered after a single round-trip.
There are two approaches to including data in the downloaded font file. One is to simply encode the incremental font as a whole so that the data is in the initial file. Any such data will always be available in any patched version of the font. This can be helpful in cases when the same data would otherwise be needed in many different segments.
The other approach is to download an already-patched font. That is, one can encode a font with little or no data in the "base" file but then apply patches to that file on the server side, so that the file downloaded already includes the data contained in those patches.
When only one pre-loaded version of a font is needed these strategies will have roughly equivalent results, but the first is both simpler and in some cases more specific. However, when more than one pre-loaded font is needed the pre-patching approach will often be better. When using the first approach, one would need to produce multiple encodings, one for each preloaded file. When using the second approach, all of the preloaded files will still share the same overall patch graph, which both reduces the total space needed to store the patches and improves CDN cache efficiency, as all the pre-loaded files will choose subsequent patches from the same set.
Table ordering
In the initial font file (whether encoded as woff2 or not) it is possible to customize the order of the actual table bytes within the file. Encoders should consider placing the mapping tables (IFT and IFTX) plus any additional tables needed to decode the mapping tables (cmap) as early as possible in the file. This will allow optimized client implementations to access the patch mapping prior to receiving all of the font data and potentially initiate requests for any required patches earlier.
Likewise table keyed patches have a separate brotli stream for each patched table and the format allows these streams to be placed in any order in the patch file. So for the same reasons encoders should consider placing updates to the mapping tables plus any additional tables needed to decode the mapping tables as early as possible in the patch file.
Choosing the input ID encoding
The specification supports two encodings for embedding patch IDs in the URL template. The first is base32hex, which is a good match for pre-generated patches that will typically be stored in a filesystem. Base32hex encoding only uses the letters 0-9 and A-V, which are safe letters for a file name in every commonly used filesystem, with no risk of collisions due to case-insensitivity. Because the string is embedded without padding this format cannot be reliably decoded, so it may be a poor choice for dynamically generated patches. The other encoding is base64url, a variant of base64 encoding appropriate for embedding in a URL or case-sensitive filesystem. When using this encoding the id is embedded with padding so that the value can be reliably decoded.
The individual character selectors d1 through d4 are relative to the base32hex encoded id only. These are typically used to reduce the number of files stored in a single filesystem directory by spreading related files out into one or more levels of subdirectory according to the trailing letters of the id encoding. These will tend to be evenly distributed among the digits when using integer ids, but may be unevenly distributed or even constant for string ids. Encoders that wish to use string ids with d1 through d4 should take care to make the ends of the id strings vary. It is valid to mix d1 through d4 with a base64url-encoded id.
8. Privacy Considerations
8.1. Content inference from character set
IFT exposes, to the server hosting a Web font, information on the set of characters that the browser wants to render with the font (for details, see § 4 Extending a Font Subset).
For some languages, which use a very large character set (Chinese and Japanese are examples) the vast reduction in total bytes transferred means that Web fonts become usable, including on mobile networks, for the first time. However, for those languages, it is possible that individual requests might be analyzed by a rogue font server to obtain intelligence about the type of content which is being read. It is unclear how feasible this attack is, or the computational complexity required to exploit it, unless the characters being requested are very unusual.
More specifically, an IFT font includes a collection of Unicode code point groups and requests will be made for groups that intersect content being rendered. This provides information to the hosting server that at least one code point in a group was needed, but does not contain information on which specific code points within a group were needed. This is functionally quite similar to the existing CSS Fonts 4 § 4.5 Character range: the unicode-range descriptor and has the same privacy implications. Discussion of the privacy implications of unicode-range can be found in the CSS Fonts 4 specification:
-
CSS Fonts 4 § 15.1 What information might this feature expose to Web sites or other parties, and for what purposes is that exposure necessary?. For especially privacy-sensitive contexts this recommends the user agent download all web fonts in a document. For IFT fonts, utilizing § 4.7 Fully Expanding a Font will fetch the entire available IFT font without providing any information about the specific content present. Alternatively, in privacy-sensitive contexts a user agent could randomly select additional patches that are not required by the current content to provide obfuscation of what patches are actually needed.
8.2. Per-origin restriction avoids fingerprinting
As required by [css-fonts-4]:
"A Web Font must not be accessible in any other Document from the one which either is associated with the @font-face rule or owns the FontFaceSet. Other applications on the device must not be able to access Web Fonts." - CSS Fonts 4 § 10.2 Web Fonts
Since IFT fonts are treated the same as regular fonts in CSS (§ 2 Opt-In Mechanism) these requirements apply and avoid information leaking across origins.
Similar requirements apply to font palette values:
"An author-defined font color palette must only be available to the documents that reference it. Using an author-defined color palette outside of the documents that reference it would constitute a security leak since the contents of one page would be able to affect other pages, something an attacker could use as an attack vector." - CSS Fonts 4 § 9.2 User-defined font color palettes: The @font-palette-values rule
9. Security Considerations
One security concern is that IFT fonts could potentially generate a large number of network requests for patches. This could cause problems on the client or the service hosting the patches. The IFT specification contains a couple of mitigations to limit an excessive number of requests:
-
§ 4 Extending a Font Subset: disallows re-requesting the same URI multiple times and has limits on the total number of requests that can be issued during the extension process.
-
Load patch file: specifies the use of [fetch] in implementing web browsers and matches the CORS settings for the initial font load. As a result cross-origin requests for patch files are disallowed unless the hosting service opts in via the appropriate access control headers.
10. Changes
Since the Working Draft of 30 May 2023 (see commit history):
- Complete rewrite of the specification. Separate 'Patch Subset' and 'Range Request' methods have been removed in favour of a single unified approach.
Appendix A: Default Feature Tags
This appendix is not normative. It provides a list of layout features which are considered to be used by default in most shaper implementations. This list was assembled from:
-
Features which are listed as "default on" in [enabling-typography]
-
Features which are in the default set in the harfbuzz subsetter.
Layout Features used by Default in Shapers
Tag | Encoded As |
---|---|
aalt | 1 |
abvf | default |
abvm | default |
abvs | default |
afrc | 2 |
akhn | default |
blwf | default |
blwm | default |
blws | default |
calt | default |
case | 3 |
ccmp | default |
cfar | default |
chws | default |
cjct | default |
clig | default |
cpct | 4 |
cpsp | 5 |
cswh | default |
curs | default |
cv01-cv99 | 6-104 |
c2pc | 105 |
c2sc | 106 |
dist | default |
dlig | 107 |
dnom | default |
dtls | default |
expt | 108 |
falt | 109 |
fin2 | default |
fin3 | default |
fina | default |
flac | default |
frac | default |
fwid | 110 |
half | default |
haln | default |
halt | 111 |
hist | 112 |
hkna | 113 |
hlig | 114 |
hngl | 115 |
hojo | 116 |
hwid | 117 |
init | default |
isol | default |
ital | 118 |
jalt | default |
jp78 | 119 |
jp83 | 120 |
jp90 | 121 |
jp04 | 122 |
kern | default |
lfbd | 123 |
liga | default |
ljmo | default |
lnum | 124 |
locl | default |
ltra | default |
ltrm | default |
mark | default |
med2 | default |
medi | default |
mgrk | 125 |
mkmk | default |
mset | default |
nalt | 126 |
nlck | 127 |
nukt | default |
numr | default |
onum | 128 |
opbd | 129 |
ordn | 130 |
ornm | 131 |
palt | 132 |
pcap | 133 |
pkna | 134 |
pnum | 135 |
pref | default |
pres | default |
pstf | default |
psts | default |
pwid | 136 |
qwid | 137 |
rand | default |
rclt | default |
rkrf | default |
rlig | default |
rphf | default |
rtbd | 138 |
rtla | default |
rtlm | default |
ruby | 139 |
rvrn | default |
salt | 140 |
sinf | 141 |
size | 142 |
smcp | 143 |
smpl | 144 |
ss01-ss20 | 145-164 |
ssty | default |
stch | default |
subs | 165 |
sups | 166 |
swsh | 167 |
titl | 168 |
tjmo | default |
tnam | 169 |
tnum | 170 |
trad | 171 |
twid | 172 |
unic | 173 |
valt | default |
vatu | default |
vchw | default |
vert | default |
vhal | 174 |
vjmo | default |
vkna | 175 |
vkrn | default |
vpal | default |
vrt2 | default |
vrtr | default |
zero | 176 |
Appendix B: Extension Algorithm Example Execution
This appendix provides an example of how a typical IFT font would be processed by the § 4.3 Incremental Font Extension Algorithm. In this example the IFT font contains a mix of both § 6.2 Table Keyed and § 6.3 Glyph Keyed patches.
Initial font: contains the following IFT and IFTX patch mappings. Note: when features and design space sets are unspecified in a subset definition they are defaulted to an empty set.
Table = "IFT " | Compatibility ID = 0x0000_0000_0000_0001 | |
---|---|---|
Subset Definition(s) | URI | Format Number |
code points: { 'a', 'b', ..., 'z' } | //foo.bar/01.tk | 2, Table Keyed - Partial Invalidation |
code points: { 'A', 'B', ..., 'Z' } | //foo.bar/02.tk | 2, Table Keyed - Partial Invalidation |
code points: { '0', '1', ..., '9' } | //foo.bar/03.tk | 2, Table Keyed - Partial Invalidation |
code points: { 'a', 'b', ..., 'z', 'A', 'B', ..., 'Z' } | //foo.bar/04.tk | 2, Table Keyed - Partial Invalidation |
code points: { 'a', 'b', ..., 'z', 'A', 'B', ..., 'Z', '0', '1', ..., '9' } | //foo.bar/05.tk | 2, Table Keyed - Partial Invalidation |
Table = "IFTX" | Compatibility ID = 0x0000_0000_0000_0002 | |
---|---|---|
Subset Definition(s) | URI | Format Number |
code points: { 'a', 'b', ..., 'm' } | //foo.bar/01.gk | 3, Glyph Keyed |
code points: { 'n', 'o', ..., 'z' } | //foo.bar/02.gk | 3, Glyph Keyed |
code points: { 'A', 'B', ..., 'M' } | //foo.bar/03.gk | 3, Glyph Keyed |
code points: { 'N', 'O', ..., 'Z' } | //foo.bar/04.gk | 3, Glyph Keyed |
code points: { '0', '1', ..., '9' } | //foo.bar/05.gk | 3, Glyph Keyed |
Optimized Extension Example
Note: this example execution is described as it would be implemented in an optimized client which aims to reduce the number of round trips needing to be made to extend the font for a target subset definition. This optimized execution is fully compliant with the extension algorithm but loads some URIs (the glyph keyed URIs) earlier than specified in the extension algorithm. This is allowed because these patches will not be invalidated by the preceding table keyed patch.
Inputs:
-
Initial font with the IFT and IFTX tables as defined above.
-
Target subset definition: code points = {'f', 'P'}
Iteration 1:
-
Steps 1 through 6 - applying the Check entry intersection check to all entries in the initial font, the following patch entries intersect:
-
//foo.bar/01.tk
-
//foo.bar/02.tk
-
//foo.bar/04.tk
-
//foo.bar/05.tk
-
//foo.bar/01.gk
-
//foo.bar/04.gk
-
-
Step 8 - one entry from that set must be picked. There are no full invalidation entries, so one partial invalidation entry (//foo.bar/*.tk) must be selected. Following § 4.4 Selecting Invalidating Patches the candidate entries have the following intersections with the target subset definition:
-
//foo.bar/01.tk - {'f'}
-
//foo.bar/02.tk - {'P'}
-
//foo.bar/04.tk - {'f', 'P'}
-
//foo.bar/05.tk - {'f', 'P'}
The intersections for //foo.bar/01.tk and //foo.bar/02.tk are both strict subsets of //foo.bar/04.tk and //foo.bar/05.tk, so they don’t meet selection criteria. That leaves either //foo.bar/04.tk or //foo.bar/05.tk. Since //foo.bar/04.tk is listed first in the patch map it is selected.
-
-
Step 9 - the selected patch, //foo.bar/04.tk, is fetched. Additionally as an optimization fetches are simultaneously started for the two glyph keyed patches //foo.bar/01.gk and //foo.bar/04.gk. The table keyed patch //foo.bar/04.tk is partially invalidating, which means the two glyph keyed patches will remain valid after the application of //foo.bar/04.tk. Thus the client can expect that the two glyph keyed patches will be required in future iterations and begin the loads for those two patches at this time.
-
Step 10 - The fetched patch //foo.bar/04.tk is applied to the initial font. This patch updates all tables except for glyf and loca to add support for code points 'a' through 'Z'. Additionally the mapping in the "IFT " table is updated to the following:
Table = "IFT " Compatibility ID = 0x0000_0000_0000_0006 Subset Definition(s) URI Format Number code points: { '0', '1', ..., '9' }
//foo.bar/08.tk 2, Table Keyed - Partial Invalidation Note the following changes:
-
All entries that have data for 'a', ..., 'z' and 'A', ..., 'Z' have been removed, leaving just one entry for the remaining '0', ..., '9' code points.
-
The font binary has changed and thus the previously listed table keyed patches are no longer valid. As a result the compatibility ID has changed and the URI for the remaining '0', ..., '9' entry has changed. The patch at the new URI, //foo.bar/08.tk, is compatible with the updated font.
-
Iteration 2:
-
Steps 2 through 6 - applying the Check entry intersection check to all entries in the updated font from the previous iteration, the following patch entries intersect:
-
//foo.bar/01.gk
-
//foo.bar/04.gk
-
-
Step 8 - one entry from that set must be picked. Only no invalidation patches remain, so the client is free to pick either one. In this case it selects the first one, //foo.bar/01.gk.
-
Step 9 - the fetch for //foo.bar/01.gk was previously started during iteration 1.
-
Step 10 - The fetched patch //foo.bar/01.gk is applied to the current font subset. This patch adds glyph data for code points 'a' through 'm' to the glyf and loca tables. Additionally the mapping in the "IFTX" table is updated to remove the entry for //foo.bar/01.gk. The compatibility ID and URIs for other entries are unchanged.
Iteration 3:
-
Steps 2 through 6 - applying the Check entry intersection check to all entries in the updated font from the previous iteration, the following patch entries intersect:
-
//foo.bar/04.gk
-
-
Step 8 - one entry remains, it is selected.
-
Step 9 - the fetch for //foo.bar/04.gk was previously started during iteration 1.
-
Step 10 - The fetched patch //foo.bar/04.gk is applied to the current font subset. This patch adds glyph data for code points 'N' through 'Z' to the glyf and loca tables. Additionally the mapping in the "IFTX" table is updated to remove the entry for //foo.bar/04.gk. The compatibility ID and URIs for other entries are unchanged.
Iteration 4:
-
Steps 2 through 6 - there are no remaining intersecting entries so the algorithm terminates. The font subset is returned and is now ready to render any content covered by the target subset definition.