W3C

– DRAFT –
Publishing community group plenary - parallel content in EPUB

26 February 2026

Attendees

Present
DaleRogers, GeorgeK, gregoriopellegrino, hadrien, jonathan, LaurentLM, manfred, rdeltour, Simon_M, sueneu, wolfgang
Regrets
-
Chair
wolfgang
Scribe
gautierchomel_, Gautier

Meeting minutes

wolfgang: parallel reading happens in many books.

wolfgang: let's think about Ancient Greek and German on two facing pages.

DaleRogers: I authored poetry comic ebooks; the first pages are a poem, with its comic interpretation facing it. Those are different media, and I don't know how to compare them. I am not sure they can be parallelised.

GeorgeK: in early days of epub we had multiple rendition, it was not implemented, too complex. It provided the mecanisme let say to ahe a fixed layout and a braille version. Having a parallel reflowable to a fixed layout, that would help for accessibility of children books. There is desire to have different experiences of a same content.

Hadrien: I don't like the word parallel; it gets confused with media overlays. In the multiple renditions work there was a concept of mapping, and I think that is what we are talking about here. The example we can test is newspapers and magazines: several newspaper apps open on a fixed-layout view, and by entering an article we get it reflowable. That's a mapping in my opinion. They are complementary but different.

wolfgang: two perspectives on the same content, Latin and English: you need to see both together. It's not an alternative; it's meant to be equivalent, as far as translation can go.

Hadrien: I think it is the same problem, for translations or formats: it's a mapping, sentence to sentence. The UX and the option to display them in parallel or switch between them are made possible by mapping.

wolfgang: I think we need a fine granularity, not only at article level.

Hadrien: agree, the article is only an example; even in this case you can have different granularities.

manfred: I am from SBS, the Swiss library for the blind. We have two use cases: we produce books with media overlays, heavily used by dyslexic people. The other use case is that we use the new eBraille standard and want to combine it with multiple renditions, so one can switch from dynamic braille to TTS, all in one EPUB file. The DAISY Pipeline allows us to produce those files; they are not in production yet, but soon.

DaleRogers: different people have different needs, and it is great to be able to address those different needs. But a comic has several layers of meaning; getting a computer to parse that is overwhelming. I can, however, author that story in a different format, as an adaptation.

GeorgeK: Bookshare has search per format (EPUB, audio, DAISY), converted on the fly for the user's needs. Computers are already doing this. Back to eBraille: it's difficult to convert text to braille, and we get better quality with pre-converted braille, where you can spot errors at the production step. That's why we want to be able to pack it with text and other formats.

Hadrien: the expectation with media overlays is that they are consumed together, but eBraille and text is different: it's a switch, not combined consumption.

manfred: even in braille you could read with your finger and ears at the same time. There are complexities with screen readers, but yes, the use case is both switching and consuming at the same time.

manfred: back-converting contracted braille in eBraille to text is also tricky. It is much better to keep both separate.

manfred: multiple renditions with rendition mapping will be more effective than SMIL and media overlays, but it lacks support; that's why we speak with reading system developers.

wolfgang: where would that mapping sit?

Hadrien: I was part of the multiple renditions group, so I can explain: it is mapped at the container.xml level. But this mapping could also be done at a lower level, to reduce the difficulty.

Hadrien: the mapping could be HTML, XML, or JSON; I don't see a technical difficulty.

Hadrien: an option is to build on multiple renditions; it exists, we only need implementations.
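[For context, a minimal sketch of what the existing EPUB Multiple-Rendition Publications approach looks like: the OCF container.xml lists one rootfile per rendition, with rendition selection attributes. File paths here are hypothetical.]

```xml
<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0"
           xmlns="urn:oasis:names:tc:opendocument:xmlns:container"
           xmlns:rendition="http://www.idpf.org/2013/rendition">
  <rootfiles>
    <!-- The first rootfile is the default rendition -->
    <rootfile full-path="reflow/package.opf"
              media-type="application/oebps-package+xml"
              rendition:layout="reflowable"
              rendition:accessMode="textual"/>
    <!-- A second, fixed-layout rendition of the same content -->
    <rootfile full-path="fixed/package.opf"
              media-type="application/oebps-package+xml"
              rendition:layout="pre-paginated"
              rendition:accessMode="visual"/>
  </rootfiles>
</container>
```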

GeorgeK: we lack the two implementations needed to make this a standard.

DaleRogers: embedding two versions in a single file could make a big file, if I have images and audio. Is that something we should consider?

Hadrien: I don't think size is an issue.

Hadrien: the problem with multiple renditions is that it forces authors to have multiple OPFs, like five EPUBs inside one EPUB. That could be simplified: it is not a different book, it is another manifestation of the same content.

Hadrien: textbooks are also a good example. Reading on a mobile device when the unit is a fixed spread is a problem.

GeorgeK: Canada has two official languages, and all over the world there is a need for ways to present two languages side by side. That's a valid commercial use case.

wolfgang: agree, and more specifically in education: the native language alongside so much content in English.

sueneu: I know AI has become better at translation; will reading systems be able to translate on the fly?

gautierchomel_: we see there are many use cases; people are already doing this in different industries, education and newspapers. So what's the way forward? In my view this is not about incubation, this is about interoperable implementation.

DaleRogers: I have questions about UX and how to author that.

Hadrien: that's a reading system burden. I see different affordances: at the opening of the book or at any time during reading, buttons, menus, and split views are all valid affordances. It's up to reading systems.

Hadrien: to make this work we need to agree on how to do it before implementation. Either we have something in the spec we can use, or what's in the spec is not enough.

wolfgang: we need colleagues to offer an example implementation to answer the question of whether the spec is enough.

Hadrien: multiple renditions has not made people happy. We may want something lighter. We need different spines, ways to categorise and identify them, and then we need mapping. This is possible with multiple renditions, but that spec allows many other things as well.

LaurentLM: multiple renditions has been tried and stayed a prototype. I would be interested to know why: what were the objections? With such a list we could remove every pain point and add what's missing, to end up with something workable. We have a problem with multiple metadata sets and ambiguities, and also a problem with mapping. For media overlays SMIL was chosen; it's XML, but we don't even use the full SMIL. Maybe we should discuss the language for mapping, maybe JSON instead of XML. But first we need to know why it was not implemented.
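[For context, a minimal sketch of the Rendition Mapping Document defined by EPUB Multiple-Rendition Publications: an XHTML nav with epub:type="resource-map", where each list item groups equivalent locations across renditions via EPUB CFI fragments. Paths and CFIs here are hypothetical.]

```xml
<nav xmlns="http://www.w3.org/1999/xhtml"
     xmlns:epub="http://www.idpf.org/2007/ops"
     epub:type="resource-map">
  <ul>
    <!-- One li per mapped location; each a element points into one rendition -->
    <li>
      <a href="../reflow/package.opf#epubcfi(/6/2!/4/2)"></a>
      <a href="../fixed/package.opf#epubcfi(/6/2!/4/6)"></a>
    </li>
  </ul>
</nav>
```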

Hadrien: I still think it is better to have different spines and only one OPF.

wolfgang: it would also be good to have all the use cases listed, with requirements, so we can move forward and have the work followed up.

Minutes manually created (not a transcript), formatted by scribe.perl version 248 (Mon Oct 27 20:04:16 2025 UTC).
