This document describes a set of use cases for the Music Notation Community Group. It intentionally refrains from talking about the substance of any specification and is focused purely on scenarios concerning users and content. In effect, these cases are a lens to help us focus on where existing specifications should change, and to identify ideas from other standards efforts that can be applied.
The inclusion of a use case here does not mean it is considered to be in scope.
1.1. Concordance with MNX Proposal
Where applicable, use cases are annotated with a Support: item that links back to demonstrable support in the MNX overview.
2. User Roles
These are musical roles assumed by someone using music notation.
The following roles are not clearly defined yet, but can help us begin collecting use cases based on an intuitive understanding of the terms. They are roles, not distinct people: a single person can occupy multiple roles in the same use case.
Instructor or Teacher (T)
Sound Engineer (SE)
In the use cases below, users playing roles are typically represented by short capitalized abbreviations as shown above, sometimes followed by a distinguishing number.
A lesson from other media types is that, as a medium moves from paper publishing to electronic publishing, people begin taking on roles they would previously have left to specialists, and publishing can be done on a smaller scale and with smaller, more frequent cycles than before. For example, in a printed magazine there would likely be a clear distinction between the roles of Writer (think composer), Editor, Engraver, and Publisher. The Writer is not concerned with appearance and font choice. The Editor and Engraver are specialists, and different people than the Writer. The Publisher has a large role, because the cost of publishing is high. In contrast, with a blog, the same person will choose the words (Writer), review the words (Editor), choose the appearance and fonts (Engraver), and decide to publish (a trivial vestige of the Publisher role). We should expect the same transformation of workflow, and merging of roles, in music notation as it becomes digital and online.
3. User Stories
These are narratives of musical activities performed by one or more parties assuming the User Roles described above. Each one is identified by a category prefix and a unique number.
Please add new stories with the next unused number in the category. Do not delete obsolete stories; instead mark them as such.
3.1. Music Creation
3.1.1. MC0: Composer wants to notate a composition and capture it as a digital encoding
C is composing a work and wishes to produce a digital document representing the contents of that work. The semantic content of the document is paramount as it represents C’s musical creation. Depending on C’s orientation, the nature of the work, the intentions regards publishing and performance, and the nature of the tool being used to do the encoding, visual and performance facets of the document may also be significant.
3.1.2. MC1: Composer wants to share work with a collaborator using another editing application
C1 is co-composing a work with C2, with the locus of editing switching back and forth between them. C1 and C2 are not using the same notation application. Preservation of semantic, visual and performance data are all important in this case, and the less faithful the transfer, the more work must be done by hand by each party to recover or adjust information that is lost or corrupted.
C1 is working on a compositional sketch that will subsequently be orchestrated by C2. C2 is not using the same notation application as C1, but C2 would like to begin C2’s work by modifying C1’s score as a starting point to ensure that there are no copying errors. In this case, the preservation of semantic and performance data is primary, while visual details may not matter as much.
3.1.3. MC2: Composer wants to migrate work to another editing application, or archive music as a protection against the future need to migrate.
C may be using a notation application that has become obsolete and wants to begin using a new application. Alternatively, C is concerned that C’s notation application may cease to be supported on modern computer hardware and operating systems, preventing C from revising C’s work. C would then either need to re-enter the music in another program, or keep an old hardware / software setup around to run the old program. C wants to archive C’s music in a format that maintains as much of the semantic, visual, and performance data that was captured in the original application as possible. Then C always has the option to transfer the music to a new program that can read the archival format. C wants to minimize the amount of rework needed, but an absolutely perfect transfer is not necessary. C doesn’t want to lose the visual information for the portions of the music that won’t change, and doesn’t want to lose the playback information that will help C when making revisions.
To be useful, the transfers will need a high level of accuracy and completeness. Furthermore, C needs assurance that the archival format has enough stability and longevity to be readable in the future.
3.1.4. MC3: Arranger/performer wants to convert existing printed sheet music into digital sheet music
PF has custody of a printed edition of some music, and no access to any digital encoding of the work. PF would like to work with the music in a notation application to adapt it to their purposes, perhaps transposing to a more playable key. OMR software is usually developed independently of notation software, so PF’s OMR tools produce an intermediate digital format that can be imported into multiple notation applications. This format should preserve as much as possible of what the OMR software could discover, including both semantic and visual data from the original document.
Note: Use-cases of the new "Music Transcription" category extend or replace this case.
3.1.5. MC4: Editor wants to compare two versions of digital sheet music to confirm they are the same, or see differences
ED receives two different digital sheet music documents which purport to be the same score, perhaps from two different proofreaders. ED runs comparison software on the two documents. Comparison can include or ignore differences in layout (focussing only on the unrolled note sequence), in key signature (focussing only on pitches no matter how notated), in per-staff and score-wide annotations, in performance annotations, etc. In any case, implementation details, such as differences in insignificant whitespace in the music notation, are ignored. The comparison software returns either a concise statement that the documents are identical, or a concise statement of the differences (e.g. showing the content of two measures added to all parts after a certain measure, or the contents of a staff present throughout the piece in one document but not the other).
Support: MNX, to the extent that non-semantic differences can be easily ignored, and semantic encoding is mostly forced to one canonical representation that is easier to compare than MusicXML. Some preprocessing of scores is likely needed to support a truly robust comparison application, though.
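A minimal sketch of how layout-insensitive comparison could work, assuming a hypothetical note model reduced to onset, sounding pitch, and duration. The `Note` type and the sample documents are illustrative assumptions; a real tool would parse MNX or MusicXML and compare many more attributes.

```python
# Sketch of layout-insensitive score comparison (MC4).
# The Note model is a hypothetical stand-in for a real semantic encoding.
from dataclasses import dataclass

@dataclass(frozen=True)
class Note:
    onset: float     # position in whole notes from the start of the part
    midi: int        # sounding pitch as a MIDI number (spelling ignored)
    duration: float  # duration in whole notes

def canonical(notes):
    """Reduce a part to a canonical, layout-free form for comparison."""
    return sorted(notes, key=lambda n: (n.onset, n.midi))

def diff_parts(a, b):
    """Return the notes present in one part but not the other."""
    sa, sb = set(canonical(a)), set(canonical(b))
    only_a = sorted(sa - sb, key=lambda n: n.onset)
    only_b = sorted(sb - sa, key=lambda n: n.onset)
    return only_a, only_b

doc1 = [Note(0.0, 60, 0.25), Note(0.25, 62, 0.25)]
doc2 = [Note(0.25, 62, 0.25), Note(0.0, 60, 0.25)]  # same music, different encoding order
only1, only2 = diff_parts(doc1, doc2)
print(only1, only2)  # both empty: the documents are musically identical
```

Because comparison runs on the canonical form, differences in element ordering, whitespace, or layout never reach the diff.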
3.1.6. MC5: Arranger/Orchestrator wants to annotate Composer’s score with comments and suggestions
C has created a composition and has engaged A to arrange and orchestrate this piece. C provides A with a digital encoding of the composition. A wishes to annotate C’s composition with comments and suggestions. Some of A’s suggestions include visual and audio media of performance techniques that may be appropriate for the music. Others include hyperlinks to relevant online resources. A returns these annotations to C.
In general A’s suggestions could incorporate any content or media or pointers that might be found in a web page, and should not be limited to plain text or even rich formatted text.
To avoid issues with converting the entire score back and forth, A wishes to separate these annotations from C’s original score, capturing them in a distinct document that refers to elements of C’s original score using pointers. Consider related use cases for Web Annotation.
3.1.7. MC6: Composer wants to specify positioning of notational elements for visual clarity
C has composed a piece and has taken pains to arrange notes and other elements for visual clarity by manually adjusting their positions. These layout decisions are desirable to reproduce in some contexts, such as printed output that replicates the page size and orientation in which C was originally working. In other contexts such as display on a mobile device, these decisions should be disregarded in favor of re-flowing and otherwise adjusting the layout to fit the context.
3.1.8. MC7: Composer wants to specify playback of notational elements for aural clarity
C has notated a piece in an application that allows non-notated performance details of the piece to be captured for more expressive and accurate playback, such as tempo variations and specific note durations. These performance details are desirable to reproduce in some contexts, for instance presentation to a performer learning the piece. They are irrelevant to others, such as printed output.
Revisions to the piece by another composer or editor C2 may result in these performance details becoming irrelevant or requiring deletion.
3.1.9. MC8: Composer wants to interleave textual and musical content.
C has composed a strophic song in which one verse of lyrics is fully notated within the music, and all the others are supplied in a sequence of pure-text verses. C wishes to create a digital encoding of this song that incorporates the other verses in a way that is aware of their status as numbered song verses, rather than arbitrary text.
3.1.10. MC9: Composer wants to distinguish text associated with a whole ensemble (e.g. tempo indications) from text associated with a single part (performance instructions for a single instrument).
C is writing a piece for large ensemble. Some textual directions are associated with the entire score and are intended to be reproduced in every part. Other directions are part-specific. Digital encodings may be used downstream by C’s publisher PU to create both individual parts and the full score, so this distinction is essential to show the correct text in the correct context.
Support: system notations
3.1.11. MC10: Composer wants to embed multiple temporal renderings of a new piece in the score.
C wants to reduce rehearsal time by demonstrating its performance practice to a performer PF.
Support: interpretation, with multiple stylesheets or media queries to distinguish the renderings
3.1.12. MC11: Sound Engineer wants to use an interactive score to access low level information in a DAW.
Clicking on a symbol opens a dialog giving the SE access to (and control over) much lower level information. The use of arbitrary symbols in the DAW’s GUI allows the D of the DAW to create more expressive/useful scenarios than can be achieved using the currently ubiquitous space-time icons.
3.1.13. MC12: Composer wants to use a custom symbol to describe some special event.
C wants the score to be as visually expressive as possible, and supplies a custom graphic object.
3.1.14. MC13: Composer of a multimedia work wants to write a score that synchronises sound and vision.
A digital encoding (see MC0) can refer to both audio and visual data. Note also that the C of the sounds need not be the same as the C of the lighting (see MC1).
3.1.15. MC14: Composer wants to use software that allows the use of both Common Western Music Notation and the Web MIDI API
CWMN uses a time model based on tempi and fractional durations. The Web MIDI API has no concept of tempo, and uses indivisible durations (milliseconds). The software handles the conversion from fractions to integers.
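The fraction-to-milliseconds conversion described above can be sketched as follows. The function name and the tempo model (a single quarter-note BPM value rather than a full tempo map) are illustrative assumptions:

```python
# Sketch of converting a CWMN fractional duration to a millisecond duration
# suitable for Web MIDI scheduling (MC14), under a fixed quarter-note tempo.
from fractions import Fraction

def whole_notes_to_ms(duration: Fraction, quarter_bpm: float) -> float:
    """One whole note contains 4 quarters; one quarter lasts 60000/bpm ms."""
    return float(duration) * 4 * 60000.0 / quarter_bpm

# A quarter note (1/4 of a whole note) at 120 BPM lasts 500 ms.
print(whole_notes_to_ms(Fraction(1, 4), 120))  # 500.0
```

Real applications would also accumulate rounding differences across a sequence of events rather than rounding each duration in isolation, so that the notated and performed totals stay aligned.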
3.2. Music Transcription
3.2.1. MT0: Editor wants to manually transcribe sheet music to digital encoding format
E has access to some sheet music, either a physical sheet of paper or a scanned image of it. The sheet can contain hand-written music or printed music. E reads the source music and manually enters a corresponding transcription. The digital output should of course preserve semantic data, and perhaps performance data as well. Visual data is not essential, especially if the source was hand-written music.
Note: in a variant of this use-case, multiple users playing an E role on the Internet are provided with the image of, say, just one simple measure and prompted for its manual transcription (this is rather similar to Google’s text reCAPTCHA).
3.2.2. MT1: Editor wants to transcribe printed sheet music to digital encoding format with help of OMR software
E does not enter the whole transcription from scratch but works in two steps. In step 1, OMR software reads the source image and provides an annotated transcription. In step 2, E reviews the OMR output and manually corrects the wrong or missing items in the final digital encoding. This approach is interesting only when the effort spent in step 2 is significantly lower than in MT0, and this depends on many factors, notably the initial image quality, music complexity, OMR recognition rate, and OMR interaction abilities.
OMR outputs need to provide hints for user review: confidence in each symbol, abnormal voice durations, detected incompatibilities, missing barlines, significant objects with no interpretation, etc., in order to call the user’s attention to these points. Visual data is key to easily relating any given music symbol to its precise location in the initial image.
E should use an "OMR UI" specifically meant for OMR validation and correction. As opposed to a standard music editing UI, such an OMR UI should focus on fidelity to the initial image, avoid any over-interpretation of user actions, and even switch off validation while a series of user actions is not yet explicitly completed.
Support: MNX for the encoding exclusive of any OMR processing. Styling permits E to perform wholesale changes in appearance in step 2 by changing stylesheet information.
3.2.3. MT2: Editor wants to transcribe sheet music to digital encoding format without manual intervention
This can be seen as a variant of MT1 without step 2 (review). However, it must be considered a use-case in its own right because, for large libraries with millions of pages, having human beings spend several minutes reviewing each page is out of reach. See the SIMSSA project regarding the use of (imperfect) OMR data as a hidden navigation layer on top of source image display.
A side advantage of bypassing human review is that it allows a transcription campaign to be re-launched at minor cost whenever significant progress is made in OMR technology. Such progress is helped by the openness and modular architecture of the OMR pipeline software.
Support: MNX for the encoding exclusive of any OMR processing.
3.2.4. MT3: Many editors help improve OMR service via collaborative OMR over the web.
This use-case extends MT1 when used over the web on a shareable OMR service: in this approach, each user reviewing action, whether it’s a validation or a correction, is linked back whenever possible to the underlying OMR item:
If we do have an identified OMR item, then it can be recorded as a representative sample (physical appearance and assigned shape). Samples are accumulated and later used to asynchronously improve the training of shape classifiers. A value commonly accepted in today’s deep learning projects is to have sets of at least 5000 samples per shape; such numbers would easily be reached with this collaborative approach.
If we don’t have a clearly identified OMR item, then the user could be prompted to select a case from a list of typical errors. In that way we could increment a tally of typical errors along with their concrete context. Later, an OMR developer could select one of the most common errors and have immediate access to all the related concrete examples.
Support: MNX for the encoding exclusive of any OMR processing.
3.3. Music Publishing
3.3.1. MP1: Composer wants to submit his music for publication
C is working on a complete composition that will subsequently be edited by ED to prepare for print publication. ED is not using the same notation application as C, but ED needs to begin with exactly what C produced in terms of semantic and visual details. Performance data in this case matters less.
This case benefits from standardizing the way a format is employed by composers and engravers, allowing publishers to accept machine-readable submissions and check them for conformance to some set of publication guidelines. In contrast, formats that permit alternative ways to express the same musical concept are harder to check.
Support: MNX, with ED using the styling and interpretation layers to add to the semantic layer, driven by default choices in ED’s house stylesheet.
3.3.2. MP2: Editor wants to annotate Composer’s work as part of publication workflow
C composes a piece and notates it by hand, submitting the manuscript to an editor ED. ED prepares a digital edition of the music and sends to reviewer R for proofreading as a digitally encoded file, along with a scanned copy of the manuscript as a reference. R edits the digital sheet music document to correct errors, using R’s own notation software (which is different from ED’s). If the corrections include additions or removals, document-level features like page numbers, headers, and footers adjust accordingly. The document is returned to ED for publishing, with no loss of semantic, visual or playback information.
3.3.3. MP3: Publisher wants to keep machine-readable representations of music in a central content management system
PU manages a large repository of works intended to be published in a variety of editions over time. PU must be able to rely on the durability of the digital encoding format used for the works and its independence from present-day proprietary technology. The format must be widely available as an export format from notation applications and also as an import format into other applications used by music consumers.
3.3.4. MP4: Publisher wants to prepare digital editions that can be viewed on any device and also printed
PU manages a large repository of works intended to be published in a wide variety of formats, ranging from print/PDF to interactive digital presentation on multiple devices, some of which may be unknown at present.
Different devices and presentation channels may want to format the music quite differently from the way the original composer or arranger viewed it. PU therefore wants to ensure that semantic information is reliably available for dynamic rendering of notation appropriate to the user’s device and presentation. Visual information for scores is available where relevant (e.g. print edition in same format as engraved). Playback information is also needed where relevant, for example to drive playback or assessment in music learning applications.
Support: MNX with PU’s stylesheets supplying styling, using media queries as needed to tune style to delivery channel.
3.3.5. MP5: Engraver wants detailed control over non-semantic formatting for printed output, while allowing for more flexible rendering on arbitrary devices e.g. mobile screens
An engraver EN is preparing a score for publication by PU. EN’s job is to make skilled human formatting decisions that apply to some set of valid contexts, while recognizing that these same decisions may be set aside in other contexts where dynamic rendering of the music may take a different course. For example, EN wants to preserve page and system break decisions for print editions, while recognizing that reflowing for mobile devices may result in different decisions.
Furthermore, EN sees that a single notational element (such as a page break) can sometimes be semantic. EN may want one system break to be imposed by the start of a coda, and another to be driven by readability of the score in a known, specific print format.
EN may also want detailed control over some non-semantic formatting elements for digital output as well as printed output (e.g. positions of accidentals and dots in a chord, beam angles, etc.). Furthermore, EN also wants some control over elements that can break due to reflow, and some control over where systems can break even when it’s non-semantic; e.g. breaking at a particular point is highly discouraged, but possible if that’s the only option.
Support: MNX with PU’s stylesheets supplying styling, using media queries as needed to tune style to delivery channel.
3.3.6. MP6: Publisher wants to decouple semantic formatting of notes and text from physical formatting
PU wishes to maintain a repository of scores that can be published in multiple styles. For instance, a show tune might be published in a Jazz font in a collection of lead sheets, but in a more standard music font as part of a song folio. At a more detailed level, PU may wish to use a particular set of text styles to distinguish tempo indications, expression and performance text in one edition, but a different set of styles in a different edition. PU might even wish to completely suppress certain elements in some editions.
PU would like to be able to create these renderings by combining any given score with a separate specification of how that score is to be styled. This would allow the semantic information in scores to be uncoupled from stylistic decisions that will vary from place to place, and also from time to time. As a long-time publisher PU is aware of the substantial aesthetic changes that have taken place in music notation over many years.
Support: MNX with various external stylesheets as needed.
3.3.7. MP7: Publisher wants to create multiple foreign-language editions of a work
PU wishes to simplify the creation and maintenance of a work in multiple languages, minimizing the amount of work necessary to independently revise either the original notated music or the various translations of directions, lyrics, etc.
3.3.8. MP8: Publisher wants to minimize difficulty of managing related material for a work in a unitary fashion
PU maintains multiple assets relating to the same musical work. For instance, one composition is associated with a digital music encoding, a PDF, an audio backing track, a sequence in a MIDI file, and more. PU would like to be able to cross-reference these items, potentially linking to related assets in some way from metadata in the music encoding file. Some of this metadata is unique to PU’s business and does not fit into any standardized metadata schema.
3.3.9. MP9: Publisher wants machine-readable identifiers for Composers, Works, in an asset
The publisher PU has traditionally identified the composer of one of their published works by human-readable text, which can differ by language (e.g. "Tchaikovsky", "Tschaikowski", "Чайко́вский"). The title of a work has also been identified by a string of human-readable text, which can differ by convention as well as language (e.g. "Sonata for Piano no. 6 in F major, op. 10 no. 2", "Opus 10 No. 2 Piano Sonata #6"). PU wants to disambiguate this, by tagging the digital score with identifiers to reliable authorities. This helps especially for automated workflows, distribution of electronic goods, and search engine visibility. Reliable, widely-used authorities include MusicBrainz, Wikidata, and Authority Control. Other projects like IMSLP and Wikipedia also link to these authorities, so it is not necessary to put identifiers from every project in the world into every score.
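A hypothetical sketch of such tagging follows. The field names are illustrative, not part of any defined schema; the Wikidata ID Q7315 is believed to identify Tchaikovsky, and the MusicBrainz UUID shown is an obvious placeholder, not a real identifier.

```python
# Sketch of work-level metadata carrying authority identifiers alongside
# language-dependent display strings (MP9). Schema is an illustrative assumption.
work_metadata = {
    "composer": {
        "display": {"en": "Tchaikovsky", "de": "Tschaikowski", "ru": "Чайковский"},
        "wikidata": "Q7315",  # the authority ID disambiguates the spellings
    },
    "work": {
        "display": "Sonata for Piano no. 6 in F major, op. 10 no. 2",
        "musicbrainz_work": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    },
}

def same_composer(a: dict, b: dict) -> bool:
    """Match on the authority ID, never on language-dependent display text."""
    return a["composer"]["wikidata"] == b["composer"]["wikidata"]

print(same_composer(work_metadata, work_metadata))  # True
```

The point of the sketch is the separation of concerns: display strings vary by language and convention, while the authority identifier stays stable for automated matching.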
3.3.10. MP10: Publisher wants machine-readable representation of intellectual property rights in an asset
A publisher PU1 wishes to avoid the cost and complexity of developing PU1’s own private IP rights infrastructure and would prefer to incorporate references to an IP authority of some kind. Nascent standards seek to capture IP rights; one example is ODRL.
Another publisher PU2 maintains their own private and confidential rights information such as royalty splits. PU2 would like to incorporate this information directly into the encoding, or refer to it via a pointer referencing PU2’s private database.
3.3.11. MP11: Publisher wants to be able to present user-visible notice of credits, copyrights in a way that may vary depending on device form factor
A publisher PU is legally required to include credits and copyrights within renderings of works in their catalog. PU must be able to reliably identify such information as semantically distinct from other arbitrary text in the document that has legal and business significance. The presentation of this information to end users can vary greatly. For example, PU has a mobile score-reading app that shows credits and copyrights in an overlay panel that is only displayed at the user’s request.
3.3.12. MP12: Publisher wants to protect their intellectual property.
PU wishes to encrypt musical scores in their catalog in such a way that readable copies are only distributed to users who have permission from the publisher or publisher’s representative, e.g. a distributor.
3.3.13. MP13: Publisher wants to automatically create a musical simplification of a full score
Publisher PU creates scores that are consumed by a mixture of music readers and non-readers. PU wishes to be able to create simplified versions of fully notated scores. One such example is the "chord sheet" which contains only the lyrics and chords associated with a song. Often the lyrics and chords are sufficient for non-readers to perform the work if they know the melody already but only need help with the words, or if they are accompanying on a chordal instrument.
3.3.14. MP14: Publisher wants to identify an excerpt or incipit of a work
PU publishes scores in collections whose tables of contents include short renderings of a readily identifiable fragment of each score, for easy reference by performers PF. The parts and range of material included in these excerpts need to be identifiable as part of the digital publication process.
3.3.15. MP15: Publisher wants to encode an organized collection of works in a single document
PU publishes a collection of related works by the same composer; these works are further broken into movements. PU wishes to treat all the elements of this collection within a single document under unitary management, and to apply house styles across the collection in a consistent way.
Support: compound MNX documents
3.3.16. MP16: Publisher wants to convert existing sheet music into a digital edition
PU has custody of a printed edition of some music, and no access to any digital encoding of the work. PU would like to work with the music in a notation application to create a digital document as the basis for ongoing editorial work and future editions. See notes on MC3 above regarding OMR technology.
3.4. Reading, Learning and Performance
3.4.1. RLP1: Performer wants to reformat sheet music to her mobile device’s display
Performer PF is reading a trumpet part on a mobile phone in a practice room, and wants to view the part in a way that is optimal given the small size of the display. The part is isolated from a full score and must be reformatted with system breaks completely different from the original and corresponding changes in notational spacing. These changes render any visual non-semantic formatting in the original document irrelevant (or even destructive) to PF’s experience.
Support: system and page flow
3.4.2. RLP2: Performer wants to reformat sheet music as per personal preference (e.g. size, page turns, font choice)
PF uses a tablet computer to view a piano composition for practice and performance. PF is visually impaired and wishes to view the score at twice the normal size. PF wishes the score to be reflowed accordingly, and further wants to choose the optimal location of page turns in the piece to suit their personal musical needs.
Support: system and page flow
3.4.3. RLP3: Performer wants to find a particular piece of music to play
PF wishes to search a corpus of digitally encoded music. The publisher of a searchable index PU wishes to easily index large numbers of such digital encodings and reliably extract the material to be indexed.
3.4.4. RLP4: Performer wants to view only his own musical part from a larger ensemble score
PF is performing a flute part that is dynamically rendered from a digital encoding of an orchestral score. PF’s part is not presented identically to the Flute staff of the full score: multirests are included to span silent passages, some key signatures and accidentals are spelled in a specific way, and repeat endings exist in the flute part where the full score does not itself repeat. System and page breaks are specific to the part.
Support: CWMN score structure
3.4.5. RLP5: Performer wants to transpose music to suit a specific performance situation
PF is performing a piece for Bb clarinet but only has PF’s A clarinet on hand. PF wants to be able to transpose the rendering of PF’s score by a semitone up, preserving notational decisions that can survive this transposition (such as whether a note carries an explicit accidental or not).
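One way such spelling-aware transposition can be sketched is by moving up one letter step and one semitone together (a minor second), so that the new spelling stays sensible rather than being derived from a bare MIDI number. The (step, alter, octave) pitch model here is an illustrative simplification:

```python
# Sketch of interval-aware transposition (RLP5): shifting written pitches up
# a minor second (one letter step, one semitone) so spelling survives.
STEPS = "CDEFGAB"
SEMIS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def up_minor_second(step: str, alter: int, octave: int):
    """Transpose one written pitch up a minor second, keeping a diatonic spelling."""
    i = STEPS.index(step)
    new_step = STEPS[(i + 1) % 7]
    new_octave = octave + (1 if new_step == "C" else 0)
    # Choose the alteration so the sounding pitch rises by exactly one semitone.
    target = 12 * octave + SEMIS[step] + alter + 1
    new_alter = target - (12 * new_octave + SEMIS[new_step])
    return new_step, new_alter, new_octave

print(up_minor_second("G", 0, 4))  # ('A', -1, 4), i.e. A-flat
print(up_minor_second("B", 0, 4))  # ('C', 0, 5)
```

A full implementation would additionally decide, for each note, whether the resulting alteration is shown as an explicit accidental or absorbed by the new key signature, which is exactly the notational decision this use case asks to preserve.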
3.4.6. RLP6: Performer wants to view her own musical part formatted more prominently as part of a larger ensemble score
PF is an alto singer in a chorus. PF receives a digital piano-vocal score, with 4 vocal parts (SATB). PF is accustomed to reading choral works with all parts visible. PF would like to view the score with the alto part rendered in full-size notes with a yellow highlight behind, with the other vocal parts rendered in smaller notes on the same system.
3.4.7. RLP7: Performer wants to hear a synthesized audio rendition of the music being displayed, synchronized with the notation
PF is practicing a bassoon part in an orchestral work. At times PF would like to hear a synthesized audio rendition of PF’s own part, for reference. At other times PF would like to play along with an audio rendition of all the other parts in the score. In both cases, PF expects the currently audible place in the score to be visually identified in some way. PF doesn’t have an expectation that these audio renditions will approach the richness and fidelity of a recording of an actual orchestra.
3.4.8. RLP8: Performer wants to hear a recorded audio track of the music being displayed, synchronized with the notational display
PF is practicing a bassoon part in an orchestral work. PF would like to hear a recorded audio track of a high quality performance of PF’s part and/or other parts in the work for reference. In both cases, PF expects the currently audible place in the score to be visually identified in some way.
The publisher PU for this piece has previously prepared an encoding of the piece along with an audio or video recording, which does not follow a uniform tempo. PU was able to create (automatically or manually) an encoding of the mapping between the musical form of the piece and regions of the audio recording, allowing the two to be synchronized. PF’s application is likewise able to employ this same mapping to maintain the correspondence.
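Such a score-to-audio mapping could be sketched as piecewise-linear interpolation between anchor points pairing score positions with recording times. The beat-based positions and the anchor values are illustrative assumptions:

```python
# Sketch of a score-to-audio synchronization map (RLP8): anchors pair score
# positions (in beats) with times (in seconds) of a recording whose tempo
# fluctuates; positions between anchors are interpolated linearly.
import bisect

anchors = [(0.0, 0.0), (4.0, 2.1), (8.0, 4.6), (12.0, 6.5)]  # illustrative values

def audio_time(beat: float) -> float:
    """Map a score position to a time in the recording."""
    beats = [b for b, _ in anchors]
    i = bisect.bisect_right(beats, beat) - 1
    i = max(0, min(i, len(anchors) - 2))        # clamp to the outermost segment
    (b0, t0), (b1, t1) = anchors[i], anchors[i + 1]
    return t0 + (beat - b0) * (t1 - t0) / (b1 - b0)

print(audio_time(6.0))  # midway between the 4.0 and 8.0 anchors
```

The same table, inverted, lets the application highlight the current score position while the recording plays, which is how PF’s application could maintain the correspondence in both directions.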
3.4.9. RLP9: Performer wants to hear a synthesized audio rendition of the music being displayed, where the musical form is not present in a standard representation
PF is playing a piece in which the order of playback of measures in the score is not completely determined by symbolic notation or common textual directions. The piece includes unusual repeat instructions (as text in the native language of the composer). Yet, the form was actually known to the composer and is accurately captured in the digital encoding of the piece alongside the textual directions, so that PF hears the playback order as the composer intended.
3.4.10. RLP10: Performer wants to listen at slower tempo than written to focus on a difficult portion of music
PF wishes to practice a piece at half tempo in some portions, to aid in learning a number of fast passages.
Support: MNX, interpretation
3.4.11. RLP11: Performer wants to have their performance of some music assessed with respect to the score as a reference
PF would like to practice a part of a musical score while listening to a metronome or backing track, using an application that not only displays the score but assesses the correctness and completeness of PF’s performance and provides feedback to PF. This application uses a digital encoding of the score as the basis for assessment, relying on encoded information in the score to determine a rubric for assessment.
3.4.12. RLP12: Performer wants to view a score with all form repeats and jumps “unrolled”
PF would like to practice a musical score on a digital display and avoid the inconvenience of sudden page turns due to repeats, codas, and so on. PF would like the score display to be modified to show a purely linear, monotonically left-to-right representation of the music by repeating measures of music in the original encoding of the score.
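Such an "unrolled" view can be produced by expanding the repeat structure into a linear playback order before layout. A toy sketch for simple two-pass repeats follows; the measure/repeat model is an assumption for illustration, not any particular encoding:

```python
def unroll(n_measures, repeats):
    """Return the playback order of measure indices, expanding simple repeats.
    repeats: list of (start, end) pairs; measures start..end are played twice.
    """
    order = []
    # Index each repeated span by its closing measure (hypothetical model).
    ends = {end: start for start, end in repeats}
    taken = set()  # repeats already honored, so we pass through the 2nd time
    i = 0
    while i < n_measures:
        order.append(i)
        if i in ends and i not in taken:
            taken.add(i)
            i = ends[i]  # jump back to the opening repeat barline
        else:
            i += 1
    return order
```

For example, a four-measure piece with a repeat around measures 1–2 unrolls to the sequence 0, 1, 2, 1, 2, 3; a real implementation would also handle voltas, codas, and textual jumps.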
3.4.13. RLP13: Performer wants to edit and annotate score during rehearsal for self, or for other performers
PF1 is the first French Horn player in an orchestra section. PF1 wants to edit and annotate PF1’s part as well as those of the other players PF2, PF3 in the section to add specific breath marks, phrasing, dynamics and articulation. Over the course of several rehearsals, these annotations are developed, and persist from session to session even as PF1 and PF1’s colleagues delete or change them. It is desirable to PF2 and PF3 that PF1’s additions be visually distinguishable from those in the original score.
3.4.14. RLP14: Performer wants to apply cuts, then share the modifications with fellow performers
PF1 as a conductor adapts a score for a particular performance by a particular set of musicians. It’s common in choral and opera performances to cut entire sections. PF1 discovers incorrect notes (missing accidental or wrong duration) and fixes them. PF1 asks one voice section to supplement another (e.g. Alto 2s or Bass 1s to sing parts of the Tenor part). The Performer making these adaptations wants to share them with the other Performers in this group for this performance, so they don’t have to dictate the changes verbally and have each Performer make the changes individually (and perhaps wrongly). The changes can be distributed either as a modified complete score or as a set of modifications to an original score.
3.4.15. RLP15: Performer wants to take advantage of annotations made by other musicians or conductor/section leader
PF1 as a conductor wants to collect individual annotations made by PF1 and also colleagues PF2, PF3 etc. on their parts in the course of rehearsing an ensemble score. PF1 wants to merge all of these annotations back into a single score that represents all of this individual work so it can be made available to others in the future.
3.4.16. RLP16: Performer wants to take advantage of adaptations and annotations by other Performers at other times
PF1 adapts a score. For example they may correct errors in a particular edition. They may apply typical cuts taken in performing an opera. PF1 posts those adaptations on a web site for others to use freely. PF2 downloads those changes and applies them to their own digital score, which may be based on a different edition with different page breaks. PF3 appreciates one of these sets of changes, but wants to publish a different, derived set of changes. They "fork" PF1’s score and start a different set of further changes, also posted on a web site for others to use freely. (Software developers, think "GitHub for musicians".)
3.4.17. RLP17: Performer wants to select from a set of alternative readings of a work where available to display a single score, suppressing display of the other readings
PF has purchased a work published by PU, where PU has incorporated alternative readings as ossias in the digital encoding of the score. PF would like to choose the reading(s) to be used and be able to view a score that incorporates the chosen readings while suppressing the others.
A variant use case might have the alternative readings supplied by other performers or parties besides PU.
3.4.18. RLP18: Performer wants to insert, change or delete a cue in their part
PF plays percussion in an orchestra, and is playing a complex piece with too few cues in the percussion part. PF wants to be able to pull musical excerpts from other parts at a given point in the score and include this material as a cue notation. PF does not want the cue to always be a verbatim reproduction of the original part from which it is derived.
3.4.19. RLP19: Performer wants to compare two different score editions of the same work, to confirm they are the same, or see differences
PF1 receives a score from PU and an adapted score from a fellow performer PF2. PF1 runs comparison software on the two documents. Comparison can include or ignore differences in layout (focussing only on unrolled note sequence), in key signature (focussing only on pitches no matter how notated), in per-staff and score-wide annotations, in performance annotations, etc. In any case, implementation details, like differences in insignificant whitespace in the music notation, are ignored. Comparison software returns a concise statement that the documents are identical, or a concise statement of the differences (e.g. showing the content of two measures added to all parts after a certain measure, or the contents of a new staff present throughout the piece in one document but not the other).
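One way such comparison software can ignore layout is to first normalize each document into a flat sequence of semantic note events and then diff those sequences. A sketch using Python’s standard difflib, over a deliberately toy score structure (the field names are assumptions, not part of any encoding standard):

```python
import difflib

def normalize(score):
    """Reduce a toy score encoding to its pitch/duration event sequence,
    discarding layout details such as system and page breaks."""
    return [(n["pitch"], n["duration"])
            for m in score["measures"] for n in m["notes"]]

def compare(a, b):
    """Return 'identical', or the non-equal opcode spans of the diff."""
    sa, sb = normalize(a), normalize(b)
    if sa == sb:
        return "identical"
    matcher = difflib.SequenceMatcher(a=sa, b=sb, autojunk=False)
    return [op for op in matcher.get_opcodes() if op[0] != "equal"]
```

The same normalize-then-diff shape extends to the other comparison modes above: normalizing to pitch classes ignores spelling, unrolling repeats first ignores form notation, and including annotation events makes annotation differences visible.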
Support: See §3.1.5 MC4: Editor wants to compare two versions of digital sheet music to confirm they are the same, or see differences
3.4.20. RLP20: Publisher wants to permit some interactivity in a performance app, but maintain completely faithful visual appearance from paper
PU is a publisher whose brand equity is associated with engraving quality. PU does not want any performance app to alter this formatting aside from scaling pages to the current device size.
Support: styling, noting the ability for media queries or different apps to change styling to suit context.
3.4.21. RLP21: Publisher wants to permit some adaptations and derived works, but make it evident when changes alter what the Publisher considers significant
PU is another publisher whose brand equity is associated with editorial accuracy and engraving quality, but who does not want to prevent derived works. PU wants to supply a signature of some kind, which will make it evident if any of the layers of content which the Publisher values are modified. For instance, a Publisher may consider engraving changes related to screen size and orientation acceptable, but want changes to note values and pitches to be evident.
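One plausible mechanism: the publisher computes a digest over only the layers it considers significant (here, pitches and durations) and signs that digest. Reformatting for a different screen leaves the digest unchanged; altering a note invalidates it. A sketch with hypothetical field names, using a plain hash in place of a real cryptographic signature:

```python
import hashlib
import json

def semantic_digest(score):
    """Digest over the note-content layer only; layout and styling layers
    are excluded, so presentation changes do not disturb it."""
    semantic = [[(n["pitch"], n["duration"]) for n in m["notes"]]
                for m in score["measures"]]
    blob = json.dumps(semantic, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()
```

A production scheme would sign the digest with the publisher’s private key and could compute one digest per layer, so a verifier can report exactly which valued layer was modified.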
3.4.22. RLP22: Performer wants to try out various versions of a mobile score so as to decide what to play.
C has written a score that is graphically mobile. The performer PF has to configure the score in order to decide what to play. For example: Stockhausen’s Refrain.
3.5. Academic Research / Libraries
3.5.1. ARL1: Musicologist wants to produce a digital edition of a historical work, tracing the evolution of the work and including alternate readings.
M is working on an edition of an early musical work with multiple sources that conflict. M wants to produce an encoding that includes these alternate readings along with textual commentary. Sometimes M favors a particular reading as primary and others as unlikely; in other cases, M feels that readings are of equal relevance to understanding the work and doesn’t want to favor one or the other.
3.5.2. ARL2: Musicologist wants to search for a particular theme or motive in a corpus of musical works
As part of M’s research, M is interested in tracing the usage of a secular medieval folk tune motif across various early choral works. M wishes to search an online corpus of digital encodings of these works to discover instances of this motif, and pull up search results that render relevant fragments from these works that display the motif with a small amount of surrounding context.
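A common technique for this kind of motif search is to compare interval sequences rather than absolute pitches, so that transposed statements of the tune still match. A minimal sketch over semitone pitch numbers (the flat melody representation is an assumption for illustration; a real corpus search would also handle rhythm and voice separation):

```python
def intervals(pitches):
    """Successive semitone steps: a transposition-invariant motif shape."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def find_motif(melody, motif):
    """Start positions (note indices) where the motif's interval shape
    occurs in the melody, in any transposition."""
    target = intervals(motif)
    shape = intervals(melody)
    k = len(target)
    return [i for i in range(len(shape) - k + 1) if shape[i:i + k] == target]
```

Each returned index locates a fragment that the search UI can render with a few surrounding notes as context.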
3.5.3. ARL3: Musicologist wants to search a corpus of machine readable works for occurrences of words, ornaments or other notational elements.
As part of M’s research, M is interested in tracing the usage of a secular medieval folk tune lyric across various early choral works. M wishes to search an online corpus of digital encodings of these works to discover instances of these words, and pull up search results that render relevant fragments from these works that display these words with a small amount of surrounding context.
3.5.4. ARL4: Musicologist wants to manage related works or fragments in a single document.
M is encoding an incomplete and fragmentary work from which many portions are missing. Nonetheless the work has demonstrable unity and M wants to encode these fragments in a single digital document, not in a set of separate disconnected documents. M would like each fragment to be identified and separate within the overall document.
Support: compound MNX documents
3.5.5. ARL5: Musicologist, within one document, wants to reference a specific element, range or position in same or a different document using some kind of anchor.
M is working on a critical digital edition of the Cantigas de Santa Maria (13th c.) drawing on four primary manuscript sources, each of which may incorporate hundreds of individual cantigas or songs. M desires to create an encoding of all four manuscripts, each of which is itself a collection of these songs. The same song can be present in multiple manuscripts within the overall collection. Where this occurs, M desires to encode links or pointers between different instances of the same song, and sometimes between parallel passages that differ in various details.
3.5.6. ARL6: Musicologist wants to distinguish editorial inferences from assured, explicit information in a manuscript.
M is preparing an encoding of a damaged manuscript. M can readily infer most of the missing or damaged notation from context, but as a matter of sound scholarship M wishes to identify in the encoding which elements M has supplied, and which are actually an encoding of the physical contents of the manuscript.
3.5.7. ARL7: Musicologist wants to represent original notation in a manuscript along with its conventional understanding in the musical period in which it was notated (e.g. musica ficta).
M is encoding an Elizabethan keyboard work that incorporates ornamental symbols, and which omits "understood" accidentals that would be explicitly notated in the modern notational idiom. M wants to supply additional information for modern readers. In the case of ornaments, M would like to include fully notated rhythms and pitches representing how players of the time would have performed them, and distinguish these as editorial interpretation. Multiple such interpretations may be possible and worth including as alternates. In the case of missing accidentals, M wants to add accidentals as musica ficta (which are by definition editorial in nature).
Support: interpretation (partial)
3.5.8. ARL8: Musicologist wants to attach the notion of uncertainty to readings.
M is transcribing some medieval songs into conventional Western notation. The mensuration of these songs is obscure and open to a great deal of interpretation. M desires to indicate which portions of the transcription are assured, and which are inferred. In some cases the pitch may be known and the rhythm uncertain; in other cases, the reverse.
3.5.9. ARL9: Musicologist wants to cross-reference readings or interpretations with their source material, in this document or elsewhere.
M is encoding a work from a manuscript in mensural notation. M is producing two encoded documents. Document A represents the original source material as it appeared in the manuscript as mensural notation elements, including many errors, alternative duplicated passages, and visual formatting details. Document B represents M’s cleaned-up semantic reduction of Document A into modern-day common notation. M wants to carefully cross-reference each part of document B to the corresponding material in document A.
As a similar use case, consider the case where A and B are two different sections of a single master document.
3.5.10. ARL10: Musicologist wants to import a document into an application that is not able to work with alternate readings, uncertainty, etc.
M wants to take a document that encodes much of the above material regarding readings and uncertainty and make use of it within a notation application that only deals with specific notation lacking these elements.
3.6.1. ED1: Student wants to complete a music theory assignment
Student S is harmonizing a 4-part chorale given a figured bass as a starting point. S will submit a digital encoding of the completed assignment to teacher T.
3.6.2. ED2: Instructor wants to assess completed music theory assignment and share markings/annotations with student
T is examining a submitted 4-part chorale harmonization by S. T makes use of some initial automated tools that identify and annotate typical voice-leading mistakes in S’s work. T also looks at a set of answer-key parts that are referenced by the assignment, but which were not viewable by the student, and makes further annotations of problems that were not caught by the automated tools. All of these annotations become part of the digital document that is then returned to the student for further work on the assignment.
3.6.3. ED3: Student wants to learn a song by playing and listening and selecting voices and instruments from the score, in order to hear his/her own voice.
S wants to change tempo, transpose the piece, and play a difficult part in a loop. S at times wants the piece to be played with a specific subset of parts, emphasizing some with increased volume. At other times S wants to hear the other instruments and deselect his/her own voice.
Support: MNX and interpretation
3.6.4. ED4: Student wants to learn an early Asian music notation (that reads top to bottom on the page)
S has a score that contains at least one temporal rendering. While the score is performing, a cursor moves synchronously over the page to show which symbol is being performed.
3.6.5. ED5: Instructor wants to use an interactive Schenker analysis diagram.
Clicking on buttons in the diagram could either perform the current level or move to a different one.
3.7. Accessibility
Note: we should consult current users of accessible music notation before continuing to the next step.
3.7.1. AC1: Users with disabilities wish to interact with a score via accessible input/output modalities.
3.7.2. AC2: Low-vision users wish to view an arbitrary score in Braille notation.
3.8. Development using Web, Epub and App Technologies
3.8.1. DEV1: Publishers wish to embed digital renderings of music within online hypertext documents
P is creating an online music theory curriculum incorporating numerous examples, to be consumed by students S studying traditional Western harmony. This curriculum is available via a web site. Pages on the site incorporate a mix of text, images, audio, video. Crucially, the curriculum pages also include examples by embedding short interactive musical scores that are viewable, printable and playable, in whole or in part. P must make use of standard Web technology to incorporate the musical materials, so that students do not have to install or launch special-purpose notation software.
3.8.2. DEV2: Publishers wish to embed digital renderings of music in offline electronically published works
P is creating an e-book of a music theory curriculum (see prior use case). Although the e-book can be viewed offline, it is constructed using standard Web technologies: HTML, CSS, JS and so on. It also incorporates text, images, audio, video and of course interactive musical scores. P doesn’t want to create completely separate versions of online and offline content, so it makes sense for them to create the content once using Web technology, and deliver it either in an online browser or in an offline reader.
3.8.3. DEV3: Developers want to render entire scores dynamically in applications for specific purposes
D is developing an application that generates randomized sight-reading examples at specific skill levels by creating music markup programmatically. A performer PF consumes these examples in a practice session. D would prefer to generate these examples within their application as an in-memory notational document that employs a standard digital encoding, and render them based on this data, without ever surfacing the data as a physical file.
3.8.4. DEV4: Developers want to render specific portions of scores dynamically in response to user actions
D is building an application that will generate human-readable accompaniments to a melody given a harmonic structure, to be used to assist a composer C or arranger A in filling in musical textures quickly. The melody and harmony symbols can be imported from an external score. The user can add new, empty parts and select specific portions of these parts to be filled in with an algorithmically generated accompaniment. The resulting score is a hybrid of imported music and internally generated music, and can be viewed within the application or ultimately exported in a standard encoding. D wishes to generate these portions of the score using a standard digital encoding based on in-memory objects, and render them based on this data.
3.8.5. DEV5: Developers want to display music and respond to gestures that indicate some element of that music
D is building a music theory quiz in which questions are displayed along with an interactive musical excerpt. A student S must indicate one of several elements in a score as a response to the question. The music can be displayed and reflowed in a device-dependent fashion, so it will not suffice to look for gestures that occur in some specific region of the screen or page.
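Because layout is device-dependent, gesture handling must resolve a tap to a stable element identifier via the current rendering’s bounding boxes, rather than via fixed page coordinates. A sketch of such a hit test follows; the layout structure is hypothetical (a web renderer would typically get the same effect from DOM event targets), but the point is that the boxes are regenerated on every reflow while the element ids stay stable:

```python
def hit_test(layout, x, y):
    """Map a tap coordinate to the id of the score element whose rendered
    bounding box contains it, or None if the tap misses all elements.
    layout: iterable of (element_id, (x0, y0, x1, y1)) pairs produced by
    the renderer for the current device-dependent reflow."""
    for elem_id, (x0, y0, x1, y1) in layout:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return elem_id
    return None
```

The quiz logic then compares the returned element id against the expected answer, independent of where that element happened to land on this device.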
Support: MNX with potential future DOM Event support
3.8.6. DEV6: User needs to see some element, range or position in notated music highlighted with an app-dependent meaning
D is building a music appreciation application for students S that includes various audio and notation excerpts. As the music plays, various relevant portions in the notation are highlighted. Links within accompanying text also serve to highlight the musical passages or concepts within the score to which they refer.
Support: MNX and styling
4. Cases Requiring Clarification
- Editor wants to discard performer annotations from digital sheet music
ED receives a digital music document which contains performance annotation.
- Composer wants to share sequencer project with orchestrator to ultimately produce notated music for human performers
C creates a sequencer project that captures C’s music in purely performative/gestural terms. For live performance, C desires to communicate some or all of this music to performers that will produce the corresponding parts on physical instruments by reading conventional music notation. It may be desirable to preserve this information wholly or in part across a conversion process that yields a notational model, but we might also consider approaches that link or correlate gestural documents with visual/semantic ones instead of trying to blend this information. (How is digital encoding of notation involved in this story? At what point does the orchestrator ever need to work with a digital encoding of the notation, as opposed to an encoding of the sequencer data?)
- Composer wants to generate sheet music in real-time for live performance
- Musicologist wants to prepare a modern-notation edition of a work using early notation, starting from machine-readable documents produced for academic research purposes.