Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
A large percentage of the data published on the Web is tabular data, commonly published as comma separated values (CSV) files. The CSV on the Web Working Group aims to specify technologies that provide greater interoperability for data-dependent applications on the Web when working with tabular datasets comprising single or multiple files using CSV, or a similar, format.
This document lists the use cases compiled by the Working Group that are considered representative of how tabular data is commonly used within data dependent applications. The use cases observe existing common practice undertaken when working with tabular data, often illustrating shortcomings or limitations of existing formats or technologies. This document also provides a set of requirements derived from these use cases that have been used to guide the specification design.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This is a draft document which may be merged into another document or eventually make its way into being a standalone Working Draft.
This document was published by the CSV on the Web Working Group as a First Public Working Draft. If you wish to make comments regarding this document, please send them to public-csv-wg@w3.org (subscribe, archives). All comments are welcome.
Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
A large percentage of the data published on the Web is tabular data, commonly published as comma separated values (CSV) files. CSV files may be of a significant size but they can be generated and manipulated easily, and there is a significant body of software available to handle them. Indeed, popular spreadsheet applications (Microsoft Excel, iWork’s Numbers, or OpenOffice.org) as well as numerous other applications can produce and consume these files. However, although these tools make conversion to CSV easy, it is resisted by some publishers because CSV is a much less rich format that cannot express important detail that the publishers want to convey, such as annotations, the meaning of identifier codes, etc.
Existing formats for tabular data are format-oriented and hard to process (e.g. Excel); un-extensible (e.g. CSV/TSV); or they assume the use of particular technologies (e.g. SQL dumps). None of these formats allow developers to pull in multiple data sets, manipulate, visualize and combine them in flexible ways. Other information relevant to these datasets, such as access rights and provenance, is not easy to find. CSV is a very useful and simple format, but to unlock the data and make it portable to environments other than the one in which it was created, there needs to be a means of encoding and associating relevant metadata.
To address these issues, the CSV on the Web Working Group seeks to provide:
In order to determine the scope of and elicit the requirements for this extended CSV format (CSV+), a set of use cases has been compiled. Each use case provides a narrative describing how a representative user works with tabular data to achieve their goal, supported, where possible, with example datasets. The use cases observe existing common practice undertaken when working with tabular data, often illustrating shortcomings or limitations of existing formats or technologies. It is anticipated that the additional metadata provided within the CSV+ format, when coupled with metadata-aware tools, will simplify how users work with tabular data. As a result, the use cases seek to identify where user effort may be reduced.
A set of requirements, used to guide the development of the CSV+ specification, has been derived from the compiled use cases.
...
(Contributed by Adam Retter)
The laws of England and Wales place obligations upon departments and The National Archives for the collection, disposal and preservation of records. Government departments are obliged, under sections 3, 4 and 5 of the Public Records Act 1958, to select, transfer, preserve and make available those records that have been defined as public records. These obligations apply to records in all formats and media, including paper and digital records. Details concerning the selection and transfer of records can be found here.
Departments transferring records to TNA must catalogue or list the selected records according to The National Archives' defined cataloguing principles and standards. Cataloguing is the process of writing a description, or Transcription of Records, for the records being transferred. Once each Transcription of Records is added to the Records Catalogue, records can subsequently be discovered and accessed using the supplied descriptions and titles.
TNA specifies what information should be provided within a Transcription of Records and how that information should be formatted. A number of formats and syntaxes are supported, including RDF. However, the predominant format used for the exchange of Transcriptions of Records is CSV, as the government departments providing the Records lack either the technology or the resources to provide metadata in the XML and RDF formats preferred by TNA.
A CSV-encoded Transcription of Records typically describes a set of Records, often organised within a hierarchy. As a result, it is necessary to describe the interrelationships between Records within a single CSV file.
Each row within a CSV file relates to a particular Record and is allocated a unique identifier. This unique identifier behaves as a primary key for the Record within the scope of the CSV file and is used when referencing that Record from within other Record transcriptions.
Requires: PrimaryKey and ForeignKeyReferences
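A minimal sketch of the kind of intra-file key checking these requirements imply is given below, assuming the Python csv module; the file name and the column names (id and parent_id) are illustrative and are not TNA's actual headings.

# Sketch: check primary key uniqueness and intra-file foreign key references
# in a single CSV file. The column names "id" and "parent_id" are assumptions.
import csv

def check_keys(path, key_column="id", ref_column="parent_id"):
    seen = set()
    refs = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = row[key_column]
            if key in seen:
                print("Duplicate primary key:", key)
            seen.add(key)
            if row.get(ref_column):
                refs.append(row[ref_column])
    # every reference must resolve to a primary key defined in the same file
    for ref in refs:
        if ref not in seen:
            print("Unresolved reference:", ref)

check_keys("transcription_of_records.csv")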
Upon receipt by TNA, each of the Transcriptions of Records is validated against the (set of) centrally published data definition(s); it is essential that received CSV metadata comply with these specifications to ensure efficient and error free ingest into the Records Catalogue.
The validation applied is dependent on the type of entity described in each row. The entity type is specified in a specific column (e.g. type).
The data definition file, or CSV Schema, used by the CSV Validation Tool effectively forms the basis of a formal contract between TNA and supplying organisations. For more information on the CSV Validation Tool and CSV Schema developed by TNA please refer to the online documentation.
The CSV Validation Tool is written in Scala version 2.10.
Requires: ExternalDataDefinitionResource, WellFormedCsvCheck and CsvValidation
Following validation, the CSV-encoded Transcriptions of Records are transformed into RDF for insertion into the triple store that underpins the Records Catalogue. The CSV is initially transformed into an interim XML format using XSLT and then processed further using a mix of XSLT, Java and Scala to create RDF/XML. The CSV files do not include all the information required to undertake the transformation, e.g. defining which RDF properties are to be used when creating triples for the data value in each cell. As a result, bespoke software has been created by TNA to supply the necessary additional information during the CSV to RDF transformation process. The availability of generic mechanisms to transform CSV to RDF would reduce the burden of effort within TNA when working with CSV files.
Requires: SyntacticTypeDefinition, SemanticTypeDefinition and CsvToRdfTransformation
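The generic transformation hinted at above could be driven by a mapping from CSV columns to RDF properties held separately from the data. The sketch below emits N-Triples under that assumption; the column names, property URIs and subject URI pattern are invented for illustration and are not TNA's actual vocabulary.

# Sketch: metadata-driven CSV to RDF (N-Triples) conversion. The mapping of
# columns to RDF properties is supplied separately from the CSV; all URIs and
# column names below are illustrative assumptions.
import csv

COLUMN_TO_PROPERTY = {
    "title":       "http://purl.org/dc/terms/title",
    "description": "http://purl.org/dc/terms/description",
}
SUBJECT_PATTERN = "http://example.org/record/{id}"

def csv_to_ntriples(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            subject = SUBJECT_PATTERN.format(id=row["id"])
            for column, prop in COLUMN_TO_PROPERTY.items():
                value = row.get(column, "").strip()
                if value:
                    # escape backslashes and quotes for N-Triples literals
                    literal = value.replace("\\", "\\\\").replace('"', '\\"')
                    yield '<%s> <%s> "%s" .' % (subject, prop, literal)

for triple in csv_to_ntriples("transcription_of_records.csv"):
    print(triple)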
Inclusion of CSV Schema examples (e.g. Data Definition Resource DDR) is anticipated.
(Contributed by Jeni Tennison)
The Office for National Statistics (ONS) is the UK’s largest independent producer of official statistics and is the recognised national statistical institute for the UK. It is responsible for collecting and publishing statistics related to the economy, population and society at national, regional and local levels.
Sets of statistics are typically grouped together into datasets comprising collections of related tabular data. Within their underlying information systems, ONS maintains a clear separation between the statistical data itself and the metadata required for interpretation. ONS classifies the metadata into two categories:
These datasets are published on-line in both CSV format and as Microsoft Excel Workbooks that have been manually assembled from the underlying data.
For example, dataset QS601EW Economic activity, derived from the 2011 Census, is available as a precompiled Microsoft Excel Workbook for several sets of administrative geographies, e.g. 2011 Census: QS601EW Economic activity, local authorities in England and Wales, and in CSV form via the ONS Data Explorer.
The ONS Data Explorer presents the user with a list of available datasets. A user may choose to browse through the entire list or filter that list by topic. To enable the user to determine whether or not a dataset meets their need, summary information is available for each dataset.
QS601EW Economic activity provides the following summary information:
Requires: AnnotationAndSupplementaryInfo
Once the required dataset has been selected, the user is prompted to choose how they would like the statistical data to be aggregated. In the case of QS601EW Economic activity, the user is required to choose between the two mutually exclusive geography types: 2011 Administrative Hierarchy and 2011 Westminster Parliamentary Constituency Hierarchy. Effectively, the QS601EW Economic activity dataset is partitioned into two separate tables for publication.
Requires: GroupingOfMultipleTables
The user is also provided with an option to sub-select only the elements of the dataset that they deem pertinent for their needs. In the case of QS601EW Economic activity the user may select data from up to 200 geographic areas within the dataset to create a data subset that meets their needs. The data subset may be viewed on-line (presented as an HTML table) or downloaded in CSV or Microsoft Excel formats.
Requires: CsvAsSubsetOfLargerDataset
An example extract of data for England and Wales in CSV form is provided below. The data subset is provided as a compressed file containing both a CSV-formatted data file and a complementary HTML file containing the reference metadata. White space has been added for clarity. File = CSV_QS601EW2011WARDH_151277.zip
"QS601EW" "Economic activity" "19/10/13" , , "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count", "Count" , , "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person", "Person" , , "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)", "Economic activity (T016A)" "Geographic ID","Geographic Area","Total: All categories: Economic activity","Total: Economically active: Total","Economically active: Employee: Part-time","Economically active: Employee: Full-time","Economically active: Self-employed with employees: Part-time","Economically active: Self-employed with employees: Full-time","Economically active: Self-employed without employees: Part-time","Economically active: Self-employed without employees: Full-time","Economically active: Unemployed","Economically active: Full-time student","Total: Economically inactive: Total","Economically inactive: Retired","Economically inactive: Student (including full-time students)","Economically inactive: Looking after home or family","Economically inactive: Long-term sick or disabled","Economically inactive: Other" "E92000001", "England", "38881374", "27183134", "5333268", "15016564", "148074", "715271", "990573", "1939714", "1702847", "1336823", "11698240", "5320691", "2255831", "1695134", "1574134", "852450" "W92000004", "Wales", "2245166", "1476735", "313022", "799348", "7564", "42107", "43250", "101108", "96689", "73647", "768431", "361501", "133880", "86396", "140760", "45894"
Key characteristics of the CSV file are:
Requires: MultipleHeadingRows, AnnotationAndSupplementaryInfo
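A metadata-aware consumer could fold the three leading metadata rows (measure, unit and classification) and the heading row into per-column descriptions, roughly as sketched below; the name of the CSV file extracted from the zip is assumed.

# Sketch: combine the three metadata rows of the ONS extract with the
# column-heading row to build per-column descriptions. File name assumed.
import csv

with open("QS601EW_2011WARDH.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)              # dataset title row: QS601EW, Economic activity, date
    measures = next(reader)   # "", "", "Count", "Count", ...
    units = next(reader)      # "", "", "Person", "Person", ...
    classes = next(reader)    # "", "", "Economic activity (T016A)", ...
    headings = next(reader)   # "Geographic ID", "Geographic Area", ...

    columns = [
        {"heading": h.strip(), "measure": m.strip(),
         "unit": u.strip(), "classification": c.strip()}
        for h, m, u, c in zip(headings, measures, units, classes)
    ]
    for row in reader:
        record = dict(zip((c["heading"] for c in columns),
                          (v.strip() for v in row)))
        print(record["Geographic Area"],
              record["Total: All categories: Economic activity"])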
Do we want to be able to assert within the CSV+ metadata that the "data" exists at a particular region within the CSV? When talking about multiple tables within a single CSV file, AndyS stated:
"Maybe put the location of the data table within a single CSV file into the associated metadata: a package description for a single file."
Correct interpretation of the statistics requires additional qualification or awareness of context. To achieve this, the complementary HTML file includes supplementary information and annotations pertinent to the data published in the accompanying CSV file. Annotation or references may be applied to:
Requires: AnnotationAndSupplementaryInfo
Furthermore, these statistical data sets make frequent use of predefined category codes and geographic regions. Dataset QS601EW Economic activity includes two examples:
- a category code, T016A, identifying the statistical measure type - in this case, whether a person aged 16 or over was in work or looking for work in the week before the census; and
- geographic area identifiers in the Geographic ID column (e.g. E92000001 for England and W92000004 for Wales).

At present there is no standardised mechanism to associate the category codes, provided as plain text, with their authoritative definitions.
Requires: AssociationOfCodeValuesWithExternalDefinitions
Finally, reuse of the statistical data is also inhibited by a lack of explicit definition of the meaning of column headings.
Requires: SemanticTypeDefinition
(Contributed by Jeremy Tandy)
Climate change and global warming have become one of the most pressing environmental concerns in society today. Crucial to predicting future change is an understanding of the world’s historical climate, with long-duration instrumental records of climate being central to that goal. Whilst there is an abundance of data recording the climate at locations the world over, the scrutiny under which climate science is put means that much of this data remains unused, leading to a paucity of data in some regions with which to verify our understanding of climate change.
The International Surface Temperature Initiative seeks to create a consolidated global land surface temperatures databank as an open and freely available resource to climate scientists.
To achieve this goal, climate datasets, known as “decks”, are gathered from participating organisations and merged into a combined dataset using a scientifically peer reviewed method which assesses the data records for inclusion against a variety of criteria.
Given the need for openness and transparency in creating the databank, it is essential that the provenance of the source data is clear. Original source data, particularly for records captured prior to the mid-twentieth century, may be in hard-copy form. In order to incorporate the widest possible scope of source data, the International Surface Temperature Initiative is supported by data rescue activities to digitise hard copy records.
The data is, where possible, published in the following four stages:
The Stage 1 data is typically provided in tabular form - the most common variant is white-space delimited ASCII files. Each data deck comprises multiple files which are packaged as a compressed tar ball (.tar.gz). Included within the compressed tar ball package, and provided alongside, is a read-me file providing unstructured supplementary information. Summary information is often embedded at the top of each file.
For example, see the Ugandan Stage 1 data deck and associated readme file.
The Ugandan Stage 1 data deck appears to comprise two discrete datasets, each partitioned into a sub-directory within the tar ball: uganda-raw and uganda-bestguess. Each sub-directory includes a Microsoft Word document providing supplementary information about the provenance of the dataset; of particular note is that uganda-raw is collated from 9 source datasets whilst uganda-bestguess provides what is considered by the data publisher to be the best set of values, with duplicate values discarded.
Requires: AnnotationAndSupplementaryInfo
Dataset uganda-raw is split into 96 discrete files, each providing maximum, minimum or mean monthly air temperature for one of the 32 weather observation stations (sites) included in the data set. Similarly, dataset uganda-bestguess is partitioned into discrete files; in this case just 3 files, each of which provides maximum, minimum or mean monthly air temperature data for all sites. The mapping from data file to data sub-set is described in the Microsoft Word document.
Requires: CsvAsSubsetOfLargerDataset
A snippet of the data indicating maximum monthly temperature for Entebbe, Uganda, from uganda-raw is provided below. File = 637050_ENTEBBE_tmx.txt
637050 ENTEBBE 5
ENTEBBE BEA 0.05 32.45 3761F
ENTEBBE GHCNv3G 0.05 32.45 1155M
ENTEBBE ColArchive 0.05 32.45 1155M
ENTEBBE GSOD 0.05 32.45 1155M
ENTEBBE NCARds512 0.05 32.755 1155M
Tmax
{snip}
1935.04 27.83 27.80 27.80 -999.00 -999.00
1935.12 25.72 25.70 25.70 -999.00 -999.00
1935.21 26.44 26.40 26.40 -999.00 -999.00
1935.29 25.72 25.70 25.70 -999.00 -999.00
1935.37 24.61 24.60 24.60 -999.00 -999.00
1935.46 24.33 24.30 24.30 -999.00 -999.00
1935.54 24.89 24.90 24.90 -999.00 -999.00
{snip}
The key characteristics are:
- values are whitespace delimited;
- missing values are encoded as -999.00; and
- the summary header indicates that the data is collated from five sources: BEA (British East Africa), GHCNv3G, ColArchive, GSOD and NCARds512.
A snippet of the data indicating maximum monthly temperature for all stations in Uganda from uganda-bestguess is provided below (truncated to 9 columns). File = ug_tmx_jrc_bg_v1.0.txt
ARUA BOMBO BUKALASA BUTIABA DWOLI ENTEBBE AIR FT PORTAL GONDOKORO […]
{snip}
1935.04 -99.00 -99.00 -99.00 -99.00 -99.00 27.83 -99.00 -99.00 […]
1935.12 -99.00 -99.00 -99.00 -99.00 -99.00 25.72 -99.00 -99.00 […]
1935.21 -99.00 -99.00 -99.00 -99.00 -99.00 26.44 -99.00 -99.00 […]
1935.29 -99.00 -99.00 -99.00 -99.00 -99.00 25.72 -99.00 -99.00 […]
1935.37 -99.00 -99.00 -99.00 -99.00 -99.00 24.61 -99.00 -99.00 […]
1935.46 -99.00 -99.00 -99.00 -99.00 -99.00 24.33 -99.00 -99.00 […]
1935.54 -99.00 -99.00 -99.00 -99.00 -99.00 24.89 -99.00 -99.00 […]
{snip}
Many of the characteristics concerning the “raw” file are exhibited here too. Additionally, we see that:
- values are delimited with the tab character (U+0009); and
- one must correlate the data file name (e.g. the tmx element) with supplementary information in the accompanying Microsoft Word document to determine the semantics.

At present, the global surface temperature databank comprises 25 Stage 1 data decks for monthly temperature observations. These are provided by numerous organisations in heterogeneous forms. In order to merge these data decks into a single combined dataset, each data deck has to be converted into a standard form. Columns consist of: station name, latitude, longitude, altitude, date, maximum monthly temperature, minimum monthly temperature and mean monthly temperature, plus additional provenance information.
An example Stage 2 data file is given for Entebbe, Uganda, below. File = uganda_000000000005_monthly_stage2
{snip}
ENTEBBE 0.0500 32.4500 1146.35 193501XX 2783 1711 2247 301/109/101/104/999/999/999/000/000/000/102
ENTEBBE 0.0500 32.4500 1146.35 193502XX 2572 1772 2172 301/109/101/104/999/999/999/000/000/000/102
ENTEBBE 0.0500 32.4500 1146.35 193503XX 2644 1889 2267 301/109/101/104/999/999/999/000/000/000/102
ENTEBBE 0.0500 32.4500 1146.35 193504XX 2572 1817 2194 301/109/101/104/999/999/999/000/000/000/102
ENTEBBE 0.0500 32.4500 1146.35 193505XX 2461 1722 2092 301/109/101/104/999/999/999/000/000/000/102
ENTEBBE 0.0500 32.4500 1146.35 193506XX 2433 1706 2069 301/109/101/104/999/999/999/000/000/000/102
ENTEBBE 0.0500 32.4500 1146.35 193507XX 2489 1628 2058 301/109/101/104/999/999/999/000/000/000/102
{snip}
Because of the heterogeneity of the Stage 1 data decks, bespoke data processing programs were required for each data deck consuming valuable effort and resource in simple data pre-processing. If the semantics, structure and other supplementary metadata pertinent to the Stage 1 data decks had been machine readable, then this data homogenisation stage could have been avoided altogether. Data provenance is crucial to this initiative, therefore it would be beneficial to be able to associate the supplementary metadata without needing to edit the original data files.
Requires: LinksToExternallyManagedDefinitions, SyntacticTypeDefinition, SemanticTypeDefinition, MissingValueDefinition, NonStandardFieldDelimiter and ZeroEditAdditionOfSupplementaryMetadata
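One way of meeting ZeroEditAdditionOfSupplementaryMetadata is a sidecar metadata file describing the delimiter, missing-value sentinel and column meanings while leaving the original data file untouched. The sketch below assumes such a sidecar in JSON; its name, keys and column names are illustrative only and do not represent an agreed CSV+ vocabulary.

# Sketch: supplementary metadata kept in a sidecar file (not in the data file
# itself) drives the parsing. The sidecar name, its keys and the column names
# are all assumptions.
import json
import pandas as pd

with open("637050_ENTEBBE_tmx.metadata.json") as f:   # hypothetical sidecar
    metadata = json.load(f)
# e.g. {"delimiter": "\\s+", "skipRows": 7, "missingValue": -999.0,
#       "columns": ["decimal_year", "v1", "v2", "v3", "v4", "v5"]}

df = pd.read_csv(
    "637050_ENTEBBE_tmx.txt",               # the original data file is untouched
    sep=metadata["delimiter"],
    skiprows=metadata["skipRows"],
    names=metadata["columns"],
    na_values=[metadata["missingValue"]],
)
print(df.head())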
The data pre-processing tools created to parse each Stage 1 data deck into the standard Stage 2 format and the merge process to create the consolidated Stage 3 data set were written using the software most familiar to the participating scientists: Fortran 95. The merge software source code is available online. It is worth noting that this sector of the scientific community also commonly uses IDL and is gradually adopting Python as the default software language choice.
The resulting merged dataset is published in several formats – including tabular text. The GHCN-format merged dataset comprises several files: merged data and withheld data (e.g. those data that did not meet the merge criteria), each with an associated “inventory” file.
A snippet of the inventory for merged data is provided below; each row describing one of the 31,427 sites in the dataset. File = merged.monthly.stage3.v1.0.0-beta4.inv
{snip}
REC41011874 0.0500 32.4500 1155.0 ENTEBBE_AIRPO
{snip}
The columns are: station identifier, latitude, longitude, altitude (m) and station name. The data is fixed format rather than delimited.
Similarly, a snippet of the merged data itself is provided. Given that the original .dat file is a largely unmanageable 422.6 MB in size, a subset is provided. File = merged.monthly.stage3.v1.0.0-beta4.snip
{snip}
REC410118741935TAVG 2245 2170 2265 2195 2090 2070 2059 2080 2145 2190 2225 2165
REC410118741935TMAX 2780 2570 2640 2570 2460 2430 2490 2520 2620 2630 2660 2590
REC410118741935TMIN 1710 1770 1890 1820 1720 1710 1629 1640 1670 1750 1790 1740
{snip}
The columns are: station identifier, year, quantity kind and the quantity values for months January to December in that year. Again, the data is fixed format rather than delimited.
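Because both the inventory and the merged data are fixed format rather than delimited, a parser needs to know the column positions. A hedged sketch using pandas read_fwf is shown below; the boundaries are inferred from the data here, whereas a production parser would use the positions published in the databank's format description.

# Sketch: parse the fixed-format inventory file, letting pandas infer the
# column boundaries from the data; the column names are taken from the text above.
import pandas as pd

inventory = pd.read_fwf(
    "merged.monthly.stage3.v1.0.0-beta4.inv",
    header=None,
    names=["station_id", "latitude", "longitude", "altitude_m", "station_name"],
)
print(inventory.head())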
Here we see the station identifier REC41011874 being used as a foreign key to refer to the observing station details; in this case Entebbe Airport. Once again, there is no metadata provided within the file to describe how to interpret each of the data values.
Requires: ForeignKeyReferences
The resulting merged dataset provides time series of how the observed climate has changed over a long duration at approximately 32000 locations around the globe. Such instrumental climate records provide a basis for climate research. However, it is well known that these climate records are usually affected by inhomogeneities (artificial shifts) due to changes in the measurement conditions (e.g. relocation, modification or recalibration of the instrument, etc.). As these artificial shifts often have the same magnitude as the climate signal, such as long-term variations, trends or cycles, a direct analysis of the raw time-series data can lead to wrong conclusions about climate change.
Statistical homogenisation procedures are used to detect and correct these artificial shifts. Once detected, the raw time-series data is annotated to indicate the presence of artificial shifts in the data, details of the homogenisation procedure undertaken and, where possible, the reasons for those shifts.
Requires: AnnotationAndSupplementaryInfo
Future iterations of the global land surface temperatures databank are anticipated to include quality controlled (Stage 4) and homogenised (Stage 5) datasets derived from the merged dataset (Stage 3) outlined above.
(Contributed by Jeni Tennison)
In line with the G8 open data charter Principle 4: Releasing data for improved governance, the UK Government publishes information about public sector roles and salaries.
The collection of this information is managed by the Cabinet Office and subsequently published via the UK Government data portal at DATA.GOV.UK.
In order to ensure a consistent return from submitting departments and agencies, the Cabinet Office mandated that each response conform to a data definition schema. Each submission comprises a pair of CSV files - one for senior roles and another for junior roles.
Requires: GroupingOfMultipleTables, WellFormedCsvCheck and CsvValidation
Is there a reference to the Cabinet Office's data definition schema for "Public Roles and Salaries"?
The submission for senior roles from the Higher Education Funding Council for England (HEFCE) is provided below to illustrate. White space has been added for clarity. File = HEFCE_organogram_senior_data_31032011.csv
Post Unique Reference, Name,Grade, Job Title, Job/Team Function, Parent Department, Organisation, Unit, Contact Phone, Contact E-mail,Reports to Senior Post,Salary Cost of Reports (£),FTE,Actual Pay Floor (£),Actual Pay Ceiling (£),,Profession,Notes,Valid?
90115, Steve Egan,SCS1A,Deputy Chief Executive, Finance and Corporate Resources,Department for Business Innovation and Skills,Higher Education Funding Council for England, Finance and Corporate Resources, 0117 931 7408, s.egan@hefce.ac.uk, 90334, 5883433, 1, 120000, 124999,, Finance, , 1
90250, David Sweeney,SCS1A, Director,"Research, Innovation and Skills",Department for Business Innovation and Skills,Higher Education Funding Council for England,"Research, Innovation and Skills", 0117 931 7304, d.sweeeney@hefce.ac.uk, 90334, 1207171, 1, 110000, 114999,, Policy, , 1
90284, Heather Fry,SCS1A, Director, Education and Participation,Department for Business Innovation and Skills,Higher Education Funding Council for England, Education and Participation, 0117 931 7280, h.fry@hefce.ac.uk, 90334, 1645195, 1, 100000, 104999,, Policy, , 1
90334,Sir Alan Langlands, SCS4, Chief Executive, Chief Executive,Department for Business Innovation and Skills,Higher Education Funding Council for England, HEFCE,0117 931 7300/7341,a.langlands@hefce.ac.uk, xx, 0, 1, 230000, 234999,, Policy, , 1
Similarly, a snippet of the junior role submission from HEFCE is provided. Again, white space has been added for clarity. File = HEFCE_organogram_junior_data_31032011.csv
Parent Department, Organisation, Unit,Reporting Senior Post,Grade,Payscale Minimum (£),Payscale Maximum (£),Generic Job Title,Number of Posts in FTE, Profession
Department for Business Innovation and Skills,Higher Education Funding Council for England, Education and Participation, 90284, 4, 17426, 20002, Administrator, 2,Operational Delivery
Department for Business Innovation and Skills,Higher Education Funding Council for England, Education and Participation, 90284, 5, 19546, 22478, Administrator, 1,Operational Delivery
Department for Business Innovation and Skills,Higher Education Funding Council for England,Finance and Corporate Resources, 90115, 4, 17426, 20002, Administrator, 8.67,Operational Delivery
Department for Business Innovation and Skills,Higher Education Funding Council for England,Finance and Corporate Resources, 90115, 5, 19546, 22478, Administrator, 0.5,Operational Delivery
{snip}
Key characteristics of the CSV files are:
Within the senior role CSV the field Post Unique Reference provides a primary key for the entity described within a given row.
Requires: PrimaryKey
This primary key is referenced both from within the senior post dataset, in the Reports to Senior Post field, and within the junior post dataset, in the Reporting Senior Post field, in order to determine the relationships within the organisational structure.
Requires: ForeignKeyReferences
For the most senior role in a given organisation, the Reports to Senior Post field is expressed as xx, denoting that this post does not report to anyone within the organisation.
Requires: MissingValueDefinition
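A consumer that knows about this convention can declare xx as a column-specific missing value when loading the file; a short sketch assuming pandas follows.

# Sketch: treat the sentinel "xx" in "Reports to Senior Post" as a missing value.
import pandas as pd

senior = pd.read_csv(
    "HEFCE_organogram_senior_data_31032011.csv",
    skipinitialspace=True,                         # tolerate padding after commas
    na_values={"Reports to Senior Post": ["xx"]},  # only this column uses "xx"
)
print(senior[["Post Unique Reference", "Reports to Senior Post"]])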
The public sector roles and salaries information is published at DATA.GOV.UK using an interactive "Organogram Viewer" widget implemented using JavaScript. The HEFCE data can be accessed here.
In order to create this visualisation, each pair of tabular datasets was transformed into RDF and uploaded into a triple store exposing a SPARQL end-point, which the interactive widget then queries to acquire the necessary data. An example of the derived RDF is provided in file HEFCE_organogram_31032011.rdf.
The transformation from CSV to RDF required bespoke software, supplementing the content in the CSV files with additional information such as the RDF properties for each column. The need to create and maintain bespoke software incurs costs that may be avoided through use of a generic CSV-to-RDF transformation mechanism.
Requires: CsvToRdfTransformation
(Contributed by Andy Seaborne)
The Land Registry is the government department with responsibility to register the ownership of land and property within England and Wales. Once land or property is entered to the Land Register any ownership changes, mortgages or leases affecting that land or property are recorded.
Their Price paid data, dating from 1995 and consisting of more than 18.5 million records, tracks the residential property sales in England and Wales that are lodged for registration. This dataset is one of the most reliable sources of house price information in England and Wales.
Residential property transaction details are extracted from a data warehouse system and collated into a tabular dataset for each month. The current monthly dataset is available online in both .txt and .csv formats. Snippets of data for January 2014 are provided below. White space has been added for clarity.
{C6428808-DC2A-4CE7-8576-0000303EF81B},137000,2013-12-13 00:00, "B67 5HE","T","N","F","130","", "WIGORN ROAD", "", "SMETHWICK", "SANDWELL", "WEST MIDLANDS","A"
{16748E59-A596-48A0-B034-00007533B0C1}, 99950,2014-01-03 00:00, "PE3 8QR","T","N","F", "11","", "RISBY","BRETTON","PETERBOROUGH","CITY OF PETERBOROUGH","CITY OF PETERBOROUGH","A"
{F10C5B50-92DD-4A69-B7F1-0000C3899733},355000,2013-12-19 00:00,"BH24 1SW","D","N","F", "55","","NORTH POULNER ROAD", "", "RINGWOOD", "NEW FOREST", "HAMPSHIRE","A"
{snip}
pp-monthly-update-new-version.csv
"{C6428808-DC2A-4CE7-8576-0000303EF81B}","137000","2013-12-13 00:00", "B67 5HE","T","N","F","130","", "WIGORN ROAD", "", "SMETHWICK", "SANDWELL", "WEST MIDLANDS","A" "{16748E59-A596-48A0-B034-00007533B0C1}", "99950","2014-01-03 00:00", "PE3 8QR","T","N","F", "11","", "RISBY","BRETTON","PETERBOROUGH","CITY OF PETERBOROUGH","CITY OF PETERBOROUGH","A" "{F10C5B50-92DD-4A69-B7F1-0000C3899733}","355000","2013-12-19 00:00","BH24 1SW","D","N","F", "55","","NORTH POULNER ROAD", "", "RINGWOOD", "NEW FOREST", "HAMPSHIRE","A" {snip}
There seems to be little difference between the two formats, with the exception that all fields within the .csv file are enclosed in double quotes ("").
The header row is absent. Information regarding the meaning of each column and the abbreviations used within the dataset are provided in a complementary FAQ document. The column headings are provided below along with some supplemental detail:
- Transaction unique identifier
- Price - sale price stated on the Transfer deed
- Date of Transfer - date when the sale was completed, as stated on the Transfer deed
- Postcode
- Property Type - D (detached), S (semi-detached), T (terraced), F (flats/maisonettes)
- Old/New - Y (newly built property) and N (established residential building)
- Duration - relates to tenure; F (freehold) and L (leasehold)
- PAON - Primary Addressable Object Name
- SAON - Secondary Addressable Object Name
- Street
- Locality
- Town/City
- Local Authority
- County
- Record status - indicates the status of the transaction; A (addition of a new transaction), C (correction of an existing transaction) and D (deleted transaction)

Requires: AnnotationAndSupplementaryInfo
Each row, or record, within the tabular dataset describes a property transaction. The Transaction unique identifier column provides the primary key for each record.
Requires: PrimaryKey
Each transaction record makes use of predefined category codes as outlined above; e.g. Duration may be F (freehold) or L (leasehold). Furthermore, geographic descriptors are commonly used. Whilst there is no attempt to link these descriptors to specific geographic identifiers, such a linkage is likely to provide additional utility when aggregating transaction data by location or region for further analysis. At present there is no standardised mechanism to associate the category codes, provided as plain text, or the geographic identifiers with their authoritative definitions.
Requires: AssociationOfCodeValuesWithExternalDefinitions
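In practice, consumers currently hard-code lookups from the category codes to fuller definitions (or to external URIs of their own choosing). The sketch below simply restates the codes listed above; any linkage to authoritative external definitions remains an assumption on the consumer's part.

# Sketch: expand the Land Registry category codes into readable labels.
# The codes and labels restate the column descriptions above; the record
# layout (15 fields, no header row) is taken from the snippets.
PROPERTY_TYPE = {"D": "detached", "S": "semi-detached", "T": "terraced", "F": "flats/maisonettes"}
OLD_NEW       = {"Y": "newly built property", "N": "established residential building"}
DURATION      = {"F": "freehold", "L": "leasehold"}
RECORD_STATUS = {"A": "addition", "C": "correction", "D": "deletion"}

def describe(row):
    """row is the list of 15 fields in one price paid record."""
    return {
        "transaction_id": row[0],
        "price": int(row[1]),
        "property_type": PROPERTY_TYPE.get(row[4], row[4]),
        "old_new": OLD_NEW.get(row[5], row[5]),
        "duration": DURATION.get(row[6], row[6]),
        "record_status": RECORD_STATUS.get(row[14], row[14]),
    }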
The collated monthly transaction dataset is used as the basis for updating the Land Registry's information systems; in this case the data is persisted as RDF triples within a triple store. A SPARQL end-point and accompanying data definitions are provided by the Land Registry allowing users to query the content of the triple store.
In order to update the triple store, the monthly transaction dataset is converted into RDF. The value of the Record status field for a given row informs the update process: add, update or delete. Bespoke software has been created by the Land Registry to transform the CSV into RDF. The transformation requires supplementary information not present in the CSV, such as the RDF properties for each column specified in the data definitions. The need to create and maintain bespoke software incurs costs that may be avoided through use of a generic CSV-to-RDF transformation mechanism.
Requires: CsvToRdfTransformation
The monthly transaction dataset contains on the order of 100,000 records; any transformation will need to scale accordingly.
In parallel to providing access via the SPARQL end-point, the Land Registry also provides aggregated sets of transaction data. Data is available as a single file containing all transactions since 1995, or partitioned by year. Given that the complete dataset is approaching 3GB in size, the annual partitions provide a far more manageable method to download the property transaction data. However, each annual partition is only a subset of the complete dataset. It is important to be able to both make assertions about the complete dataset (e.g. publication date, license etc.) and to be able to understand how an annual partition relates to the complete dataset and other partitions.
Requires: CsvAsSubsetOfLargerDataset
(Contributed by Alf Eaton)
When performing literature searches, researchers need to retain a persistent collection of journal articles of interest in a local database compiled from on-line publication websites. In this use case a researcher wants to maintain a local personal journal article publication database based on search results from the Public Library of Science (PLOS). PLOS One is a nonprofit open access scientific publishing project aimed at creating a library of open access journals and other scientific literature under an open content license.
In general this use case also illustrates the utility of CSV as a convenient exchange format for pushing tabular data between software components:
The PLOS website features a Solr index search engine which can return query results in JSON (Live Search, JSON screen dump), XML (Live Search, XML screen dump) or, in a more concise format, CSV (Live Search, CSV screen dump):
id,publication_date,title_display,author
10.1371/journal.pone.0087584,2014-02-14T00:00:00Z,Healthcare Worker Perceived Barriers to Early Initiation of Antiretroviral and Tuberculosis Therapy among Tanzanian Inpatients,"Bahati M K Wajanga,Robert N Peck,Samuel Kalluvya,Daniel W Fitzgerald,Luke R Smart,Jennifer A Downs"
10.1371/journal.pone.0087286,2014-02-14T00:00:00Z,"Subditine, a New Monoterpenoid Indole Alkaloid from Bark of <i>Nauclea subdita</i> (Korth.) Steud. Induces Apoptosis in Human Prostate Cancer Cells","Sook Yee Liew,Chung Yeng Looi,Mohammadjavad Paydar,Foo Kit Cheah,Kok Hoong Leong,Won Fen Wong,Mohd Rais Mustafa,Marc Litaudon,Khalijah Awang"
10.1371/journal.pone.0087630,2014-02-14T00:00:00Z,Spatiotemporal Characterizations of Dengue Virus in Mainland China: Insights into the Whole Genome from 1978 to 2011,"Hao Zhang,Yanru Zhang,Rifat Hamoudi,Guiyun Yan,Xiaoguang Chen,Yuanping Zhou"
10.1371/journal.pone.0086587,2014-02-14T00:00:00Z,<i>optGpSampler</i>: An Improved Tool for Uniformly Sampling the Solution-Space of Genome-Scale Metabolic Networks,"Wout Megchelenbrink,Martijn Huynen,Elena Marchiori"
10.1371/journal.pone.0082694,2014-02-14T00:00:00Z,Prophylactic Antibiotics to Prevent Cellulitis of the Leg: Economic Analysis of the PATCH I & II Trials,"James M Mason,Kim S Thomas,Angela M Crook,Katharine A Foster,Joanne R Chalmers,Andrew J Nunn,Hywel C Williams"
To be useful to a user maintaining a local database, PLOS One search results need to be returned in an organized and consistent tabular format. This includes:
Lastly, because the researcher may use different search criteria, the header row plays an important role later when the researcher wants to combine multiple literature searches into their database. The researcher will use the header column names returned in the first row as a way to identify each column type.
Requires: WellFormedCsvCheck and CsvValidation
Search results returned in a tabular format can contain cell values that are organized in data structures, also known as microformats. In the example above, the publication_date and author list represent two microformats expressed in a recognizable pattern that can be parsed by software or by a human reader. In the case of the author column, the microformat provides the advantage of being able to store a single author's name or multiple authors' names separated by a comma delimiter. Because each author cell value is surrounded by quotes, a parser can choose to ignore the data structure or address it.
Requires: CellValueMicroSyntax
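A standard CSV parser deals with the quoting; splitting the author cell into individual names is then a second, microformat-aware step, as the hedged sketch below shows using the first record from the response above.

# Sketch: parse the PLOS result set, then split the comma-delimited author
# microformat inside the quoted author cell into a list of names.
import csv
import io

sample = (
    "id,publication_date,title_display,author\n"
    "10.1371/journal.pone.0087584,2014-02-14T00:00:00Z,"
    "Healthcare Worker Perceived Barriers to Early Initiation of Antiretroviral "
    "and Tuberculosis Therapy among Tanzanian Inpatients,"
    '"Bahati M K Wajanga,Robert N Peck,Samuel Kalluvya,Daniel W Fitzgerald,'
    'Luke R Smart,Jennifer A Downs"\n'
)

for row in csv.DictReader(io.StringIO(sample)):
    # the author cell is one quoted field; the comma-delimited names inside it
    # are a microformat that needs a second parsing step
    authors = [name.strip() for name in row["author"].split(",")]
    print(row["id"], row["publication_date"], authors)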
Is this information in scope? How can it be incorporated better into the use case? The following additional observations were made while interacting with the search engine:
(Contributed by Davide Ceolin)
Several Web sources expose datasets about UK crime statistics. These datasets vary in format (e.g. maps vs. CSV files), timeliness, aggregation level, etc. Before being published on the Web, these data are processed to preserve the privacy of the people involved, but again the processing policy varies from source to source.
Every month, the UK Police Home Office publishes (via data.police.uk) CSV files that report crime counts, aggregated on a geographical basis (per address or police neighbourhood) and by crime type. Before publishing, data are smoothed, that is, grouped in predefined areas and assigned to the mid point of each area. Each area has to contain a minimum number of physical addresses. The goal of this procedure is to prevent the reconstruction of the identity of the people involved in the crimes.
Over time, the policies adopted for preprocessing these data have changed, but data previously published have not been recomputed. Therefore, datasets about different months present relevant differences in terms of crime types reported and geographical aggregation (e.g. initially, each geographical area for aggregation had to include at least 12 physical addresses. Later, this limit was lowered to 8).
These policies introduce a controlled error in the data for privacy reasons, but the changes in the policies mean that different datasets adhere to the real data to differing degrees, i.e. they present different reliability levels. Previous work provided two procedures for measuring and comparing the reliability of the datasets, but in order to automate and improve these procedures, it is crucial to understand the meaning of the columns, the relationships between columns, and how the data rows have been computed.
For instance, here is a snippet from a dataset about crimes that happened in Hampshire in April 2011:
Month Force Neighbourhood Burglary Robbery Vehicle crime Violent crime Anti-social behaviour Other crime
{snip}
2011-04 Hampshire Constabulary 2LE11 2 0 1 6 14 6
2011-04 Hampshire Constabulary 2LE10 1 0 2 4 15 6
2011-04 Hampshire Constabulary 2LE12 3 0 0 4 25 21
{snip}
That dataset reports 248 entries. In the dataset for October 2012, the number of crime types reported has increased to 11:
Month Force Neighbourhood Burglary Robbery Vehicle crime Violent crime ASB CDA Shoplifting OT Drugs PDW OC
2012-10 Hampshire Constabulary 2LE11 1 0 1 2 8 0 0 1 1 0 1
2012-10 Hampshire Constabulary 1SY01 9 1 12 8 87 17 12 14 13 7 4
2012-10 Hampshire Constabulary 1SY02 11 0 11 20 144 39 2 12 9 8 5
This dataset reports 232 entries.
In order to properly handle the columns, it is crucial to understand the type of the data contained therein. Given the context, knowing this information would reveal an important part of the column meaning (e.g. to identify dates).
Requires: SyntacticTypeDefinition
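For example, if the Month column were declared to be a year-month date, a consumer could parse it directly instead of guessing; a small sketch assuming pandas follows (the file name is illustrative).

# Sketch: apply a declared syntactic type ("Month" holds year-month dates)
# when loading one of the police.uk extracts. The file name is an assumption.
import pandas as pd

crimes = pd.read_csv("hampshire-2011-04.csv")
crimes["Month"] = pd.to_datetime(crimes["Month"], format="%Y-%m")
print(crimes.dtypes)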
Also, it is important to understand the precise semantics of each column. This is relevant for two reasons. First, to identify relations between columns (e.g. some crime types are siblings, while others are less semantically related). Second, to identify semantic relations between columns in heterogeneous datasets (e.g. a column in one dataset may correspond to the sum of two or more columns in others).
Requires: SemanticTypeDefinition
Lastly, datasets with different numbers of rows are the result of different smoothing procedures. Therefore, it would be important to trace and access their provenance, in order to facilitate their comparison.
Requires: AnnotationAndSupplementaryInfo
(Contributed by Alf Eaton, Davide Ceolin, Martine de Vos)
A paper published in Nature Immunology in December 2012 compared changes in expression of a range of genes in response to treatment with two different cytokines. The results were published in the paper as graphic figures, and the raw data was presented in the form of two supplementary spreadsheets, as Excel files.
Having both the paper and the results at their disposal, a scientist may wish to reproduce the experiment, check whether the results they obtain coincide with those published, and compare those results with others provided by different studies on the same issues.
Because of the size of the datasets and the complexity of the computations, it could be necessary to perform such analyses and comparisons by means of purpose-built software, typically an R, Python or Matlab script. Such software would require as input the data contained in the Excel file. However, it would be difficult to write a parser to extract the information, for the reasons described below.
To clarify the issues related to parsing and analysing the spreadsheet, we first present an example extracted from it.
Supplementary Table 2. Genes more potently regulated by IL-15
gene_name symbol RPKM Fold Change
4 hour 24 hour 4 hour ...
Cont IL2_1nM IL2_500nM IL15_1nM IL15_500nM IL2_1nM IL2_500nM IL15_1nM IL15_500nM IL2_1nM ...
{snip}
NM_001033122 Cd69 15,67 46,63 216,01 30,71 445,58 9,21 77,32 4,56 77,21 ...
NM_026618 Ccdc56 9,07 12,55 9,25 5,88 14,33 20,08 20,91 11,97 22,69 ...
NM_008637 Nudt1 9,31 7,51 8,60 11,21 6,84 15,85 25,14 7,56 22,77 ...
NM_008638 Mthfd2 58,67 33,99 245,87 44,66 167,87 55,62 204,50 24,52 176,51 ...
NM_178185 Hist1h2ao 7,13 16,52 7,82 7,79 16,99 75,04 290,72 21,99 164,93 ...
{snip}
As we can see from the example, the table contains several columns of data that are measurements of gene expression in cells after treatment with two concentrations of two cytokines, measured after two periods of time, presented as both actual values and fold change. This can be represented in a table, but needs 3 levels of headings and several merged cells. In fact, the first row is the title of the table, the second to fourth rows are the table headers, and the first two columns are also headers. It would be useful to be able to describe this in a way that a parser could understand. Cell borders are also used to communicate information, though this table would still be understandable without them.
Requires: MultipleHeadingRows and HeadingColumns
The first column contains a GenBank identifier for each gene, with the column name "gene_name". The GenBank identifier provides a local identifier for each gene. This local identifier, e.g. “NM_008638”, can be converted to a fully qualified URI by adding a URI prefix, e.g. “http://www.ncbi.nlm.nih.gov/nuccore/NM_008638”, allowing the gene to be uniquely and unambiguously identified.
The second column contains the standard symbol for each gene, labelled as "symbol". These appear to be HUGO gene nomenclature symbols, but as there's no mapping it's hard to be sure which namespace these symbols are from.
Requires: URIMapping
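Constructing the fully qualified URI for the first column is then a simple prefix concatenation, as sketched below.

# Sketch: map the local GenBank identifier in the "gene_name" column to a URI.
GENBANK_PREFIX = "http://www.ncbi.nlm.nih.gov/nuccore/"

def genbank_uri(local_id):
    return GENBANK_PREFIX + local_id

print(genbank_uri("NM_008638"))
# -> http://www.ncbi.nlm.nih.gov/nuccore/NM_008638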
As this spreadsheet was published as supplemental data for a journal article, there is little description of what the columns represent, even as text. There is a column labelled as "Cont", which has no description anywhere, but is presumably the background level of expression for each gene.
Requires: SyntacticTypeDefinition
Requires: SemanticTypeDefinition
Half of the cells represent measurements, but the details of what those measurements are can only be found in the article text. The other half of the cells represent the change in expression over the background level. It is difficult to tell the difference without annotation that describes the relationship between the cells (or understanding of the nested headings). In this particular spreadsheet, only the values are published, and not the formulae that were used to calculate the derived values. The units of each cell are "expression levels relative to the expression level of a constant gene, Rpl7", described in the text of the methods section of the full article.
Requires: UnitMeasureDefinition
The heading rows contain details of the treatment that each cell received, e.g. "4 hour, IL2_1nM". It would be useful to be able to make this machine readable (i.e. to represent treatment with 1nM IL-2 for 4 hours).
All the details of the experiment (which cells were used, how they were treated, when they were measured) are described in the methods section of the article. To be able to compare data between multiple experiments, a parser would also need to be able to understand all these parameters that may have affected the outcome of the experiment.
Requires: AnnotationAndSupplementaryInfo
(Contributed by Mathew Thomas)
Chemical imaging experimental work makes use of CSV formats to record its measurements. In this use case two examples are shown: scans from a mass spectrometer, and the corresponding corrected FTIR files, both of which are saved into a CSV format automatically.
Mass Spectrometric Imaging (MSI) allows the generation of 2D ion density maps that help visualize molecules present in sections of tissues and cells. The combination of spatial resolution and mass resolution results in very large and complex data sets. The following is generated using the software Decon Tools, a tool to de-isotope MS spectra and to detect features from MS data using isotopic signatures of expected compounds, available freely at omins.pnnl.gov. The raw files generated by the mass spec instrument are read in and the processed output files are saved as csv files for each line.
Fourier transform infrared (FTIR) spectroscopy is a measurement technique whereby spectra are collected based on measurements of the coherence of a radiative source, using time-domain or space-domain measurements of the electromagnetic radiation or other type of radiation.
In general this use case also illustrates the utility of CSV as a means for scientists to collect and process their experimental results:
The key characteristics are:
Requires: WellFormedCsvCheck, CsvValidation, PrimaryKey and UnitMeasureDefinition
Lastly, for Mass Spectrometry multiple CSV files need to be examined to view the sample image in its entirety.
Requires: CsvAsSubsetOfLargerDataset
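A hedged sketch of reassembling the per-line CSV files into a single table for the whole sample is shown below; the file-naming pattern is an assumption.

# Sketch: combine the per-line mass spectrometry CSV files into one table.
# The glob pattern is an assumption about how the per-line files are named.
import glob
import pandas as pd

line_files = sorted(glob.glob("sample_line_*.csv"))
frames = [pd.read_csv(path).assign(line_file=path) for path in line_files]
image = pd.concat(frames, ignore_index=True)
print(image.shape)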
Below are Mass Spectrometry instrument measurements (3 of 316 CSV rows) for a single line on a sample. It gives the mass-to-charge ranges, peak values, acquisition times and total ion current.
scan_num,scan_time,type,bpi,bpi_mz,tic,num_peaks,num_deisotoped,info
1,0,1,4.45E+07,576.27308,1.06E+09,132,0,FTMS + p NSI Full ms [100.00-2000.00]
2,0.075,1,1.26E+08,576.27306,2.32E+09,86,0,FTMS + p NSI Full ms [100.00-2000.00]
3,0.1475,1,9.53E+07,576.27328,1.66E+09,102,0,FTMS + p NSI Full ms [100.00-2000.00]
Below is an example of FTIR data. The files from the instrument are baseline corrected, normalized and saved as CSV files automatically. Column 1 represents the wavelength number or range, and the remaining columns represent different formations such as bound EPS (extracellular polymeric substance), loose EPS, Shewanella, etc. An example (5 of 3161 rows) is shown below:
,wt beps,wt laeps,so16533 beps,so167333 laeps,so31 beps,so313375 lAPS,so3176345 bEPS,so313376 laEPS,so3193331 bEPS,so3191444 laeps,so3195553beps,so31933333 laeps
1999.82,-0.0681585,-0.04114415,-0.001671781,0.000589855,0.027188073,0.018877371,-0.066532177,-0.016899697,-0.077690018,0.001594551,-0.086573831,-0.08155035
1998.855,-0.0678255,-0.0409804,-0.001622611,0.000552989,0.027188073,0.01890847,-0.066132737,-0.016857071,-0.077346835,0.001733207,-0.086115107,-0.081042424
1997.89,-0.067603,-0.0410459,-0.001647196,0.000423958,0.027238845,0.018955119,-0.065904461,-0.016750515,-0.077101756,0.001733207,-0.085656382,-0.080590934
1996.925,-0.0673255,-0.04114415,-0.001647196,0.000258061,0.027289616,0.018970669,-0.065790412,-0.01664396,-0.076856677,0.001629215,-0.085281062,-0.080365189
(Contributed by Stasinos Konstantopoulos)
The OpenSpending and Budgit platforms provide plenty of useful datasets giving figures for the national budget and spending of several countries. A journalist wishing to investigate public spending can use these data as a basis for their research, and possibly compare them against different sources. Similarly, a politician who is interested in developing new policies for development can, for instance, combine these data with those from the World Bank to identify correlations and, possibly, dependencies to leverage.
Nevertheless, such uses of these datasets are potentially undermined by the following obstacles.
In order to be able to compare and combine these data with those provided by other sources, such as the World Bank, in an automatic manner, it would be necessary to make the currency of each column explicit.
Requires: UnitMeasureDefinition
Requires: LinksToExternallyManagedDefinitions, AnnotationAndSupplementaryInfo
Requires: MissingValueDefinition
The datahub.io platform, which collects both OpenSpending and Budgit data, allows publishing data in Simple Data Format (SDF), RDF and other formats providing explicit semantics. Nevertheless, the datasets mentioned above rely either on implicit semantics and/or on additional metadata files provided only as attachments.
(Contributed by Eric Stephan)
The City of Palo Alto, California Urban Forest Section is responsible for maintaining and tracking the city's public trees and urban forest. In a W3C Data on the Web Best Practices (DWBP) use case discussion with Jonathan Reichental, City of Palo Alto CIO, he brought to the working group's attention a Tree Inventory maintained by the city in spreadsheet form using Google Fusion. This use case illustrates the use of tabular data to represent the geophysical locations of trees, also provided in Google Map form where the user can point and click on trees to look up the row of information about each tree.
Example output from the first three rows of the CSV file:
GID,Private,Tree ID,Admin Area,Side of Street,On Street,From Street,To Street,Street_Name,Situs Number,Address Estimated,Lot Side,Serial Number,Tree Site,Species,Trim Cycle,Diameter at Breast Ht,Trunk Count,Height Code,Canopy Width,Trunk Condition,Structure Condition,Crown Condition,Pest Condition,Condition Calced,Condition Rating,Vigor,Cable Presence,Stake Presence,Grow Space,Utility Presence,Distance from Property,Inventory Date,Staff Name,Comments,Zip,City Name,Longitude,Latitude,Protected,Designated,Heritage,Appraised Value,Hardscape,Identifier,Location Feature ID,Install Date,Feature Name,KML,FusionMarkerIcon
1,True,29,,,ADDISON AV,EMERSON ST,RAMONA ST,ADDISON AV,203,,Front,,2,Celtis australis,Large Tree Routine Prune,11,1,25-30,15-30,,Good,5,,,Good,2,False,False,Planting Strip,,44,10/18/2010,BK,,,Palo Alto,-122.1565172,37.4409561,False,False,False,,None,40,13872,,"Tree: 29 site 2 at 203 ADDISON AV, on ADDISON AV 44 from pl","<Point><coordinates>-122.156485,37.440963</coordinates></Point>",small_green
2,True,30,,,EMERSON ST,CHANNING AV,ADDISON AV,ADDISON AV,203,,Left,,1,Liquidambar styraciflua,Large Tree Routine Prune,11,1,50-55,15-30,Good,Good,5,,,Good,2,False,False,Planting Strip,,21,6/2/2010,BK,,,Palo Alto,-122.1567812,37.440951,False,False,False,,None,41,13872,,"Tree: 30 site 1 at 203 ADDISON AV, on EMERSON ST 21 from pl","<Point><coordinates>-122.156749,37.440958</coordinates></Point>",small_green
3,True,31,,,EMERSON ST,CHANNING AV,ADDISON AV,ADDISON AV,203,,Left,,2,Liquidambar styraciflua,Large Tree Routine Prune,11,1,40-45,15-30,Good,Good,5,,,Good,2,False,False,Planting Strip,,54,6/2/2010,BK,,,Palo Alto,-122.1566921,37.4408948,False,False,False,,Low,42,13872,,"Tree: 31 site 2 at 203 ADDISON AV, on EMERSON ST 54 from pl","<Point><coordinates>-122.156659,37.440902</coordinates></Point>",small_green
Google Fusion allows a user to download the tree data either from a filtered view or as the entire spreadsheet. The exported spreadsheet is in an organized and consistent tabular format. This includes:
Requires: WellFormedCsvCheck, CsvValidation, MissingValueDefinition, UnitMeasureDefinition and CellValueMicroSyntax
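The KML column is itself a cell-value micro-syntax: a small XML fragment carrying the point coordinates. A sketch of extracting them from the cell value shown above follows.

# Sketch: pull the longitude/latitude pair out of the KML cell micro-syntax.
import xml.etree.ElementTree as ET

kml_cell = "<Point><coordinates>-122.156485,37.440963</coordinates></Point>"
point = ET.fromstring(kml_cell)
lon, lat = (float(v) for v in point.findtext("coordinates").split(","))
print(lon, lat)   # -122.156485 37.440963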
(Contributed by Eric Stephan)
The purpose of this use case is to illustrate how 3-D molecular structures, such as the Protein Data Bank and XYZ formats, are conveyed in tabular formats. These files may be archived for use in informatics analysis or as part of an input deck for experimental simulation. Scientific communities rely heavily on tabular formats such as these to conduct their research and share each other's results in platform-independent formats.
The Protein Data Bank (pdb) file format is a tabular file describing the three dimensional structures of molecules held in the Protein Data Bank. The pdb format accordingly provides for description and annotation of protein and nucleic acid structures including atomic coordinates, observed sidechain rotamers, secondary structure assignments, as well as atomic connectivity.
The XYZ file format is a chemical file format. There is no formal standard and several variations exist, but a typical XYZ format specifies the molecule geometry by giving the number of atoms with Cartesian coordinates that will be read on the first line, a comment on the second, and the lines of atomic coordinates in the following lines.
In general this use case also illustrates the utility of CSV as a means for scientists to collect and process their experimental results:
The key characteristics of the XYZ format are:
Requires: WellFormedCsvCheck, CsvValidation, MultipleHeadingRows, PrimaryKey and UnitMeasureDefinition
Below is a methane molecular structure organized in the XYZ format.
5
methane molecule (in angstroms)
C 0.000000 0.000000 0.000000
H 0.000000 0.000000 1.089000
H 1.026719 0.000000 -0.363000
H -0.513360 -0.889165 -0.363000
H -0.513360 0.889165 -0.363000
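A minimal parser for this layout (atom count, comment, then one atom per line) might look like the following sketch; the file name is illustrative.

# Sketch: parse an XYZ file -- the first line is the atom count, the second a
# comment, and the remaining lines give an element symbol plus Cartesian coordinates.
def read_xyz(path):
    with open(path) as f:
        count = int(f.readline())
        comment = f.readline().strip()
        atoms = []
        for _ in range(count):
            symbol, x, y, z = f.readline().split()
            atoms.append((symbol, float(x), float(y), float(z)))
    return comment, atoms

comment, atoms = read_xyz("methane.xyz")
print(comment, len(atoms))   # methane molecule (in angstroms) 5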
The key characteristics of the PDB format are:
Requires: GroupingOfMultipleTables
Below is an example PDB file:
HEADER EXTRACELLULAR MATRIX 22-JAN-98 1A3I
TITLE X-RAY CRYSTALLOGRAPHIC DETERMINATION OF A COLLAGEN-LIKE
TITLE 2 PEPTIDE WITH THE REPEATING SEQUENCE (PRO-PRO-GLY)
...
EXPDTA X-RAY DIFFRACTION
AUTHOR R.Z.KRAMER,L.VITAGLIANO,J.BELLA,R.BERISIO,L.MAZZARELLA,
AUTHOR 2 B.BRODSKY,A.ZAGARI,H.M.BERMAN
...
REMARK 350 BIOMOLECULE: 1
REMARK 350 APPLY THE FOLLOWING TO CHAINS: A, B, C
REMARK 350 BIOMT1 1 1.000000 0.000000 0.000000 0.00000
REMARK 350 BIOMT2 1 0.000000 1.000000 0.000000 0.00000
...
SEQRES 1 A 9 PRO PRO GLY PRO PRO GLY PRO PRO GLY
SEQRES 1 B 6 PRO PRO GLY PRO PRO GLY
SEQRES 1 C 6 PRO PRO GLY PRO PRO GLY
...
ATOM 1 N PRO A 1 8.316 21.206 21.530 1.00 17.44 N
ATOM 2 CA PRO A 1 7.608 20.729 20.336 1.00 17.44 C
ATOM 3 C PRO A 1 8.487 20.707 19.092 1.00 17.44 C
ATOM 4 O PRO A 1 9.466 21.457 19.005 1.00 17.44 O
ATOM 5 CB PRO A 1 6.460 21.723 20.211 1.00 22.26 C
...
HETATM 130 C ACY 401 3.682 22.541 11.236 1.00 21.19 C
HETATM 131 O ACY 401 2.807 23.097 10.553 1.00 21.19 O
HETATM 132 OXT ACY 401 4.306 23.101 12.291 1.00 21.19 O
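PDB files interleave several record types, each effectively a separate fixed-format table identified by the record name at the start of the line. The sketch below simply groups lines by record type; note that the whitespace in the snippet above has been collapsed, whereas real PDB files are strictly fixed-width, so a full parser would slice each line at the column positions defined by the PDB specification. The file name is illustrative.

# Sketch: group PDB lines by record type (HEADER, ATOM, SEQRES, ...).
# A full parser would slice each line at the fixed column positions defined
# by the PDB format specification, which is not reproduced here.
from collections import defaultdict

def records_by_type(path):
    groups = defaultdict(list)
    with open(path) as f:
        for line in f:
            record_type = line[:6].strip()   # the record name occupies the first columns
            groups[record_type].append(line.rstrip("\n"))
    return groups

groups = records_by_type("1a3i.pdb")
print(len(groups.get("ATOM", [])), "ATOM records")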
(Contributed by Tim Finin)
The US National Institute of Standards and Technology (NIST) has run various conferences on extracting information from text centered around challenge problems. Participants submit the output of their systems on an evaluation dataset to NIST for scoring, typically in a tab-separated format.
The 2013 NIST Cold Start Knowledge Base Population Task, for example, asks participants to extract facts from text and to represent these as triples along with associated metadata that include provenance and certainty values. A line in the submission format consists of a triple (subject-predicate-object) and, for some predicates, provenance information. Provenance includes a document ID and, depending on the predicate, one or three pairs of string offsets within the document. For predicates that are relations, an optional second set of provenance values can be provided. Each line can also have an optional float as a final column to represent a certainty measure.
The following lines show examples of possible triples of varying length. In the second line, D00124 is the ID of a document and the strings like 283-286 refer to strings in a document using the offsets of the first and last characters. The final floating point value on some lines is the optional certainty value.
{snip}
:e4    type           PER
:e4    mention        "Bart"    D00124    283-286
:e4    mention        "JoJo"    D00124    145-149    0.9
:e4    per:siblings   :e7       D00124    283-286 173-179 274-281
:e4    per:age        "10"      D00124    180-181 173-179 182-191    0.9
:e4    per:parent     :e9       D00124    180-181 381-380 399-406    D00101 220-225 230-233 201-210
{snip}
The submission format does not require that each line have the same number of columns. The expected provenance information for a triple depends on the predicate. For example, “type” typically has no provenance, “mention” has a document ID and offset pair, and domain predicates like “per:age” have one or two provenance records each of which has a document ID and three offset pairs.
The file format exemplified above raises a number of issues, described as follows.
Each row is intended to describe an entity (“:e4”), whose identifier thus represents a primary key and needs to be identified as such.
Requires: PrimaryKey
After each triple, there is a variable number of annotations representing the provenance of the triple and, occasionally, its certainty. This information has to be properly identified and managed.
Requires: AnnotationAndSupplementaryInfo
Entities “:e4”, “:e7” and “:e9” appear to be (foreign key) references to other entities described in this or in external tables. Likewise, the identifiers “D00124” and “D00101” are ambiguous identifiers. It would be useful to identify the resources that these references represent.
Moreover, “per” appears to be a term from a controlled vocabulary. How do we know which controlled vocabulary it is a member of and what its authoritative definition is?
Requires: ForeignKeyReferences, AssociationOfCodeValuesWithExternalDefinitions, LinksToExternallyManagedDefinitions, SemanticTypeDefinition
The identifiers used for the entities (“:e4”, “:e7” and “:e9”), as well as those used for the predicates (e.g. “type”, “mention”, “per:siblings” etc.), are ambiguous local identifiers. How can one make each identifier an unambiguous URI? A similar requirement applies to the provenance annotations. These are composed of a document identifier (e.g. “D00124”) and character offset ranges (e.g. “180-181”). Offset ranges are clearly valid only in the context of the preceding document identifier. The interesting assertion about provenance is the reference (document plus offset range). Thus we might want to give the reference a unique identifier comprising the document ID and offset range (e.g. D00124#180-181).
Requires: URIMapping
Besides the entities, the table also presents some values. Some of these are strings (e.g. “10”, “Bart”), some of them are probably floating point values (e.g. “0.9”). It would be useful to have an explicit syntactic type definition for these values.
Requires: SyntacticTypeDefinition
Entity “:e4” is the subject of many rows, meaning that many rows can be combined to make a composite set of statements about this entity.
Moreover, a single row in the table comprises a triple (subject-predicate-object), one or more provenance references and an optional certainty measure. The provenance references have been normalised for compactness (e.g. so they fit on a single row). However, each provenance statement has the same target triple so one could unbundle the composite row into multiple simple statements that have a regular number of columns (see the two equivalent examples below).
{snip}
:e4    per:age       "10"    D00124    180-181 173-179 182-191    0.9
:e4    per:parent    :e9     D00124    180-181 381-380 399-406    D00101 220-225 230-233 201-210
{snip}
{snip}
:e4    per:age       "10"    D00124    180-181    0.9
:e4    per:age       "10"    D00124    173-179    0.9
:e4    per:age       "10"    D00124    182-191    0.9
:e4    per:parent    :e9     D00124    180-181
:e4    per:parent    :e9     D00124    381-380
:e4    per:parent    :e9     D00124    399-406
:e4    per:parent    :e9     D00101    220-225
:e4    per:parent    :e9     D00101    230-233
:e4    per:parent    :e9     D00101    201-210
{snip}
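The unbundling illustrated above could be automated. The following Python sketch is illustrative only: it assumes tab separation between the main columns and space separation between offset ranges within a provenance group, and it ignores the optional certainty column.

# Minimal sketch: unbundle a composite provenance row into one simple row per
# (document ID, offset range). Delimiters within a provenance group are assumed.
line = ':e4\tper:parent\t:e9\tD00124\t180-181 381-380 399-406\tD00101\t220-225 230-233 201-210'
fields = line.split('\t')
subject, predicate, obj = fields[:3]
provenance = fields[3:]          # alternating document IDs and offset-range groups
rows = []
for doc_id, offsets in zip(provenance[0::2], provenance[1::2]):
    for offset in offsets.split():
        rows.append((subject, predicate, obj, doc_id, offset))
for r in rows:
    print('\t'.join(r))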
Lastly, since rows comprise triples, frequently reference externally defined vocabularies, contain values expressed as text (literals), and refer to entities for which we aim to obtain URIs (as described above), it may be useful to be able to convert such a table to RDF.
Requires: CsvToRdfTransformation
(Contributed by Jeni Tennison)
NHS Choices makes available a number of (what it calls) CSV files for different aspects of NHS data on its website at http://www.nhs.uk/aboutnhschoices/contactus/pages/freedom-of-information.aspx
One of the files contains information about the locations of care homes. The file has two interesting syntactic features:
Requires: WellFormedCsvCheck, SyntacticTypeDefinition, NonStandardFieldDelimiter
I want to be able to embed a map of these locations easily into my web page using a web component, such that I can use markup like:
<emap src="http://media.nhschoices.nhs.uk/data/foi/SCL.csv" latcol="Latitude" longcol="Longitude">
and see a map similar to that shown at https://github.com/JeniT/nhs-choices/blob/master/SCP.geojson, without converting the CSV file into GeoJSON.
To make the web component easy to define, there should be a native API onto the data in the CSV file within the browser.
Requires: CsvToJsonTransformation
(Contributed by Jeni Tennison)
All of the data repositories based on the CKAN software, such as data.gov.uk, data.gov, and many others, use JSON as the representation of the data when providing a preview of CSV data within a browser. Server side pre-processing of the CSV files is performed to try and determine column types, clean the data and transform the CSV-encoded data to JSON in order to provide the preview. JSON has many features which make it ideal for delivering a preview of the data, originally in CSV format, to the browser.
Javascript is a hard dependency for interacting with data in the browser, and as such JSON was used as the serialization format because it was the most appropriate format for delivering those data. As the object notation for Javascript, JSON is natively understood by Javascript, so it is possible to use the data without any external dependencies. The values in the data delivered map directly to common Javascript types, and libraries for processing and generating JSON, with appropriate type conversion, are widely available for many programming languages.
Beyond basic knowledge of how to work with JSON, there is no further burden on the user to understand complex semantics around how the data should be interpreted. The user of the data can be assured that the data is correctly encoded as UTF-8 and it is easily queryable using common patterns used in everyday Javascript. None of the encoding and serialization flaws with CSV are apparent, although badly structured CSV files will be mirrored in the JSON.
Requires: WellFormedCsvCheck, CsvToJsonTransformation
When providing the in-browser previews of CSV-formatted data, the utility of the preview application is limited because the server-side processing of the CSV is not always able to determine the data types (e.g. date-time) associated with data columns. As a result it is not possible for the in-browser preview to offer functions such as sorting rows by date.
As an example, see the Spend over £25,000 in The Royal Wolverhampton Hospitals NHS Trust example. Note that the underlying data begins with:
"Expenditure over £25,000- Payment made in January 2014",,,,,,,, ,,,,,,,, Department Family,Entity,Date,Expense Type,Expense Area,Supplier,Transaction Number,Amount in Sterling, Department of Health,The Royal Wolverhampton Hospitals NHS Trust RL4,31/01/2014,Capital Project,Capital,STRYKER UK LTD,0001337928,31896.06, Department of Health,The Royal Wolverhampton Hospitals NHS Trust RL4,17/01/2014,SERVICE AGREEMENTS,Pathology,ABBOTT LABORATORIES LTD,0001335058,77775.13, ...
The header line here comes below an empty row, and there is metadata about the table in the row above the empty row. The preview code manages to identify the headers from the CSV, and displays the metadata as the value in the first cell of the first row.
Requires: MultipleHeadingRows, AnnotationAndSupplementaryInfo
It would be good if the preview could recognise that the Date column contains a date and that the Amount in Sterling column contains a number, so that it could offer options to filter/sort these by date/numerically.
Requires: SemanticTypeDefinition, SyntacticTypeDefinition, UnitMeasureDefinition
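A preview application could attempt such recognition itself. The Python sketch below is a naive illustration only; the date pattern dd/mm/yyyy and the sample values are assumptions taken from the example above.

# Minimal sketch: guess whether every non-empty value in a column parses
# as a date (dd/mm/yyyy, as in the example) or as a number.
from datetime import datetime

def guess_type(values):
    values = [v for v in values if v.strip()]
    try:
        for v in values:
            datetime.strptime(v, "%d/%m/%Y")
        return "date"
    except ValueError:
        pass
    try:
        for v in values:
            float(v)
        return "number"
    except ValueError:
        return "string"

print(guess_type(["31/01/2014", "17/01/2014"]))   # date
print(guess_type(["31896.06", "77775.13"]))       # number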
Moreover, some of the values reported may refer to external definitions (from dictionaries or other sources). It would be useful to know where such resources can be found, so that the data can be properly handled and visualized by linking to them.
Requires: ExternalDataDefinitionResource
Lastly, the web page where the CSV is published also presents useful metadata about it. It would be useful to be able to discover and access this metadata even though it is not included in the file.
These include:
Requires: AnnotationAndSupplementaryInfo
(Contributed by Eric Stephan)
NetCDF is a set of binary data formats, programming interfaces, and software libraries that help read and write scientific data files. NetCDF provides scientists a means to share measured or simulated experiments with one another across the web. What makes NetCDF useful is that it is self-describing and provides scientists with an existing data model rather than requiring them to write their own. The classic netCDF data model consists of variables, dimensions, and attributes. This way of thinking about data was introduced with the very first netCDF release, and is still the core of all netCDF files.
Among the tools available to the NetCDF community are ncdump and ncgen. The ncdump tool is used by scientists wanting to inspect variables and attributes (metadata) contained in the NetCDF file. It can also provide a full text extraction of data, including blocks of tabular data represented by variables. While NetCDF files are typically written by a software client, it is possible to generate NetCDF files from a text format using ncgen and ncgen3. The ncgen tool parses the text file and stores it in a binary format.
Both ncdump and ncgen rely on a text format to represent the NetCDF file called the network Common Data form Language (CDL). The CDL syntax, as shown below, contains annotation along with blocks of data denoted by the "data:" key. For the results to be legible for visual inspection, the measurement data is written as delimited blocks of scalar values. As shown in the example below, CDL supports multiple variables or blocks of data. The blocks of data, while delimited, need to be thought of as a vector or single column of tabular data wrapped around onto the next line, in a similar way that characters can be wrapped within a single spreadsheet cell to make the spreadsheet more visually appealing to the user.
netcdf foo {  // example netCDF specification in CDL
dimensions:
     lat = 10, lon = 5, time = unlimited;
variables:
     int     lat(lat), lon(lon), time(time);
     float   z(time,lat,lon), t(time,lat,lon);
     double  p(time,lat,lon);
     int     rh(time,lat,lon);
     lat:units = "degrees_north";
     lon:units = "degrees_east";
     time:units = "seconds";
     z:units = "meters";
     z:valid_range = 0., 5000.;
     p:_FillValue = -9999.;
     rh:_FillValue = -1;
data:
     lat = 0, 10, 20, 30, 40, 50, 60, 70, 80, 90;
     lon = -140, -118, -96, -84, -52;
}
The next example shows a small subset of a data block taken from an actual NetCDF file; again, the delimited block needs to be read as a single column of tabular data wrapped onto successive lines.
data:
 base_time = 1020770640 ;
 time_offset = 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38,
    40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78,
    80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118,
    120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158,
    160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198,
    200, 202, 204, 206, 208, 210, 212, 214, 216, 218, 220, 222, 224, 226, 228, 230, 232, 234, 236, 238,
    240, 242, 244, 246, 248, 250, 252, 254, 256, 258, 260, 262, 264, 266, 268, 270, 272, 274, 276, 278,
    280, 282, 284, 286, 288, 290, 292, 294, 296, 298, 300, 302, 304, 306, 308, 310, 312, 314, 316, 318,
    320, 322, 324, 326, 328, 330, 332, 334, 336, 338, 340, 342, 344, 346, 348, 350, 352, 354, 356, 358,
    360, 362, 364, 366, 368, 370, 372, 374, 376, 378, 380, 382, 384, 386, 388, 390, 392, 394, 396, 398,
    400, 402, 404, 406, 408, 410, 412, 414, 416, 418, 420, 422, 424, 426, 428, 430, 432, 434, 436, 438,
    440, 442, 444, 446, 448, 450, 452, 454, 456, 458, 460, 462, 464, 466, 468, 470, 472, 474, 476, 478,
    480, 482, 484, 486, 488, 490, 492, 494, 496, 498, 500, 502, 504, 506, 508, 510, 512, 514, 516, 518,
    520, 522;
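A consumer treating such a wrapped block as a single column simply concatenates the values for a variable regardless of line breaks. The Python sketch below is illustrative only; it assumes the data section above has been saved to a file named sgp.cdl (an invented name) and handles only the simple "name = v1, v2, ...;" form.

# Minimal sketch: pull one variable's values out of a CDL "data:" section as a
# single column, ignoring how the values were wrapped across lines.
import re

def read_cdl_variable(path, name):
    text = open(path).read()
    data = text.split("data:", 1)[-1]      # keep only the data section, if marked
    match = re.search(r"\b" + re.escape(name) + r"\s*=\s*(.*?);", data, re.S)
    values = match.group(1).replace("\n", " ").split(",")
    return [float(v) for v in values]

column = read_cdl_variable("sgp.cdl", "time_offset")
print(len(column), column[:5])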
The format allows for error codes and missing values to be included.
Requires: WellFormedCsvCheck, CsvValidation, UnitMeasureDefinition, MissingValueDefinition, GroupingOfMultipleTables
Lastly, NetCDF files are typically collected together in larger datasets where they can be analyzed, so the CSV data can be thought of as a subset of a larger dataset.
Requires: CsvAsSubsetOfLargerDataset
Requires: AnnotationAndSupplementaryInfo
(Contributed by David Booth and Jeremy Tandy)
CSV is by far the commonest format within which open data is published, and is thus typical of the data that application developers need to work with.
However, an object / object graph serialisation (of open data) is easier to consume within software applications. For example, web applications (using HTML5 & Javascript) require no extra libraries to work with data in JSON format. Similarly, RDF-encoded data from multiple sources can be simply combined or merged using SPARQL queries once persisted within a triple store.
The UK Government policy paper "Open Data: unleashing the potential" outlines a set of principles for publishing open data. Within this document, principle 9 states:
Release data quickly, and then work to make sure that it is available in open standard formats, including linked data formats.
The open data principles recognise how the additional utility to be gained from publishing in linked data formats must be balanced against the additional effort incurred by the data publisher to do so and the resulting delay to publication of the data. Data publishers are required to release data quickly - which means making the data available in a format convenient for them, such as CSV dumps from databases or spreadsheets.
One of the hindrances to publishing in linked data formats is the difficulty in determining the ontology or vocabulary (e.g. the classes, predicates, namespaces and other usage patterns) that should be used to describe the data. Whilst it is only reasonable to assume that a data publisher best knows the intended meaning of their data, they cannot be expected to determine the ontology or vocabulary most applicable to a consuming application!
Furthermore, in lieu of agreed de facto standard vocabularies or ontologies for a given application domain, it is highly likely that disparate applications will conform to different data models. How should the data publisher choose which of the available vocabularies or ontologies to use when publishing (if indeed they are aware of those applications at all)!
In order to assist data publishers in providing data in linked data formats without the need to determine ontologies or vocabularies, it is necessary to separate the syntactic mapping (e.g. changing format from CSV to JSON) from the semantic mapping (e.g. defining the transformations required to achieve semantic alignment with a target data model).
As a result of such separation, it will be possible to establish a canonical transformation from CSV conforming to the core tabular data model to an object graph serialisation such as JSON.
Requires: WellFormedCsvCheck, CsvToJsonTransformation, CanonicalMappingInLieuOfAnnotation
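A canonical transformation of this kind can be very simple: each row becomes an object whose keys are the column headings. The Python sketch below is a naive illustration of the idea, not the mapping that the Working Group will define; the sample data and column names are invented, and type detection and URI scoping are omitted.

# Naive sketch of a canonical CSV-to-JSON mapping: one JSON object per row,
# keyed by column heading. Sample data is invented for illustration.
import csv, json, io

csv_text = """name,latitude,longitude
Example Care Home,51.501,-0.142
Another Care Home,53.480,-2.242
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
print(json.dumps(rows, indent=2))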
This use case assumes that JSON is the target serialisation for application developers given the general utility of that format. However, by considering [json-ld], it becomes trivial to map CSV-encoded tabular data via JSON into a canonical RDF model. In doing so this enables CSV-encoded tabular data to be published in linked data formats as required in the open data principle 9 at no extra effort to the data publisher as standard mechanisms are available for a data user to transform the data from CSV to RDF.
Requires: CsvToRdfTransformation
In addition, open data principle 14 requires that:
Public bodies should publish relevant metadata about their datasets […]; and they should publish supporting descriptions of the format, provenance and meaning of the data.
To achieve this, data publishers need to be able to publish supplementary metadata concerning their tabular datasets, such as title, usage license and description.
Requires: AnnotationAndSupplementaryInfo
Applications may automatically determine the data type (e.g. date-time, number) associated with fields in a CSV file by parsing the data values. However, on occasion, this is prone to mistakes where data appears to resemble something else. This is especially prevalent for dates. For example, 1/4 is often confused with 1 April rather than 0.25. In such situations, it is beneficial if guidance can be given to the transformation process indicating the data type for given columns.
Requires: SyntacticTypeDefinition
Provision of CSV data coupled with a canonical mapping provides significant utility by itself. However, there is nothing stopping a data publisher from adding annotation defining data semantics once, say, an appropriate de facto standard vocabulary has been agreed within the community of use. Similarly, a data consumer may wish to work directly with the canonical mapping, ignoring any semantic annotations provided by the publisher.
This use case lacks a concrete example to further illustrate the concerns. Whilst the concerns outlined above are relevant for a gamut of open data publication, it would be useful to include an example dataset here - which might be subsequently used in test suites relating to the motivating requirements of this use case.
It may be appropriate to merge this use case with UC-IntelligentlyPreviewingCSVFiles as that is concerned with using an interim JSON encoding to support intelligent preview of unannotated tabular data.
(Contributed by Davide Ceolin and Valentina Maccatrozzo)
In the ESWC-14 Challenge: Linked Open Data-enabled Recommender Systems, participants are provided with a series of datasets about books in TSV format.
A first dataset contains a set of user identifiers and, for each user, their ratings for a number of books. Each book is represented by means of a numeric identifier.
DBbook_userID  DBbook_itemID  rate
{snip}
6873  5950  1
6873  8010  1
6873  5232  1
{snip}
Ratings can be boolean (0,1) or Likert scale values (from 1 to 5), depending on the challenge task considered.
Requires: SyntacticTypeDefinition, SemanticTypeDefinition, NonStandardFieldDelimiter
A second file provides a mapping between book IDs and their names and DBpedia URIs:
DBbook_ItemID  name                      DBpedia_uri
{snip}
1     Dragonfly in Amber        http://dbpedia.org/resource/Dragonfly_in_Amber
10    Unicorn Variations        http://dbpedia.org/resource/Unicorn_Variations
100   A Stranger in the Mirror  http://dbpedia.org/resource/A_Stranger_in_the_Mirror
1000  At All Costs              http://dbpedia.org/resource/At_All_Costs
{snip}
Requires: ForeignKeyReferences
Participants are requested to estimate the ratings or relevance scores (depending on the task) that users would attribute to a set of books reported in an evaluation dataset:
DBbook_userID  DBbook_itemID
{snip}
6873  5946
6873  5229
6873  3151
{snip}
Requires: LinksToExternallyManagedDefinitions, AssociationOfCodeValuesWithExternalDefinitions
The challenge mandates the use of Linked Open Data resources in the recommendations.
An effective manner to satisfy this requirement is to make use of undirected semantic paths. An undirected semantic path is a sequence of entities (subject or object) and properties that link two items, for instance:
{Book1 property1 Object1 property2 Book2}
This sequence results from considering the triples (subject-predicate-object) in a given Linked Open Data resource (e.g. DBpedia), independently of their direction, such that the starting and the ending entities are the desired items and that the subject (or object) of a triple is the object (or subject) of the following triple. For example, the sequence above may result from the following triples:
Book1 property1 Object1
Book2 property2 Object1
Undirected semantic paths are classified according to their length. For a given length, one can extract all the undirected semantic paths of that length that link two items within a Linked Open Data resource by running a set of SPARQL queries. This is necessary because an undirected semantic path actually corresponds to the union of a set of directed semantic paths: in the source, data are stored in terms of directed triples (subject-predicate-object).
The number of queries that must be run in order to obtain all the undirected semantic paths that link two items is exponential in the length of the path itself (2^n). Because of the complexity of this task and the latency it can introduce, it might be useful to cache these results.
CSV is a good candidate for caching undirected semantic paths because of its ease of use, sharing and reuse. However, there are some open issues related to this. First, since paths may have a variable number of components, one might want to represent each path in a single cell, while still being able to separate the path elements when necessary.
For example, in this file, undirected semantic paths are grouped by means of double quotes, and path components are separated by commas. The starting and ending elements of the undirected semantic paths (Book1 and Book2) are represented in two separate columns by means of the book identifiers used in the challenge (see the example below).
Book1  Book2  Path
{snip}
1  7680  "http://dbpedia.org/ontology/language,http://dbpedia.org/resource/English_language,http://dbpedia.org/ontology/language"
1  2     "http://dbpedia.org/ontology/author,http://dbpedia.org/resource/Diana_Gabaldon,http://dbpedia.org/ontology/author"
1  2     "http://dbpedia.org/ontology/country,http://dbpedia.org/resource/United_States,http://dbpedia.org/ontology/country"
{snip}
Requires: CellValueMicroSyntax
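A consumer aware of this cell-value microsyntax can split the Path cell back into its component URIs. The Python sketch below is illustrative only; a tab-separated layout for the cache file is assumed.

# Minimal sketch: split the quoted, comma-separated "Path" cell back into its
# component URIs. A tab-separated cache file layout is assumed.
import csv, io

row_text = '1\t2\t"http://dbpedia.org/ontology/author,http://dbpedia.org/resource/Diana_Gabaldon,http://dbpedia.org/ontology/author"\n'
reader = csv.reader(io.StringIO(row_text), delimiter="\t")
book1, book2, path_cell = next(reader)
path = path_cell.split(",")     # apply the microsyntax inside the cell
print(book1, book2, path)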
Second, the size of these caching files may be considerable. For example, the file described above is ~2GB, which may imply prohibitive loading times, especially when making a limited number of recommendations.
Since rows are sorted according to the starting and ending book of the undirected semantic path, all the undirected semantic paths that link two books occur in a region of the table formed by consecutive rows.
Given an annotation of such regions indicating which books they describe, one could select only the "slice" of the file needed to make a recommendation, without having to load the file entirely.
Requires: AnnotationAndSupplementaryInfo, RandomAccess
In order to automate the parsing of information published in CSV form, it is essential that that content be well-formed with respect to the syntax for tabular data.
Motivation: DigitalPreservationOfGovernmentRecords, OrganogramData, ChemicalImaging, ChemicalStructures, UC-NetCdFcDl, UC-PaloAltoTreeData, CanonicalMappingOfCSV, IntelligentlyPreviewingCSVFiles
Row headings should be distinguished from file headings (if present). Also, in case subheadings are present, it should be possible to define their coverage (i.e. how many columns they refer to).
Motivation: PublicationOfNationalStatistics, AnalyzingScientificSpreadsheets, IntelligentlyPreviewingCSVFiles
Heading columns, if present, should be distinguished from the data columns.
Motivation: AnalyzingScientificSpreadsheets
Tables may be presented in normal form or not. We should be able to handle both and to transform one into the other when necessary.
Motivation: RepresentingEntitiesAndFactsExtractedFromText
Cell values may represent more complex data structures for a given column, such as lists and time stamps. Where such a microsyntax is present, parsers should have the option of handling it or of ignoring it and treating the value as a scalar.
Motivation: JournalArticleSearch, PaloAltoTreeData, SupportingSemantic-basedRecommendations
Tabular data is often provided with field delimiters other than comma (,). Fixed width field formatting is also commonly used. If a non-standard field delimiter is used, it shall be possible to inform the CSV parser about the field delimiter or fixed-width formatting.
Motivation: DisplayingLocationsOfCareHomesOnAMap, SurfaceTemperatureDatabank, SupportingSemantic-basedRecommendations
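For illustration, Python's standard csv module already allows a delimiter to be supplied, and fixed-width fields can be handled by slicing on character positions; the sketch below uses made-up data and invented column positions.

# Minimal sketch: parsing with a non-standard delimiter and with fixed-width
# fields. The data and column positions are made up for illustration.
import csv, io

# Semicolon-delimited tabular data.
semi = "station;temperature\nABC001;12.5\n"
for row in csv.reader(io.StringIO(semi), delimiter=";"):
    print(row)

# Fixed-width formatting: columns defined by character positions.
fixed = "ABC001  12.5\nXYZ042  -3.0\n"
widths = [(0, 8), (8, 13)]                 # (start, end) offsets per column
for line in fixed.splitlines():
    print([line[a:b].strip() for a, b in widths])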
Each row within a CSV file typically relates to a single entity. In many cases that entity is the object of references from other entities described within the CSV file - or perhaps even from entities described in other CSV files or data resources.
Typically within a CSV file, primary key identifiers are only unique within the scope of the CSV file within which they are stated (e.g. a local identifier). In order for the entity to be unambiguously identified, the local identifier needs to be converted to a URI (as defined in [RFC3986]).
It is assumed that a row within a CSV file describes a single entity to which a primary key can be assigned.
Motivation: DigitalPreservationOfGovernmentRecords, OrganogramData, PublicationOfPropertyTransactionData, RepresentingEntitiesAndFactsExtractedFromText, ChemicalImaging, ChemicalStructures
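For example, a local row identifier might be promoted to a URI by combining it with a base URI template. The Python sketch below is purely illustrative; the base URI is invented.

# Minimal sketch: promote a local primary key to an unambiguous URI via a
# URI template. The base URI is invented for illustration.
from urllib.parse import quote

def to_uri(local_id, template="http://example.org/tree/{id}"):
    return template.format(id=quote(str(local_id), safe=""))

print(to_uri("29"))   # http://example.org/tree/29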
To interpret data in a given row of a CSV file, one needs to be able to refer to information provided in supplementary CSV files or elsewhere within the same CSV file, e.g. using a foreign-key type reference. The cross-referenced CSV files may, or may not, be packaged together.
Motivation: DigitalPreservationOfGovernmentRecords, OrganogramData, SurfaceTemperatureDatabank, RepresentingEntitiesAndFactsExtractedFromText, SupportingSemantic-basedRecommendations
To allow automated processing of a CSV file, additional metadata is required to describe the structure and semantics of that file. This additional metadata is termed a Data Definition Resource (DDR). The DDR may be defined outside the scope of the CSV file with which it is associated; for example, if the DDR is common to many CSV files or the DDR is used to drive CSV file validation. In such cases it must be possible to reference the DDR from the CSV file.
Motivation: DigitalPreservationOfGovernmentRecords, RepresentingEntitiesAndFactsExtractedFromText, IntelligentlyPreviewingCSVFiles
Annotations and supplementary information may be associated with:
Annotations and supplementary information may be literal values or references to a remote resource. The presence of annotations or supplementary information must not adversely impact parsing of the tabular data (e.g. the annotations and supplementary information must be logically separate).
Motivation: PublicationOfNationalStatistics, SurfaceTemperatureDatabank, PublicationOfPropertyTransactionData, AnalyzingScientificSpreadsheets, ReliabilityAnalysesOfPoliceOpenData, OpenSpendingData, RepresentingEntitiesAndFactsExtractedFromText, IntelligentlyPreviewingCSVFiles, CanonicalMappingOfCSV, SupportingSemantic-basedRecommendations
CSV files make frequent use of code values when describing data. Examples include: geographic regions, status codes and category codes. It is difficult to interpret the tabular data without an unambiguous definition of the code values used.
It must be possible to unambiguously associate the notation used within a CSV file with the appropriate external definition.
We cannot assume that the publisher of the CSV file will use a URI to reference the code value; most likely they will use a local identifier that is unique within the scope of a particular code list. For example, the Land Registry use the codes "A", "C" and "D" to denote their transactions rather than a fully qualified URI reference to the concept that these codes identify.
Thus the requirement here is twofold:
Motivation: PublicationOfNationalStatistics, PublicationOfPropertyTransactionData, SurfaceTemperatureDatabank, RepresentingEntitiesAndFactsExtractedFromText, SupportingSemantic-basedRecommendations
description to be added here
Motivation: SurfaceTemperatureDatabank, PublicationOfPropertyTransactionData, ChemicalImaging, ChemicalStructures, UC-NetCdFcDl
description to be added here
Motivation: SurfaceTemperatureDatabank, OpenSpendingData, RepresentingEntitiesAndFactsExtractedFromText, SupportingSemantic-basedRecommendations
Supporting automated recognition of syntactic type, e.g. date, number etc. ... further description to be added
Motivation: SurfaceTemperatureDatabank, DigitalPreservationOfGovernmentRecords, ReliabilityAnalysesOfPoliceOpenData, AnalyzingScientificSpreadsheets, RepresentingEntitiesAndFactsExtractedFromText, DisplayingLocationsOfCareHomesOnAMap, IntelligentlyPreviewingCSVFiles, CanonicalMappingOfCSV, SupportingSemantic-basedRecommendations
Supporting automated recognition of semantic type, typically expressed for each column ... further description to be added
To express semantics in a machine readable form, RDF seems the appropriate choice. Furthermore, best practice indicates that one should adopt common and widely adopted patterns (e.g. RDF vocabularies, OWL ontologies) when publishing data to enable a wide audience to consume and understand the data. Existing (de facto) standard patterns may add complexity when defining the semantics associated with a particular row such that a single RDF predicate is insufficient.
For example, to express a quantity value using QUDT we use an instance of qudt:QuantityValue to relate the numerical value with the quantity kind (e.g. air temperature) and unit of measurement (e.g. Celsius). Thus the semantics needed for a column containing temperature values might be: qudt:value/qudt:numericValue – more akin to an LDPath. Furthermore, use of OWL axioms when defining a sub-property of qudt:value would allow the quantity type and unit of measurement to be inferred, with the column semantics then being specified as ex:temperature_Cel/qudt:numericValue.
Motivation: DigitalPreservationOfGovernmentRecords, PublicationOfNationalStatistics, SurfaceTemperatureDatabank, ReliabilityAnalysesOfPoliceOpenData, AnalyzingScientificSpreadsheets, RepresentingEntitiesAndFactsExtractedFromText, IntelligentlyPreviewingCSVFiles, SupportingSemantic-basedRecommendations
Significant amounts of existing tabular text data include values such as -999. Typically, these are outside the normal expected range of values and are meant to indicate that the value for that cell is missing. Automated parsing of CSV files needs to recognise such missing value tokens and behave accordingly. Furthermore, it is often useful for a data publisher to declare why a value is missing, e.g. withheld or aboveMeasurementRange.
Motivation: SurfaceTemperatureDatabank, OrganogramData, OpenSpendingData, UC-NetCdFcDl, UC-PaloAltoTreeData
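In practice, a parser given the declared missing-value tokens for a column can substitute an explicit null (and, where declared, record the reason). A minimal Python sketch, with tokens chosen purely for illustration:

# Minimal sketch: map declared missing-value tokens to None while parsing.
# The tokens shown ("-999", "withheld", ...) are illustrative.
MISSING_TOKENS = {"-999", "", "withheld", "aboveMeasurementRange"}

def parse_cell(value):
    value = value.strip()
    if value in MISSING_TOKENS:
        return None            # value is missing; the reason could also be recorded
    return float(value)

print([parse_cell(v) for v in ["12.5", "-999", "13.1", "withheld"]])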
This is possibly partially covered by R-ForeignKeyReferences (although here we do not aim at linking two CSV files) and by R-AssociationOfCodeValuesWithExternalDefinitions and R-ExternalDataDefinitionResource (although here we talk explicitly about URIs).
Motivation: AnalyzingScientificSpreadsheets, RepresentingEntitiesAndFactsExtractedFromText
Motivation: AnalyzingScientificSpreadsheets, OpenSpendingData, IntelligentlyPreviewingCSVFiles, ChemicalImaging, ChemicalStructures, UC-NetCdFcDl, UC-PaloAltoTreeData
When publishing sets of related data tables, it shall be possible to provide annotation for the group of related tables. Annotation concerning a group of tables may include summary information about the composite dataset (or "group") that the individual tabular datasets belong to, such as the license under which the dataset is made available.
The implication is that the group shall be identified as an entity in its own right, thus enabling assertions to be made about that group. The relationship between the group and the associated tabular datasets will need to be made explicit.
Furthermore, where appropriate, it shall be possible to describe the interrelationships between the tabular datasets within the group.
The tabular datasets comprising a group need not be hosted at the same URL. As such, a group does not necessarily need to be published as a single package (e.g. as a zip) - although we note that this is a common method of publication.
Motivation: PublicationOfNationalStatistics, OrganogramData, ChemicalStructures, UC-NetCdFcDl
The content of a CSV often needs to be validated for conformance against a specification (e.g. a DDR).
Validation shall assess conformance against structural definitions such as number of columns and the datatype for a given column. Further validation needs are to be determined. It is anticipated that validation may vary based on row-specific attributes such as the type of entity described in that row.
Dependency: R-WellFormedCsvCheck
Motivation: DigitalPreservationOfGovernmentRecords, OrganogramData, ChemicalImaging, ChemicalStructures, DisplayingLocationsOfCareHomesOnAMap, UC-NetCdFcDl, UC-PaloAltoTreeData
Standardised CSV to RDF transformation mechanisms mitigate the need for bespoke transformation software to be developed by CSV data consumers, thus simplifying the exploitation of CSV data. Identifiers used as primary and foreign keys within a CSV file need to be converted to URIs. RDF properties (or property paths) need to be determined to relate the entity described within a given row to the corresponding data values for that row. Where available, the type of a data value should be incorporated in the resulting RDF. Built-in types defined in [rdf11-concepts] (e.g. xsd:dateTime, xsd:integer etc.) and types defined in other RDF vocabularies / OWL ontologies (e.g. geo:wktLiteral from GeoSPARQL) shall be supported.
Dependency: R-SemanticTypeDefinition, R-SyntacticTypeDefinition, R-PrimaryKey
Motivation: DigitalPreservationOfGovernmentRecords, OrganogramData, PublicationOfPropertyTransactionData, RepresentingEntitiesAndFactsExtractedFromText, CanonicalMappingOfCSV
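As an informal illustration of such a transformation, the Python sketch below uses the third-party rdflib library (assumed to be installed); the namespaces, sample data and column-to-property mapping are all invented, and it is not the mechanism that the Working Group will standardise.

# Minimal sketch of a CSV-to-RDF transformation using rdflib (assumed to be
# installed). Namespaces, data and the column mapping are invented.
import csv, io
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/def/")
csv_text = "id,date,amount\nT1,2014-01-31,31896.06\n"

g = Graph()
for row in csv.DictReader(io.StringIO(csv_text)):
    subject = URIRef("http://example.org/transaction/" + row["id"])  # primary key -> URI
    g.add((subject, EX.date, Literal(row["date"], datatype=XSD.date)))
    g.add((subject, EX.amount, Literal(row["amount"], datatype=XSD.decimal)))

print(g.serialize(format="turtle"))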
Standardised CSV to JSON transformation mechanisms mitigate the need for bespoke transformation software to be developed by CSV data consumers, thus simplifying the exploitation of CSV data.
Motivation: DisplayingLocationsOfCareHomesOnAMap, CanonicalMappingOfCSV, IntelligentlyPreviewingCSVFiles
Apparently, we have no use case requiring a CSV to XML conversion, yet.
Establish a canonical mapping from CSV conforming with the core tabular data model, yet lacking any annotation that defines rich semantics for that data, into an object / object graph serialisation such as JSON or RDF.
The canonical mapping should provide automatic scoping of local identifiers (e.g. conversion to URI), identification of primary keys and detection of data types.
Motivation: CanonicalMappingOfCSV
Large datasets may be hard to process in a sequential manner. It may be useful to have the possibility to directly access part of them, possibly by means of a pointer to a given row, cell or region.
Motivation: SupportingSemantic-basedRecommendations
CSV+ format should be compatible, at least at a basic level, with the data analysis tools in common usage. At a minimum, existing tools should be able to interpret CSV+ as though it were CSV (as defined in [RFC4180]).
Whilst we have a list of CSV tools emerging on the wiki, we are yet to define the list of tools with which we expect, or aspire, to retain compatibility.
We also do not have an authoritative set of conformance tests yet.
Orphaned requirement as a result of re-editing use case: PublicationOfNationalStatistics
This use case used to assert:
Statistical data is currently published in Microsoft Excel Workbook format in order to overcome limitations inherent in simple data formats such as CSV. However, it is also important to remember that the data user community also has preferred tools for consuming the statistical data. Spreadsheet applications such as Microsoft Excel, Numbers for Mac and LibreOffice Calc are a mainstay of ubiquitously deployed desktop productivity software, enabling a large swath of users to work with data provided in Microsoft Excel Workbook format. It is important that compatibility, at least at a basic level, is maintained with the data analysis tools in common usage.
description to be added here
Motivation: PublicationOfNationalStatistics, SurfaceTemperatureDatabank