Credibility Signals

W3C Editor's Draft

This version:
https://credweb.org/signals-20181021
Latest editor's draft:
https://credweb.org/signals
Editor:
TBD (initial version by Sandro Hawke)
Participate:
GitHub w3c/credweb
File a bug
Commit history
Pull requests

Abstract

This document specifies various types of information, called credibility signals, which are considered potentially useful in assessing credibility of online information.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Credible Web Community Group as an Editor's Draft.

GitHub Issues are preferred for discussion of this specification.

Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 1 February 2018 W3C Process Document.

1. Introduction

1.1 Purpose

This document is intended to support an ecosystem of interoperable credibility tools.  These software tools, which may be components of familiar existing systems, will gather, process, and use relevant data to help people more accurately decide what information they can trust online and protect themselves from being misled. We expect that an open data-sharing architecture will facilitate efficient research and development, as well as an overall system which is more visibly trustworthy.

The document has three primary audiences:

  1. Software developers and computer science researchers wanting to build systems which work with credibility data.  For them, the document aims to be a precise technical specification, stating what they need for their software to interoperate with any other software which conforms to this specification.
  2. People who work in journalism and want to review and contribute to this technology sphere, to help make sure it is beneficial and practical.
  3. Non-computer-science researchers, interested in helping develop and improve the science behind this work.

In general, we intend for this document to be:

1.2 Credibility Data

The document builds on concepts and terminology explained in Technological Approaches to Improving Credibility Assessment on the Web.  Our basic model is that an entity (human and/or machine) is attempting to make a credibility assessment — to predict whether something will mislead them or others — by carefully examining many different observable features of that thing and things connected with it, as well as information provided by various related or trusted sources.

To simplify and unify this complex situation, with its many different roles, we model the situation as a set of observers, each using imperfect instruments to learn about the situation and then recording their observations using simple declarative statements agreed upon in advance. Because those statements are inputs to a credibility assessment process, we call them credibility signals.  (The term credibility indicators is sometimes also used.)

This document, then, is a guide to these signals.  It states what each observer might say and exactly how to say it, along with other relevant information to help people choose among the possible signals and understand what it means when they are used.

Because this is a new and constantly-changing field, we do not simply state which signals should be used.  Instead, we list possible signals that one might reasonably consider using, along with information we expect to be helpful in making the decision.

1.3 Example

[explain]

Assessing credibility of https://news.example/article-1

   Looking at title

      I consider it to be clickbait

      It's clickbait because it's a cliffhanger

   Looking at article

      It cites scientific research

   Looking at provider

      Established in 1974

      Owned domain since 2006

1.4 Factors in Selecting Signals

When building systems which use credibility signals and trying to decide which signals to use, there are different factors to weigh.  This section is aspirational; we hope this document will in time provide guidance on all these factors.  

1.4.1 Measurement Challenges

There are factors about how difficult it is to get an accurate value for the signal:

  1. Do people independently observing it get approximately the same value?  
  2. Do observations vary with the culture, location, language, age, beliefs, etc., of the people doing the observation?
  3. Would the same people make the same observation in future months or years?
  4. How much time and effort does it take people to make the observation?
  5. Do people need to be trained to make this specific observation?
  6. What kind of general training do people need (e.g., a journalism degree) to do it?
  7. How do machines compare to humans in making this observation, in terms of cost, quality, types of errors, and susceptibility to being tricked?

Many of these factors can be measured using inter-rater reliability (IRR) techniques.  When studies have made such measurements, our intent is to include that data in this document.
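One widely used IRR measure, given here only for orientation and not as a requirement of this document, is Cohen's kappa for two raters:

  κ = (p_o − p_e) / (1 − p_e)

where p_o is the observed proportion of agreement between the raters and p_e is the proportion of agreement expected by chance. Values near 1 indicate strong agreement, and values near 0 indicate agreement no better than chance. Related statistics, such as Krippendorff's alpha, generalize this idea to more raters and other data types.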

1.4.2 Value in Credibility Assessment

Another important set of factors relates to how useful the measurement is in assessing credibility, assuming the observation itself is accurate.

  1. Does the signal have a strong correlation to content accuracy, itself determined by consensus among experts?
  2. Is it particularly indicative of credibility when used in combination with other signals?  (For example, as part of computing the value of a latent variable.)
  3. Is it conceptually easy for people to understand?
  4. Do professionals in the field think it's likely to be a useful signal?
  5. How dependent are these characteristics on the culture or time period being considered?
  6. How dependent are these characteristics on the subject matter of the information being assessed for credibility?

1.4.3 Feedback Risks (“Gameability”)

One should also consider how the overall ecosystem of content producers and consumers might be changed by credibility tools adopting the signal. Once attackers see it’s being used, a signal that works well today might stop working, or even be used to make things worse. See Feedback Risks.

  1. Is it disproportionately useful for attackers (e.g., a viral call to action)? If so, making this a negative credibility signal should generally be beneficial.
  2. Is it disproportionately expensive for attackers (e.g., journalistic language)? If so, making this a positive credibility signal should generally be beneficial.
  3. Who might get impacted by “friendly fire”?  Even if adopting a signal might — on average — harm attackers more than everyone else, certain individuals or communities who have done nothing wrong might be penalized.  Tradeoffs must be carefully made, ideally in a consensus process with the impacted people.

1.4.4 Interoperability

The value of sharing signal data depends on how that signal is used by other systems.

  1. Are others producing data using this signal?
  2. Are there useful data sets available?
  3. Are others consuming data, paying attention to this signal?
  4. Are there tools which work with it, e.g., running statistics?
  5. Is the definition clear and unambiguous, so people using it mean the same thing?
  6. Are there clear examples?
  7. Is there an open history of commentary, with questions and answers, and issues being addressed by various implementers?
  8. Is documentation available in multiple languages?
  9. If the definition is under development, how can one participate?
  10. If the definition could possibly change, who might change it, and under what circumstances?
  11. Are there any intellectual property considerations? See W3C Patent Policy.
  12. Is there a test suite / validation system for helping confirm that an implementation is working properly?
  13. Are there implementation reports, confirming that tools are functioning properly, according to the testing system? (For an example, see ActivityPub).

1.5 Publishing Credibility Data

TBD. The general approach is to follow the schema.org technique, publishing signal data as JSON-LD.
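As a rough illustration of the intended direction (the context URL and the property names here are hypothetical, not defined by this document), an observer might publish signal data about an article like this:

  {
    "@context": { "cred": "https://credweb.org/signals/vocab#" },
    "@id": "https://news.example/article-1",
    "cred:titleIsCliffhanger": true,
    "cred:citesScientificResearch": true
  }

The article is identified by its URL, and each property corresponds to one template statement (see 1.8 Template Statements); metadata such as who published the data and when would be carried by the surrounding document or HTTP response.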

1.6 Consuming Credibility Data

TBD. This section will point to relevant tools and specifications; the underlying format is JSON-LD.
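As a sketch, using the same hypothetical vocabulary as in 1.5, a consumer processing that example with a standard JSON-LD processor would see, after expansion, full IRIs that can be matched against the signal definitions it understands:

  [
    {
      "@id": "https://news.example/article-1",
      "https://credweb.org/signals/vocab#titleIsCliffhanger": [ { "@value": true } ],
      "https://credweb.org/signals/vocab#citesScientificResearch": [ { "@value": true } ]
    }
  ]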

1.7 Organization of this document

Section 1 (“Introduction”) provides instructions for how to use and help maintain this document, along with general background information.

The rest of this document, after the introduction, is a list of signals and information about them, as discussed in the introduction.  The signals are organized into related groups, in hierarchical sections.  The lowest level of the hierarchy is the signals themselves; all the higher levels provide grouping of the signals, to help people understand them.

One important level of the hierarchy identifies the subject type of the signal.  This is the conceptual entity being examined, considered, or inspected, when one makes the observation being recorded in the signal data.  This could be imagined in different ways: when you are observing a claim made in the 3rd paragraph of an article published in some newspaper, are you observing the claim, the paragraph, the article, the newspaper, or even the author of the article?  In general, we aim for the smallest granularity that makes sense, which in this case would probably be the claim.

At times, it may not be obvious to which subject type a signal belongs, or it could sensibly belong with several different ones.  In such cases, the signal might be moved around in the document as people come to understand it better; this may require discussion and might remain open for debate.  When a signal or group of signals makes sense in more than one place, consider linking to it from the places it isn't, to help people find it.

In many cases, a signal could be seen as a set of similar signals which are not strictly identical. This can be handled by adding additional signal headings with the finer distinction, when necessary. In this case, template statements might appear under more than one signal.

Note that sections may be moved and renumbered.  Do not rely on section numbers remaining the same.  For linking to a part of the document, consider using the gdocs h.xxxxxx fragment ids, provided by the Table of Contents; those should remain stable.  Also, whenever changing a heading, especially a signal heading, if someone might be referring to it by name, please move the old text into a paragraph starting “Also called:”.

1.8 Template Statements

The most important thing about a signal definition is to be clear what observation the signal data is recording. If the signal heading is “Article length”, does that mean length in words or bytes or characters or some other metric? Does it include the title? For each signal, we want an easy way to communicate its definition that is short but clear, while being as detailed as necessary.

The technique we use here is to express the semantics of the signal using plain and simple sentences in natural language which convey the same knowledge as the signal data. If you imagine people using credibility software exchanging these statements (perhaps in text messages or on Twitter), you should get the right semantics. You can assume metadata, like who sent it and when it was sent, is available, so the statements may include terms like “I” and “now”.

For machine-to-machine data interoperability, these template sentences and the signal heading are turned into a data schema, after which the JSON-LD/schema.org/semantic web/linked data technology stack can be used.

The statements we use are templates because they abstract over a variety of similar sentences which differ in specific limited ways.  For example, these statements:

  1. I have examined the article at https://example.com/alice and find it highly credible
  2. I have examined the article at https://example.com/brian and find it highly credible
  3. I have examined the article at https://example.com/casey and find it highly credible

are all the same, except in the URL.  We convey this using a template statement, which has a variable portion in square brackets, like:

I have examined the article at [subject] and find it highly credible

Tech note

If we (automatically or manually) map this template to a property with the pname :iHaveExaminedHighlyCredible, then sentence 2 above would be encoded in Turtle as

  • { <https://example.com/brian> :iHaveExaminedHighlyCredible true }.

Alternatively, we could make it a class, but boolean-valued properties may be better, so that all signals remain properties.
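For comparison, the same statement serialized as JSON-LD might look like the following (the context URL is hypothetical; the actual mapping has not yet been defined):

  {
    "@context": { "cred": "https://credweb.org/signals/vocab#" },
    "@id": "https://example.com/brian",
    "cred:iHaveExaminedHighlyCredible": true
  }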

The bracketed template expression “[subject]” is required in every template, to indicate what entity is being observed.  Additional bracket expressions can be used when there are other elements of the statement to make variable, in particular [string] (for text in quotes) and [number].

(For now, try to just use those three.  Software and documentation are being developed to allow more features. If you find this too restrictive, go ahead and write something else inside the square brackets and we'll deal with it later, but include a question mark so it's clear you knew you were making it up.)

An example needing multiple variables:

  1. https://example.com/alice took 4.75 seconds to load, just now.
  2. https://example.com/brian took 5.9 seconds to load, just now.

could be matched by:

[subject] took [number] seconds to load, just now.
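Encoded as data, a template with a [number] variable naturally becomes a numeric-valued property. For example (again with hypothetical names), the second sentence might be recorded as:

  {
    "@context": { "cred": "https://credweb.org/signals/vocab#" },
    "@id": "https://example.com/brian",
    "cred:loadTimeSeconds": 5.9
  }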

1.9 Instructions for editing this document

As an experiment, this document is currently set so everyone can edit it, like Wikipedia. It is the Google docs version that is editable. We suggest you change the “Editing Mode” to “Suggesting” (using the pencil icon in the upper-right) until you are quite familiar with this document. You may also comment using the usual Google Docs commenting features.

If you make or suggest any edits to this document, you are agreeing to the W3C Community Contributor License Agreement which has significant copyright and patent implications.

The subsections below give some advice for how to make edits which are helpful.

1.9.1 Expand discussion

Each section should begin with a short introduction written with a neutral point of view, reflecting consensus about why the signal might be useful and what the risks might be. To enable consensus among a broad community, the intent is for this text to be developed iteratively, with each contributor adding their perspective while respecting what is already present.

Questions and minor concerns should generally be added as annotations using the “Add a Comment” function, without editing the document. If they become issues requiring back-and-forth discussion, they should be turned into GitHub issues and linked from the most relevant place in this document with a paragraph starting “Issue:”.

These discussion sections are intended to be nonnormative. That is, they do not say how software using the signal is required to behave for interoperability. The normative content of this specification is the template statements and the mapping of the statements to RDF.

1.9.2 Add new template statements

If you are confident you understand what a signal is intended to measure, and think you can provide a template statement which expresses it more clearly and simply, with little ambiguity, please add a new row to the bottom of the “Proposed template statements” table and add your entry.  Please also put the next higher number in the Key field for reference, and your name in the By field. This “by” field is optional; it is intended to help simplify discussion, telling people who to talk to, and to give some credit. Listing the name of a large group in this field is not particularly useful.  

After adding an entry, for a short time (perhaps a few hours, guided by any comments on it) it’s okay to edit it if you change your mind. After that, please leave it, and just add a new row for the new version. You can put new versions in the middle of the table and use keys like 1a.

1.9.3 Add new signals

Once you’re familiar with the structure of this document and all the signals in your area of interest, you may add new signal sections at heading level 3, or even new group sections at heading level 2.  (For heading numbering, you can use the “Table of contents” add-on from LumApps to number the headers, do it by hand, or just leave the numbering for someone else using the add-on.)

When you add a new signal, please copy this table to the new section, and then fill in at least one row to clarify what the signal data conveys.

Key | Proposed Template Statement | By

1.10 Contributors

Folks who add content to this document are encouraged to add themselves in this section, potentially with some affiliation & credential information.  This also allows the “By” column to stay short, as people can use short forms of names (e.g., only first or last name, if unique in this document).

2. Subject type: Claim

This section is for signals about claims.

A claim is “an assertion that is open to disagreement; equivalently, a meaningful declarative sentence which is logically either true or false (to some degree); equivalently, a proposition in propositional logic.” [credweb report]

Claims can be stated (with varying degrees of clarity) in some content or implied by the content (even non-textual content, like a photograph).

Claims are usually the smallest practical granularity. Credibility data about claims is largely focused on what other sources have said about that claim, as in fact checking, but could also involve relationships between claims and textual analysis of claim text.

2.1 Claim Review

The “ClaimReview” model developed at schema.org grows out of the tradition of independent, external fact-checking, as in PolitiFact.  With this model, a fact-checker reviews a claim, typically made by a public figure, and then publishes a review of that claim, a “claim review”. Within schema.org, this parallels other reviews, like restaurant reviews.
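For orientation, a minimal ClaimReview record in schema.org JSON-LD might look like the following (all values are illustrative):

  {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factcheck.example/review-1",
    "datePublished": "2018-10-01",
    "claimReviewed": "The moon is made of green cheese",
    "author": { "@type": "Organization", "name": "Example Fact Checker" },
    "itemReviewed": { "@type": "CreativeWork", "url": "https://news.example/article-1" },
    "reviewRating": {
      "@type": "Rating",
      "ratingValue": 1,
      "bestRating": 5,
      "worstRating": 1,
      "alternateName": "False"
    }
  }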

[ Can we fit claimreview neatly into this observer/signal model?  It’s a bit of a stretch.  TBD. ]

3. Subject type: Text

Includes: phrase, sentence, paragraph, document, document fragment

A text, in this sense, is a sequence of words, with the usual punctuation, and sometimes embedded multimedia content or meaningful layout, like tables.  That is, it’s a document or portion of a document. As examples, a phrase, sentence, paragraph, document section, book chapter, book, and complete book series would typically each count as a text.  

Signals here concern properties of the text itself, separate from how it might be published (e.g., on a web page, on a billboard, spoken at a rally) or where it might be published (in some Venue).  The text should be considered immutable: a text (in this sense) doesn't change.  If you take a text and change it, you are making a new text, which needs to be reexamined to see which observations (and thus which signal data) apply to this other, new text.

Issue: (tech) How to represent texts in RDF?  Options include annotation URL with secure hash, annotation object URL with secure hash, data: URI, etc.

4. Subject type: Image

Includes: Picture, Photograph, Drawing, Illustration

5. Subject type: Audio

Also called: Audio Clip, Sound Clip, Audio Recording

6. Subject type: Video

Also called: Video Clip, Video Recording, Movie

7. Subject type: Article

Includes: News Story, News Article, Scientific Paper, Blog Post

An article is a collection of content intended to convey some information, usually factual, usually created by one or more identified people, and usually released at a specific point in time in some venue. It consists of elements like a body, a title, a publication date, and an author list.  Unlike Texts, where any change makes it a different Text, an Article may be revised over time and still be considered the same Article (albeit a different version).  Usually only minor changes are socially appropriate, however. Consumers of credibility data may need to be careful about which version an observation applies to.

If an article appears on a web page, or in a portion of a web page, we can use its URL to identify the article.

Differentiation between Article and Text: consider whether the signal data would be the same if the text were moved to a different article, perhaps published in a different venue, with a different title, at a different time, and with other text before or after it. If the observation would be the same, then the signal is a property of the text, not the article, and it should be listed under 3. Subject type: Text, not here.

7.1 Originality

7.1.1 Signal: Is Original

Key | Proposed Template Statement | By
1 | Text of [subject article] has appeared in exactly the same words or very similar words in another publication. | Credibility Coalition

7.1.2 Signal: Attribution of Non-Original Content

Key | Proposed Template Statement | By
1 | [subject article] is not original content, and it includes accurate attribution, pointing to the original. | Credibility Coalition

7.2 Rhetoric

Issue: Are these really about the article, or are they about the text? Would the observations be the same if the same text appeared in a different article?  Unclear.  Leave in Article for now.

7.2.1 Rhetoric Proportionality

7.2.2 Signal: Proportional Rhetoric

Key | Proposed Template Statement | By
1 | The rhetoric used in [subject article] is proportional to the event or situation described. | Credibility Coalition

7.2.3

7.2.4 Signal: Extreme Exaggerating Rhetoric

Key | Proposed Template Statement | By
1 | The rhetoric used in [subject article] is an extreme exaggeration of the event or situation described. | Credibility Coalition

7.2.5 Signal: Extreme Minimizing Rhetoric

Key | Proposed Template Statement | By
1 | The rhetoric used in [subject article] is an extreme minimization of the event or situation described. | Credibility Coalition

7.2.6 Signal: Calibrating Confidence - Level of Confidence

Key | Proposed Template Statement | By
1 | [subject article] acknowledges uncertainty or the possibility that things might be otherwise | Public Editor

7.2.7 Signal: Emotionally Charged Tone

Key | Proposed Template Statement | By
1 | [subject article] has an emotionally charged tone (e.g., outrage, snark, celebration, horror, etc.). | Public Editor

7.2.8 Signal: Emotional Valence

Could be measured with VADER (Valence Aware Dictionary and sEntiment Reasoner), a natural language processing library.

Key | Proposed Template Statement | By
1 | The language is extremely negative. | Credibility Coalition

7.2.9 Signal: Emotional Valence

Key | Proposed Template Statement | By
1 | The language is extremely positive. | Credibility Coalition

7.3 Logic/Reasoning

7.3.1 Signal: Appeal to Fear Fallacy

Key | Proposed Template Statement | By
1 | Does the author exaggerate the dangers of a situation and use scare tactics to persuade (the appeal to fear fallacy)? | Public Editor

7.3.2 Signal: Causal Claim Types

Key | Proposed Template Statement | By
1 | Is a general or singular causal claim made? Highlight the section(s) that support your answer. | Public Editor

  • General Causal Claim
  • Singular Causal Claim
  • No Causal Claim

7.3.3 Signal: False Dilemma

Key | Proposed Template Statement | By
1 | Does the author present a complicated choice as if it were binary (construct a false dilemma)? [If so, highlight the relevant section(s).] | Public Editor

7.3.4 Signal: Slippery Slope Fallacy

Key | Proposed Template Statement | By
1 | Does the author say that one small change will lead to a major change (use a slippery slope argument)? [Highlight the relevant section(s).] | Public Editor

7.4 Outbound References

7.4.1 Signal: Source Types

Key | Proposed Template Statement | By
1 | Which of the following types of sources are cited in the article? Check all that apply. If Other, please highlight. | Credibility Coalition

  • None
  • Experts
  • Studies
  • Organizations
  • Other

7.4.2 Signal: Contains Link to Scientific Journals

Key | Proposed Template Statement | By
1 | Is a link provided in the article to where the original content came from? | Credibility Coalition

7.4.3 Signal: Accuracy of representation of source article

Key | Proposed Template Statement | By
1 | Does this article properly characterize the methods and conclusions of the original source? | Credibility Coalition

7.4.4 Signal: Academic Journal Impact Factor

Key | Proposed Template Statement | By
1 | What is the impact factor of the journal or conference cited? | Credibility Coalition

8. Subject type: Title

Also called: Headline

A Title is an immutable association of an Article and some short Text which is typically presented first. When a Title text changes, that's a different Title. An article may have many different titles at different points in time and in different contexts, although social practice is usually against this.

The same title text may be used for many different articles, but those are considered different Titles here.  For example, many of the signals about a Title with the text “Local man dies” will depend on which Article it's associated with.
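As a purely illustrative sketch (these property names are hypothetical and not defined by this document), a Title might be represented as a node of its own, linking the article to its title text:

  {
    "@context": { "cred": "https://credweb.org/signals/vocab#" },
    "@id": "https://news.example/article-1#title",
    "cred:titleOfArticle": { "@id": "https://news.example/article-1" },
    "cred:titleText": "Local man dies"
  }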

8.1 Quality

These are general quality signals, not containing many details about why something might be high or low quality.

8.1.1 Clickbait

A measure of how much the title of the article conforms to a predetermined set of clickbait genres. See also specific signals below that might be considered kinds of clickbait, like “Listicle”.

8.1.1.1 Signal: Listicle

Key | Proposed Template Statement | By
1 | Title is a listicle (“6 Tips on …”) | Credibility Coalition

8.1.1.2 Signal: Cliffhanger

Key | Proposed Template Statement | By
1 | Title is a cliffhanger (“You Won’t Believe What Happens Next”, “Man Divorces His Wife After Overhearing This Conversation”) | Credibility Coalition

8.1.1.3 Signal: Provoking emotions

Key | Proposed Template Statement | By
1 | Title provokes emotions (“...Shocking Result”, “...Leave You in Tears”) | Credibility Coalition

8.1.2

8.1.2.1 Signal: Curiosity Gap (Hidden Secret or Trick)

Key | Proposed Template Statement | By
1 | Title creates a curiosity gap through a hidden secret or trick (“Fitness Companies Hate Him...”, “Experts are Dying to Know Their Secret”, “You’ll never guess…”) | Credibility Coalition

8.1.3

8.1.3.1 Signal: Challenge to the Ego

Key | Proposed Template Statement | By
1 | Title contains challenges to the ego (“Only People with IQ Above 160 Can Solve This”) | Credibility Coalition

8.1.3.2 Signal: Defying Convention

Key | Proposed Template Statement | By
1 | Title defies convention (“Think Orange Juice is Good for you? Think Again!”, “Here are 5 Foods You Never Thought Would Kill You”) | Credibility Coalition

8.1.3.3 Signal: Inducing Fear

Key | Proposed Template Statement | By
1 | Title induces fear (“Is Your Boyfriend Cheating on You?”) | Credibility Coalition

8.2 Misleading about content

8.2.1 Signal: Title Representativeness

Key | Proposed Template Statement | By
1 | Title is representative of the content of the article. | Credibility Coalition

See also specific signals below that encapsulate what might be considered unrepresentative.

8.2.1.1 Signal: Differs from body topic

Key | Proposed Template Statement | By
1 | Title differs from the primary content of the article body. | Credibility Coalition

8.2.1.2 Signal: Emphasizes different information than the body topic

Key | Proposed Template Statement | By
1 | Title emphasizes different information from the primary content of the article body. | Credibility Coalition

8.2.1.3 Signal: Carries little information about the body

Key | Proposed Template Statement | By
1 | Title carries little information about the primary content of the article body. | Credibility Coalition

8.2.1.4 Signal: Takes a different stance than the body

Key | Proposed Template Statement | By
1 | Title takes a different stance from the primary content of the article body. | Credibility Coalition

8.2.1.5 Signal: Overstates claims or conclusions in the body

Key | Proposed Template Statement | By
1 | Title overstates claims or conclusions from the primary content of the article body. | Credibility Coalition

8.2.1.6 Signal: Understates claims or conclusions in the body

Key | Proposed Template Statement | By
1 | Title understates claims from the primary content of the article body. | Credibility Coalition

8.3 Misleading about the world

8.4 Nonmisleading consumer manipulation

8.4.1

9. Subject type: Web Page

9.1 Layout

Issue: Should this be a Heading1 like Title?  Probably not, because the statements naturally get phrased with the subject of the statements being a web page. Most people wouldn't conceptualize the page layout as its own entity.

9.1.1 Signal: Framed with navigation

Also called: topnav, sidenav, framenav

Key | Proposed Template Statement | By
1 | [subject] has obvious navigation elements at one or more edges of the content, providing a way to reach other content on the same website | Sandro Hawke (as example)
2 | [subject] has a prominent top or side menu structure or buttons or links, taking user to other parts of site | Sandro Hawke (as example)

(Consensus discussion including benefits and risks goes here)

(External data from studies and implementation reports gets inserted here, matched by heading text, “also called” text, and the template text.)

9.2 Typefaces

9.3 Metadata in transmission headers

9.4 Metadata in page head

9.5 Metadata inline in body

10. Subject type: Website

10.1 Advertisements

10.1.1 Signal: Ads.txt Exists

Key | Proposed Template Statement | By
1 | The domain contains an ads.txt file. | Credibility Coalition

10.1.2 Signal: Spam or Clickbait Advertisements

Key | Proposed Template Statement | By
1 | The page of the article has spammy or clickbaity advertisements. This is limited to a subjective assessment at this time. | Credibility Coalition

10.1.3 Signal: Number of Advertisements

Key | Proposed Template Statement | By
1 | Number of ads that appear on the article page. This includes display ads, content recommendation engines, sponsored content, and calls for social sharing. | Credibility Coalition

11. Subject type: Aggregation

Includes: news feed, content portal, site using syndicated content

An aggregation is a collection of content from other providers.  As such, attribution and related trust issues require special consideration.

12. Subject type: Venue

A Venue is a branded content channel, which might be separable from the provider of that channel.  For instance, a particular newspaper’s “Lifestyle” and “Sports” sections would typically be considered distinct Venues with distinct reputations. In this case, they would be sub-Venues of the newspaper itself.

The distinction between Venue and Provider is not always clear in people’s minds; it is essentially the distinction between a brand and the brand’s owner.  We try to make the distinction in order to be able to understand the impacts on reputation when, for example, one company sells a content brand to another company.

13. Subject type: Provider

14. Subject type: Creator

Also called: author, writer, reporter, byline

A Creator (in this context) is a Person who creates content, such as by writing articles. All the signals which apply to a Person also apply to a Creator.  Signals are listed here if they only really make sense for content creators.  Even if every Person were a Creator, this grouping could still be convenient.

15. Subject type: Person

16. Subject type: Organization

If you make or suggest any edits to this document, you are agreeing to the W3C Community Contributor License Agreement which has significant copyright and patent implications.

Please also read 1.9. Instructions for editing this document before making any changes.