Analysis Framework: Online Misinformation Harm

How urgent is it to respond?

Last updated: September 22, 2022 (Version 1.0)

What factors can fact-checkers consider when making strategic decisions about which post or message to prioritize? The Online Misinformation Harm framework was developed in response to that question. Its aim is to help fact-checkers assign degrees of urgency to potentially harmful content, in order to identify which items to address first.

The framework describes five dimensions of potential urgency for handling misinformation as a harm. The five dimensions are:

  1. Actionability: The potential for action resulting from the content
  2. Exploitativeness: The degree to which the content exploits its intended audience
  3. Likelihood of Spread: The likelihood of the content's spread and exposure
  4. Believability: The believability of the content's information to its audience
  5. Social Fragmentation: The tendency towards social fragmentation within the content's narrative

Each of these five dimensions is refined into a set of indicators, expressed as yes/no questions, which reflect factors currently understood to affect the magnitude of the misinformation's potential harm.
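To make that structure concrete, here is a minimal sketch in Python of how the dimensions and their yes/no indicators could be encoded. The dimension names come from the framework itself, but the sample indicator questions and the simple count-based tally are illustrative placeholders, not the paper's actual questionnaire or scoring method.

```python
# Minimal sketch of the framework's structure. Dimension names are from the
# framework; the sample questions and the yes-count tally are illustrative
# assumptions, not the working paper's actual questionnaire or scoring.

from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    indicators: list[str]  # yes/no questions for this dimension

FRAMEWORK = [
    Dimension("Actionability", [
        "Does the content call for or imply a specific action?",
    ]),
    Dimension("Exploitativeness", [
        "Does the content target an audience vulnerable to exploitation?",
    ]),
    Dimension("Likelihood of Spread", [
        "Is the content framed or formatted to encourage sharing?",
    ]),
    Dimension("Believability", [
        "Does the content mimic credible sources or formats?",
    ]),
    Dimension("Social Fragmentation", [
        "Does the narrative pit one social group against another?",
    ]),
]

def urgency_tally(answers: dict[str, bool]) -> int:
    """Count 'yes' answers across all indicators as a rough urgency signal."""
    return sum(
        answers.get(question, False)
        for dim in FRAMEWORK
        for question in dim.indicators
    )

# Example: two "yes" answers yield a tally of 2.
answers = {
    "Does the content call for or imply a specific action?": True,
    "Does the narrative pit one social group against another?": True,
}
print(urgency_tally(answers))  # 2
```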

More about the research behind this framework, and the corresponding questionnaire, can be found within our working paper described below.

“Urgent: A Structured Response to Misinformation as Harm”

Working Paper, September 2022

Online misinformation is a major challenge for societies today. Beliefs in false claims about science, such as vaccine misinformation, can lead people to engage in harmful behavior that risks their own health. Such misinformed beliefs can also defeat public health measures that rely on collective compliance to protect society’s most vulnerable. Similarly, a belief in inaccurate or misleading narratives about topics such as vote-rigging or other supposed election interference can lower the public’s trust in democratic institutions, and in turn affect the level of participation in political activities such as voting, interfere with the peaceful transition of power, and even motivate political violence. 

Fact-checking is a critical activity in addressing misinformation. It supports individual readers who seek good information, and it also supports content moderation efforts on large platforms. However, fact-checking is laborious: the process involves investigating claims, collecting convincing evidence that those claims are false or misleading, and then sharing that evidence. With torrential volumes of user-generated content created daily, it is impossible to fact-check every new article, post, message, or claim.

As a result, fact-checkers tasked with addressing online misinformation must prioritize what they choose to tackle each day. Given that prioritization is unavoidable, how should fact-checking efforts to combat misinformation decide which content to tackle? A working group of academics, researchers from non-governmental organizations, and students based in the Social Futures Lab at the University of Washington's Allen School of Computer Science and Engineering set out to explore this question.

From interviews that we conducted with fact-checkers, we found that fact-checking is still a young field whose processes are not yet standardized. Fact-checkers typically take a relatively ad hoc approach to prioritization, relying on individual judgment and case-by-case discussion with colleagues. Could prioritization instead be achieved in a principled and systematic way? One way forward that we propose is via harm assessment.

In applying a structured harm assessment to misinformation, we begin with the observation that while all misinformation is harmful to some degree, not all misinformation is equally harmful. Following a literature review and a series of interviews and workshops with fact-checkers and other misinformation experts, we identified major dimensions for assessment.

Five dimensions (actionability, exploitativeness, likelihood of spread, believability, and social fragmentation) can help determine the potential urgency of a specific message or post when considering misinformation as harm. We conclude the paper with a checklist of questions to help determine a piece of content's relative level of urgency within each dimension.

The dimensions and the questionnaire are intended as both conceptual and practical tools to support fact-checkers, content moderators, peer correction efforts, and other initiatives as they make strategic decisions about how to prioritize their responses to spreading misinformation.

Download the full paper here.

A detailed version of the questionnaire alone can be found here, and feedback on the framework and questionnaire is welcome via this Google Form.


Methodology and Acknowledgments 

The development of this framework was informed by existing research in the fields of misinformation, cyber harms, and hate speech, as well as by semi-structured interviews with professional fact-checkers. It is a joint effort between the ARTT project team and research partners affiliated with the UW Social Futures Lab.