The project brings together insights from research fields such as computer science, social science, media literacy, conflict resolution, and psychology, as well as from practitioner communities focused on health communication, journalism, vaccine safety, and Wikipedia.
Our main tool, the ARTT Guide, is a Web-based software assistant that provides a framework of possible responses for everyday conversations around tricky topics. By bringing together insights about how to engage in positive exchanges around credible information, and by providing guidance, encouragement, and inspiration, the ARTT Guide will help our users answer the question: “What do I say and how do I say it?”
For example, we noticed that one potentially contentious exchange around vaccines in a subreddit ended respectfully after the speakers engaged in listening, empathizing with different audiences, and taking the perspective of other people. (Wouldn’t it be nice if all conversations went like this?)
We aim to deliver current expert guidance and encouragement through a software assistant so that people can combine ARTT’s insights with their own local expertise and experience as they strive to have better online conversations.
What is the goal of ARTT?
ARTT aims to support trust-building online conversations by addressing the fundamental obstacles presented by social media platforms and online communication. In the middle of an online exchange about inaccurate or difficult-to-understand information, how you might respond depends not only on your understanding of the information, but also on the goal of your conversation.
ARTT wants to help the people doing this challenging work. On top of the intellectual difficulty of trying to find the right information or trying to figure out how to respond, it is also emotionally and psychologically draining to participate in tricky online conversations. People who are motivated to respond to these situations — amateur volunteers, members from online communities like Wikipedia, health communicators, journalists, and librarians — are deeply fatigued and need help.
ARTT offers practical responses to improve online conversations. What solutions do experts have to offer? Can lessons from academics and experienced practitioners be boiled down for easier use? And in what ways are everyday people already succeeding at trust-building online conversations? Distilling these possibilities for those navigating conversations is what the Analysis and Response Toolkit for Trust is trying to do.
ARTT provides inspiration. These motivated community members, experts and amateurs alike, are modeling the kinds of conversations that are needed in a democracy: Open, respectful exchanges about factual information on issues that touch our day-to-day lives. So, we also collect and provide examples of these public exchanges to remind ourselves of what is possible and to help imagine what better online exchange can look like.
Is ARTT specifically focused on questions around vaccines?
Yes. Although many of its principles are broadly applicable, the ARTT Guide focuses first on providing resources for discussions around vaccines.
What makes ARTT’s approach unique?
Instead of focusing on winning arguments and shutting people down, the goal of the ARTT project is to help people build bridges and engage in productive online conversations. We aim to bring expert guidance and encouragement through a software tool so that people can combine it with their own local expertise and experience as they strive to have better online conversations.
Who will be using ARTT?
We hope that ARTT will be broadly useful! Most immediately, the ARTT project supports health communicators, educators, and other “superhero” responders who work to keep local, online communities more informed.
What are the components of ARTT?
The focus of the project is the ARTT Guide, a Web-based software tool that will provide insights into points of analysis and response during online conversations around complicated topics.
The ARTT toolkit includes relevant tips and guidance related to different types of response. Our aim is to empower our users in conversations with options for response, including correction, listening, empathizing, and encouraging healthy inquiry.
The toolkit also includes the ARTT Catalog, a curated library of studies and reports from across research disciplines. The Catalog presents the latest findings from these disciplines about how to best engage in conversations around misinformation or other contentious topics in online spaces in trust-building ways.
What are responses, and why are they important?
Our aim is to empower users with options for response when they are in conversations in which they feel stuck. Research shows there are a number of possibilities to consider, such as listening, empathizing, encouraging healthy inquiry – or perhaps not responding at all.
What sources are you using for ARTT’s responses? What is your methodology based on?
ARTT’s guided responses are sourced from the latest research in psychology, conflict resolution, media literacy, and other fields. While our tool will offer suggestions on how best to correct information, it will also give users guidance on other response possibilities, including: co-verifying, de-escalating, empathizing, encouraging healthy inquiry, encouraging norms, listening, sharing, and taking perspective.
Does the ARTT Guide include Artificial Intelligence (AI)?
Currently, we are exploring the potential to use AI in ways that help provide users with language and writing options as they craft their responses. The options will not be automatic: AI-generated responses must first be selected, verified, and edited by users, who ultimately decide what they want to say.
What is the timeline for ARTT?
The ARTT team will work with partners who will help test the tool, starting in May 2023.
After that, partner feedback will be incorporated to refine the ARTT Guide, and additional outside review will be solicited to assess the tool for cybersecurity issues.
We are aiming to release a public version of our tool in August 2024.
Who is building ARTT?
Hacks/Hackers and the Paul G. Allen School of Computer Science & Engineering at the University of Washington are ARTT’s lead organizations. During Phase II, which commenced in October 2022, partner and collaborating organizations include Wikimedia DC, the Social Science Research Council (SSRC), Children’s Hospital of Philadelphia, the National Public Health Information Coalition, and others. The project is also advised by a member of the World Health Organization’s Vaccine Safety Net. Throughout Phase I, a variety of organizations, including Wikimedia DC, MuckRock Foundation, and the Social Science Research Council, collaborated and partnered in the project.
Our team includes researchers, journalists, computer scientists, educators, democracy and conflict resolution specialists, Wikimedians, health science communicators, and others working on information reliability.
Who is on your team?
Connie Moon Sehat (Hacks/Hackers Researcher at Large) is the Principal Investigator (PI) for the ARTT project. Amy X. Zhang (Assistant Professor at University of Washington’s Allen School) and Franziska Roesner (Associate Professor, Allen School), serve as co-Principal Investigators (co-PIs). Kate Starbird (Associate Professor in the UW Department of Human Centered Design & Engineering, and Director of the UW Center for an Informed Public), Tim Althoff (Assistant Professor in the Paul G. Allen School of Computer Science & Engineering), and Tanu Mitra (Assistant Professor, UW Information School) all participate as senior personnel on the project.
How is ARTT funded?
Building on National Science Foundation funding and support during Phase I, in October 2022, Hacks/Hackers, the Paul G. Allen School of Computer Science & Engineering, and partner organizations received a $5 million award from the National Science Foundation’s Convergence Accelerator. The award supports Phase II development of the Analysis and Response Toolkit for Trust (ARTT).