Six Months of ARTT in 2024: A Progress Report

July 12, 2024

It’s July, and we’re already halfway through 2024! Having spent the past few days enjoying a waste of gunpowder and sky, our team is back at work building the various elements of the ARTT project. It’s a good moment to reflect on our work, especially over the past six months, and to look ahead.

To recap: The ARTT team entered 2024 having already shared our ARTT Response Model, 10 suggested modes for good online communication, and accompanying software that helps guide public communicators in difficult environments about what they might say and how they might say it.

Last summer and early fall, we tested the tool with a large group of public health communicators. At the start of this year, we used that feedback to build the next version of the online ARTT Guide software. Because part of our Guide uses generative AI tools to make suggestions, at the start of 2024 we also began to think carefully about ethical uses of this technology. And although we had already been working on a training curriculum, as 2024 went on we began considering what such a curriculum might look like for people working in charged environments other than public health, such as elections.

Here’s our progress:

Building for a public release of our ARTT Guide

On the software side of things: Using last year’s feedback from testers, we redesigned the ARTT Guide to make it easier and more intuitive for users. We’re nearing the completion of the first version of the tool that will be made available for public use on a limited basis this fall. Our goal is to launch a fuller-featured second version of the tool, designed for organizational use, by the end of this year.

On the research side, we continue to refine and expand our ARTT Response Model: considering more ways of thinking about response, and starting to factor in the identity of the person you’re responding to and your own goals in responding. A new version of the Model is due to be released in the next month or two, and these revisions and expansions will be incorporated into the ARTT Guide software.

AI Working Group

In February, we announced with the National Public Health Information Coalition (NPHIC) the creation of an Ethical Use of AI in Public Health Communications Working Group. The working group is currently developing a set of practical guidelines and best practices for public health communication professionals. These new guidelines will encompass different AI technologies and actual use cases for communicators. The working group aims to share a draft of the guidelines for open feedback later in 2024.

Looking towards elections

Local election officials (LEOs) across the United States face increasingly contentious conditions when engaging with the public. To address this challenge, and with funding support from the Bipartisan Policy Center under a grant program advised by the Center’s Election Workforce Advisory Council, we have begun a pilot project aimed at helping these officials better connect and communicate with voters. In collaboration with the North Carolina State Board of Elections (NCSBE), we are building ARTT-LEO, a training curriculum intended to build election officials’ capacity to engage and communicate with the public. More partnerships of this kind will be announced soon.

On the road: Meet us in Detroit!

A goal for the second half of the year is to continue to talk to and learn from everyone we can. As a starting point, we’ll be attending the NACCHO 360 conference in Detroit from Tuesday, July 23rd to Friday, July 26th.

If you’re also planning to be there, be sure to stop by our booth in the exhibition hall and say hi!

Until then, ARTT.

Help the ARTT team reach more people!

If you like this blog post, help us reach more people by sharing it with a colleague or a friend who might be interested in discussing how to create opportunities for trusted conversations online. You can also share this link to subscribe to our newsletter.