Let’s Go Exploring!

December 20, 2024

It’s that time of year again, when it’s natural to consider what we accomplished over the past twelve months, and possible directions to explore in the year ahead.

The ARTT team has indeed been busy in 2024. We:

  • Initiated ARTT Communications training: We released another version of our ARTT Communication Model, which aims to give people options for trust-building conversations. We also piloted a virtual training course, based on our communication insights, with election officials in North Carolina and California.

  • Prototyped the ARTT Communications software tool: We continued to develop our software tool, based on this communication model. Though we’ve found that the most recent prototype isn’t quite right, we are excited about the next version, which now includes our own Artificial Intelligence (AI) augmentation.

  • Developed ethical and practical guidelines for AI usage: Speaking of AI, we have also been working on developing ethical guidelines for AI usage in public health with a thoughtful group of public health communicators and other advisors. We look forward to kicking off 2025 by releasing the first version of these guidelines in January.

  • Launched Discourse Labs: And finally, throughout 2024 we’ve been transitioning to our own 501c3, Discourse Labs, an independent applied research group focused on developing tools and resources for productive public dialogue.

Setting our sights on 2025

So, what’s in store for the coming year? Over the past few months we have also been engaged in some serious reflection about how Discourse Labs should move forward.

Most of all, we’d like to figure out how to become more useful, non-duplicative, and sustainable as an organization that aims to support productive public dialogue.

With these guiding principles in mind, here are three themes we will be considering for 2025:

1) Based on the feedback from our training, there may be helpful work to be done in teaching communication skills to different communities.

For example, in collaboration with Professor Monica Schoch-Spana at Texas A&M University - San Antonio, we’ll be testing a new ARTT communication course for students in public and community health in the spring.

We’ve also been collaborating with Amanda Yarnell of the Harvard T.H. Chan School of Public Health and Robert Jennings of the National Public Health Information Coalition to pilot ethical and practical training around AI usage in 2025 – following the development of those guidelines mentioned above.

And we’ll continue to think through how teaching election officials to connect with their constituencies and de-escalate tensions could be helpful in advance of the midterm elections.

2) We could try to more directly support or improve the public’s ability to productively engage on topics of public interest.

What do I mean by this? I was reflecting on a puzzle as I listened to great presentations and discussions at a recent Bipartisan Policy Center Elections Summit. It comes down to a question: Is there a unique, and possibly powerful, role that civil society and nonprofit organizations like ours might play when it comes to fostering specific public conversations?

There are a variety of great initiatives that support general principles of communication with others – like this one or this one – and we’ve learned from many of them. There are also a number of necessary efforts to understand what the public thinks about critical issues, often originating in academia.

There are also many important efforts to connect with community organizations and influencers and to ask them to amplify the work of public communicators around, for example, health or election information. But what’s harder is for local community groups to find this information for themselves and to communicate concerns and questions in productive ways.

For example, how easy is it for citizens to learn, from people they trust and understand, how their local election office ensures election integrity, and to ask questions about it? Is supporting the public’s ability to engage an area that could use more help from us?

3) Finally, I’ve been wondering if we can help create more room for ethical discussions around public conversation itself.

At the recent SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing (CSCW), where we presented our paper on misinformation harm, I had a lovely time conversing with an undergraduate computer science student who had recently been working on empathy research. She asked a terrific question: “Just because we can make AI more empathetic, should we?”

The question was rooted in an observation, which I’ll take the liberty of building out a bit more with my own thoughts: we know that millions of people are developing relationships with AI chatbots, a number that tragically included a 14-year-old who recently died by suicide.

At the same time, the population of the world just keeps on growing – we reached 8 billion in late 2022. In the U.S., the surgeon general pointed in 2023 to the deep loneliness and isolation that many Americans feel.

This leads to a question: Are we appropriately investing our energies and resources in the hard work of helping humans better connect to one another rather than to machines?

Maybe it’s not a bad idea to have a slightly non-empathic chatbot, in other words, if it helps to remind us of the difference.

In Conclusion…

So, those are our thoughts about potential directions for the ARTT team – and Discourse Labs – to pursue in 2025. The coming year stands before us like a blank piece of paper, or a field of fresh snow, much like the final panel of an iconic Calvin and Hobbes cartoon some of you may remember.

If you have thoughts to share — including your own responses to the questions around AI empathy — please let us know at hello@discourselabs.org. In the meantime, we are sending our best wishes to everyone for a peaceful and joyful holiday season.