This is the second of two blog posts exploring the ethical AI guidelines for public health communicators. Read our first blog post here.
In our previous blog post we shared that, since the start of 2024, members of the Ethical Use of AI in Public Health Communications working group have been collaborating to develop a set of practical guidelines. These contain suggested practices for public health communicators who are considering incorporating AI into their work.
Over the summer, the working group developed preliminary guidelines, drawing on case studies, current research in the field of AI, and the practical needs of public health departments. The group is now ready to share the draft guidelines with other public health communicators and their organizations for feedback.
Our approach to creating the guidelines is deliberately iterative: we know that AI will continue to change, and the guidelines will need to respond to those developments. This is why we see what we're building as the first of several drafts, part of an evolving framework that the working group will continue to revisit and refine into 2025 and beyond.
In addition, we know that the guidelines need to be adaptable to different contexts. One of the most insightful takeaways so far from speaking with the working group, which includes more than a dozen members and advisors from diverse backgrounds in public health communications, is that every organization has a unique structure, so the guidelines must be flexible enough to accommodate this.
Finally, we wanted to create something practical that also allows organizations to develop their own logic for implementing ethical considerations when it comes to AI. The goal of the guidelines is not to create a set of strict rules everyone has to follow. Rather, it is to propose a set of challenges and principles to consider when thinking about using AI in public health communications.
The intent of these preliminary guidelines is for organizations to use them to create their own policies. The guidelines provide examples of what those policies could be, but it is ultimately up to each organization how it wishes to address ethical considerations.
The guiding principles the working group has identified so far are: protecting the public, establishing accuracy, centering human judgment, ensuring community health, and keeping up to date with AI technology (as well as the public's view of it).
The first principle, protecting the public, is built on the primary goal of public health communication: informing people and supporting their health and well-being. This includes maintaining the public's trust, being transparent about AI usage, and prioritizing the public's privacy.
Establishing accuracy means ensuring health information is accurate, timely, and complete. Public health communicators are responsible for the information and recommendations they communicate, regardless of whether AI was used. Communicators must diligently review all information, whatever its source, and remain vigilant about potential biases in AI tools or information sources. This includes reviewing any text generated by AI.
Centering human judgment recognizes that, regardless of AI use, humans remain central to communicating health information. A generative AI tool may draft a helpful vaccine announcement, but humans are the ultimate judges, evaluators, and authors of any message. This includes ensuring the information is appropriate for the organization and properly citing information and creative works.
Ensuring community health means striving to communicate in ways that are fair and just, so the message does not harm its recipients. For example, AI-generated content may only hold true for a majority group, or it may include diverse information that humans are needed to contextualize. Considering, upfront in the implementation of AI tools, our obligation to give all people the opportunity to achieve their highest health can help ensure these tools are applied in fair and just ways while avoiding the pitfalls of bias and exclusion.
Finally, communicators using AI must build their technical competence and keep up to date with relevant AI tool updates, as well as with public sentiment and trust toward AI use in communication. Organizations should provide resources for ongoing training and for keeping AI-related knowledge current.
So, what do you think? We want to learn what public health communicators and their organizations think of AI in communication generally, what you like or dislike about our guiding principles, and the degree to which you would trust communicators who follow a set of policies addressing those principles. Members of our working group will seek feedback from various stakeholders, such as members of their organizations. Be on the lookout for requests from your local rep to participate in feedback sessions!
Some of the stakeholders we will seek input from are representatives of the general public. Because these guidelines center on public health communicators' responsibility to the public, we think it is vital to obtain their feedback.
Finally, if you are not attached to the working group, we also want to hear from you! If you would like to share your thoughts on the principles listed above, or would like to help run a session in your community around a full draft of the guidelines, please fill out this form.
We look forward to hearing your thoughts on this first version of the guidelines, and we will continue to update you as they progress!