At the beginning of 2024, the ARTT project launched a survey (feel free to participate!) that asks public health communicators about their views on using generative AI tools in their communications. So far, approximately 20 public health communicators from across the United States have completed the survey, generating a wide range of responses.
Some survey respondents said they have been motivated to adopt AI, while other public health communicators reported feeling hesitant and concerned about this emerging technology. For example, one survey respondent stated, “I would like to stay current with AI trends because I think if we don't embrace AI, then public health will fall behind.” In contrast, another participant reported, “I am concerned about accuracy and the devaluing of public health communicators. I feel that it takes away from communications specialists, artists, editors, etc.”
The survey responses included other important concerns as well, many of which focused on the ethical aspects of using AI. The health communicators who responded to our survey had questions about how using AI would affect the accuracy of their communications, or how it would impact their audience’s trust in them. Respondents also expressed worries about the implicit racial bias found in many AI tools.
Anticipating these concerns, earlier this year the ARTT project also partnered with the National Public Health Information Coalition (NPHIC) to launch the Ethical Use of AI in Public Health Communications Working Group. Members of the working group represent and bring insights from a broad spectrum of public health communications in the United States and its territories. In addition, the working group will have the support of advisory members who are able to offer additional perspectives. You can find the list of members and advisors here.
The goal of the working group is to develop a set of draft guidelines and best practices encompassing different AI technologies and actual public health communication use cases by late 2024.
As the working group discusses how to design these practical guidelines throughout the year, we are relying on the following guiding inquiry:
“What are acceptable, ethical, and appropriate uses of Artificial Intelligence (AI) in public health communications?”
One of the first ways the working group approached this guiding inquiry was by considering which aspects of AI tools pose a challenge to using them ethically in professional communication. The working group then identified the following themes to represent the challenges presented by working with generative AI tools:
While AI can create whole documents in a split second, it can also fabricate sources and “facts” out of thin air. For topics that involve public health, however, information must be sound and accurate to foster public understanding, support, and compliance with guidance and recommendations.
At the same time, large-scale models to date have not escaped the difficulties around human bias such as stereotypes: will the use of AI exacerbate these issues?
Public health departments communicate with authority and expertise, no matter what technology is being used. Using AI-authored information should not, then, call into question whether the ultimate author is the public health department or organization, rather than the AI itself or a collection of several unidentified sources.
Does the use of AI in fact affect how the public perceives the authority of health organizations?
The issue of trust between health communicators and the public is more important than ever. Though AI promises to improve the efficiency and scale of engagement, it is not clear that this technology helps communications feel genuine.
For example, how much of a messaging campaign or even basic message must be created or edited by a public health professional, for the information to seem authentic from the organization versus a “bot”?
AI seems to promise a revolution for the entire practice of health communications. It is not clear, however, that new AI technologies offer all individuals – regardless of ability – the same opportunities.
If not, what considerations should be made?
Informed by our guiding inquiry, one important consideration the AI Working Group started with is that there are in fact many different uses of AI in communications. One example, and perhaps the first use case that comes to mind for many people, is using generative AI tools to produce communications: one can simply enter a topic and ask the AI tool to generate content to share with the public.
We asked working group members to explore this particular application of AI tools by considering the following case study:
Is it ethical for a public health communicator to use generative AI to create a one-page explainer on Mpox at a fifth-grade reading level?
In the discussion that followed, AI working group participants talked about the importance of thoroughly checking everything produced by the AI.
The case study also sparked the beginning of a working group discussion about when it is important and ethical to disclose the use of AI.
So far this year, the working group has discussed a variety of similar case studies, ranging from limiting AI use to checking spelling and grammar for public-facing communications, to using generative AI to build a health communications chatbot that would interact autonomously with the public. It was through these case studies that we began identifying important ethical considerations and challenges.
As our working group meetings have progressed so far in 2024 and we have worked through various case studies, our understanding of the challenges and complexity of our guiding inquiry has grown. It’s important to note that the working group’s guiding inquiry may continue to shift as we get more feedback from our working group members and the public, and as technology rapidly advances.
This summer, the group is collaborating on the first draft of a set of guidelines. In our next blog post later this month, we will go into more detail about our current draft of practical guidelines and best practices for using AI in public health communications, and how we plan to update these guidelines going forward.
Help the ARTT team reach more people! If you like this article, help us reach more people by sharing it with a colleague or a friend who might be interested in discussing how to create opportunities for trusted conversations online. You can also share this link to subscribe to our newsletter.