The International Journal of Strategic Communication published a study on artificial intelligence and crisis communication authored by Dr. Elizabeth Ray and Dr. Patrick Merle, professors in the School of Communication, and Kaylin Lane, a doctoral student.
“Increasingly, professionals in communication are using AI to develop written statements,” Ray said. “We wanted to see if people would accept those messages, especially during times of crisis, after we saw an organization face extensive media backlash for using AI to generate a crisis response.”
Inspired by a real-world incident involving Vanderbilt University, the research team wondered whether that media backlash had been tied to the disclosure label on the statement, which read “paraphrased by AI.” The study tested whether the use of AI (with and without a label identifying the message as AI-generated) affected message acceptance and organizational credibility.
To the team’s surprise, participants in the experiment did not necessarily mind when crisis responses were generated by AI and labeled as such. Even so, the researchers concluded that practitioners using AI messaging in times of crisis must consider the ethical implications and should disclose that the messages are AI-generated, though further study is needed.
As Ray explained, the finding could be relevant to the local community. “Don’t be surprised if you soon see messages generated by AI during an emergency here in Florida. In December, the Florida Department of Emergency Management announced it now has an AI-generated emergency messaging system, called BEACON. While it can undoubtedly be a real-time tool to distribute emergency messages to Floridians – particularly during hurricane season – I am hopeful that emergency managers consider our research and disclose which FDEM messages are generated by AI.”
The full research article is available in the International Journal of Strategic Communication.