“Ultimately, if you’re not actively talking about AI now, you’re at risk of not being able to catch up.”
— Lori Tremmel Freeman
For decades, public health workers have used computers to sort through large datasets for disease patterns and early warning signals. But new artificial intelligence tools could supercharge that ability, uncovering patterns faster and more accurately.
AI — technology that enables computers to learn from experience and do tasks typically associated with human intelligence — already has an extensive footprint in the health care sector, from patient diagnosis to new drug development. Adoption has been slower in the public health field, though use spiked during the first years of the COVID-19 pandemic.
The technology is already being put to use to boost a number of core public health functions — in some cases, deployed years before the pandemic — including disease surveillance, outbreak forecasting, health education and disease prevention. But its role is still emerging in the field, as are ethical guardrails on its use.
“I think (AI) holds a lot of promise for public health,” said APHA member Lori Tremmel Freeman, MBA, CEO at the National Association of County and City Health Officials. “Ultimately, if you’re not actively talking about AI now, you’re at risk of not being able to catch up.”
At the federal level, the Centers for Disease Control and Prevention has used AI for a range of activities, noting its ability to process not only huge amounts of data, but also content such as images or handwritten doctor’s notes.
Examples of CDC’s work include using AI to improve the speed and accuracy of tuberculosis surveillance; to analyze massive amounts of free text for signals of COVID-19 vaccine safety issues; and to boost response to Legionnaires’ disease by using satellite images to automatically detect cooling towers, which can spread Legionella bacteria. At the local health department level, Freeman said AI could be a particular game-changer for predictive analysis, helping responders detect and contain disease outbreaks faster.
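Detecting cooling towers in overhead imagery is, at its core, an image classification problem. The sketch below is purely illustrative and is not CDC’s actual pipeline: it assumes a hypothetical folder of satellite tiles labeled “cooling_tower” or “other” and fine-tunes a pretrained network to tell them apart.

```python
# Illustrative sketch only: fine-tuning a pretrained CNN to flag satellite
# image tiles that contain cooling towers. The "tiles" folder layout is
# hypothetical (tiles/cooling_tower/*.png, tiles/other/*.png).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

data = datasets.ImageFolder("tiles", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday photos, then swap in a
# two-class output head: cooling tower vs. not.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative training pass
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```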
AI-enabled chatbots have been used for COVID-19 education by state and local health agencies. In Dallas, for example, officials used an AI-powered chatbot early in the pandemic to answer residents’ questions about the disease. Health departments in Charlotte, North Carolina, and Boston deployed AI-supported chatbots to answer questions about COVID-19 vaccines. In 2022, the California Department of Public Health launched a chatbot aimed at combating misinformation about COVID-19.
Last year, the Association of State and Territorial Health Officials surveyed members of its Informatics Directors Peer Network about AI use. More than one-third of respondents said AI was currently being used in their health departments, both officially and informally. Primarily, AI was being used for content generation, such as drafting reports or communications or producing programming code.
“There’s lots of interest, but use is really just beginning,” Freeman told The Nation’s Health. “People are starting to experiment with it.”
Generative AI tools that produce text and images, such as ChatGPT, could make a real difference for short-staffed health agencies struggling with high rates of worker burnout, said John Brownstein, PhD, a professor at Harvard Medical School and chief innovation officer at Boston Children’s Hospital.
“Ultimately, the individual is responsible for the output,” he told The Nation’s Health. “But once you start using these tools, it’s hard to go back.”
Brownstein has long led work to use machine learning — a form of AI — to improve infectious disease surveillance. In 2006, he co-founded HealthMap.org, which uses machine learning to search through Internet text, such as local news and social media sites, for real-time disease information. The mapping tool can tease out relevant data points from the clutter of information on the web and, possibly, reveal disease clusters that traditional surveillance misses.
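HealthMap’s models are far more sophisticated, but the basic task of separating outbreak-related text from online noise can be sketched with a simple bag-of-words classifier. Everything below is illustrative, including the toy training snippets, which stand in for a real labeled corpus of news text.

```python
# Illustrative sketch, not HealthMap's actual code: scoring short news
# snippets for outbreak-related language with a bag-of-words classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = possible disease signal, 0 = noise
texts = [
    "hospital reports cluster of pneumonia cases of unknown cause",
    "health officials investigate spike in flu-like illness",
    "city council debates parking rules downtown",
    "local team wins regional championship",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new item scraped from the web for outbreak-related language.
new_item = "clinic sees unusual rise in respiratory infections"
print(classifier.predict_proba([new_item])[0][1])  # probability of a signal
```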
In 2019, HealthMap warned of a “cluster of pneumonia cases of unknown etiology” just days after the first COVID-19 case was identified, according to a 2023 article in the New England Journal of Medicine that Brownstein co-authored. During the COVID-19 pandemic, Brownstein and his colleagues used the tool to track the virus’ spread.
Brownstein said the new generation of AI tools offers a “tremendous” opportunity to analyze data that might be too challenging to tap into otherwise. But it still needs human involvement and oversight.
“AI cannot replace the cross-jurisdictional and cross-functional coordination that is truly essential for the collective intelligence required to fight novel and emerging diseases,” he co-wrote in the NEJM article.
Research shows that, left on their own, AI-enabled tools could worsen inequities, since they learn from biased data. For example, in a study published in 2023 in npj Digital Medicine, researchers asked generative AI tools, such as ChatGPT and Google’s Bard, a list of medical questions. Frequently, the tools responded with debunked, racist answers. A landmark 2019 study, published in Science, showed that an AI algorithm used to assess medical needs for millions of patients was biased against Black people.
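That 2019 study traced the bias to the algorithm’s proxy: it predicted future health care costs as a stand-in for medical need, and because less had historically been spent on Black patients at the same level of illness, the model underrated their needs. The invented numbers below illustrate how a cost-based proxy can misrank equally sick patients.

```python
# Illustrative toy example (invented numbers, not the study's data): why
# predicting health care *costs* as a proxy for health *needs* encodes bias.
patients = [
    {"id": "patient_1", "illness_score": 8, "past_spending": 8000},
    # Equally sick, but historically received less care, hence lower spending:
    {"id": "patient_2", "illness_score": 8, "past_spending": 5000},
]

# A model trained to predict spending ranks patient_1 as "higher need,"
# even though both patients have the same illness score.
by_cost_proxy = sorted(patients, key=lambda p: p["past_spending"], reverse=True)
print([p["id"] for p in by_cost_proxy])  # ['patient_1', 'patient_2']
```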
A number of ethical recommendations have been issued on the problem. In January, the World Health Organization released guidance on the ethics and governance of large multi-modal models — a type of generative AI that analyzes multiple types of data, including text, photos, audio and video. Shortly before WHO’s release, a federal panel convened by the U.S. Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities published its principles for addressing the impact of algorithm bias on racial and ethnic health disparities.
More binding actions are happening, too. In 2022, for example, federal health officials proposed updated nondiscrimination rules that address bias in clinical algorithms. Across the country, governors — from California to Pennsylvania to Alabama — are signing executive orders to study the risks and benefits of AI.
Kenneth Goodman, PhD, director of the University of Miami’s Institute for Bioethics and Health Policy and a contributor to WHO’s ethical guidance, said even with plenty of ethical frameworks available, “thoughtful, evidence-based” legislation is still needed to ensure AI is wielded safely.
“Right now, we have the perfect storm of big data and powerful computing,” Goodman said. “But the issue is still the same: What is appropriate use and who is the appropriate user?”
Limited understanding of how AI makes its decisions could undercut credibility and trust, core currencies in public health, Goodman noted, making transparency about its use paramount. But even with clear risks, Goodman said AI also offers clear benefits.
“We’re not doing nearly enough to find out what (AI) is good for and find out what it’s not good for,” he said. “It’s still quite early in the morning on what could be a very long and exciting day.”
Some research is showing glimpses of what is possible for public health.
A few years before COVID-19, AI researchers and social workers in Los Angeles teamed up to promote condom use and HIV prevention among youth who were experiencing homelessness. They developed an algorithm to pinpoint the most influential people in the community to train as peer educators, using AI to analyze complex networks of potential connections.
In a three-year study involving more than 700 youth, the AI-supported intervention yielded statistically significant reductions in unprotected sex, while the non-AI interventions did not. Milind Tambe, PhD, co-author of the study and director of the Center for Research on Computation and Society at Harvard University, said AI can be especially useful for optimizing limited resources, whether peer educators or public health budgets.
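The study’s actual algorithm handled uncertainty about which friendships truly transmit influence, but the core idea of choosing a handful of people who reach the widest slice of a network can be sketched with a simple greedy heuristic. The friendship network and helper function below are hypothetical.

```python
# Illustrative sketch only: greedily picking peer educators whose social
# neighborhoods "cover" the most of a network. The study's real algorithm
# is more sophisticated.
import networkx as nx

# Hypothetical friendship ties among youth at a drop-in center
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("F", "G"), ("E", "G")]
graph = nx.Graph(edges)

def pick_peer_educators(graph, budget):
    """Each round, add whoever reaches the most people not yet covered."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(
            (n for n in graph if n not in chosen),
            key=lambda n: len(({n} | set(graph[n])) - covered),
        )
        chosen.append(best)
        covered |= {best} | set(graph[best])
    return chosen

print(pick_peer_educators(graph, budget=2))  # ['D', 'E'] for this toy network
```

A greedy pass like this is a common baseline for such coverage problems: it concentrates scarce training slots on the people who extend the intervention’s reach the most.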
Tambe, who directs AI for Social Good at Google Research, said he also noticed rising interest in public health among AI researchers during the pandemic, but that interest has since waned. He urged public health organizations to keep engaging and educating AI experts on ways they can help.
“My main message is partnership,” said Tambe, an APHA member. “When I speak with students in AI, there’s so much interest in doing something good with all that knowledge, they just aren’t sure what to do.”
For more information, visit bit.ly/cdcai.
- Copyright The Nation’s Health, American Public Health Association