Experts Warn AI Could Pose Pandemic Risk; Urge Government Oversight

Researchers from Johns Hopkins University, Stanford University, and Fordham University have issued a stark warning about the potential risks of advanced artificial intelligence (AI) in a recent paper published in Science. The experts caution that AI models, which are increasingly used to process large volumes of biological data and accelerate drug design and vaccine development, could also inadvertently contribute to the creation of dangerous pathogens.

The paper highlights that while current AI models do not yet pose significant biological risks, the technology is advancing rapidly. "The same AI models designed for benign purposes, such as delivering gene therapy, could be misused to engineer more pathogenic viruses capable of evading immunity," the authors note. They advocate proactive legislative measures to mitigate these risks before they become a reality.

The researchers propose that national governments, including the United States, enact legislation and establish mandatory protocols to regulate biological AI models. They recommend implementing a rigorous testing framework to evaluate the safety of these models before their public release. "Voluntary commitments alone are not sufficient," the paper asserts. "Structured government oversight will be essential to prevent the development of AI models that could lead to major epidemics or pandemics."

Anita Cicero, deputy director at the Johns Hopkins Center for Health Security and one of the paper's co-authors, emphasized the urgency of the issue. "We need to plan now," she said, warning that without proper oversight, the risk of advanced biological AI models could become a significant threat within the next two decades, or even sooner.

Paul Powers, CEO of Physna and an AI expert, echoed these concerns, noting that the rapid pace of AI development often outstrips regulatory frameworks. "AI is advancing faster than most people are prepared for," Powers told Fox News Digital. He highlighted the challenge of enforcing regulations on a technology that evolves so quickly, pointing out that both individuals and small businesses now have access to powerful AI capabilities.

Powers suggested that regulation might need to focus initially on controlling access to fundamental building blocks of biological models. "Starting with stricter controls on who can access the essential nucleic acids could be a prudent first step," he said.

The paper's authors emphasize that while voluntary safety measures are valuable, they must be complemented by robust government intervention to manage the potential dangers posed by advanced biological AI technologies.