A group of current and former employees from OpenAI and Google DeepMind has published an open letter on righttowarn.ai, expressing grave concerns about the potential risks posed by artificial intelligence (AI) technology. The signatories include five former OpenAI employees, one current and one former employee from Google DeepMind, four unnamed current OpenAI employees, and two unnamed former OpenAI employees.
The group warns that AI risks range from the entrenchment of existing inequalities and the spread of misinformation to the loss of control over autonomous AI systems, which could potentially lead to human extinction. They emphasize that both AI companies and governments worldwide have acknowledged these dangers.
"AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts," the letter states. "We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public."
The signatories argue that AI companies have significant financial incentives to resist effective oversight and possess critical nonpublic information about the capabilities and risks of their systems. They cite the lack of transparency regarding the adequacy of protective measures and risk levels associated with AI advancements.
Daniel Kokotajlo, a former researcher in OpenAI’s governance division and a member of the group, voiced his concerns to the New York Times. He criticized OpenAI’s aggressive pursuit of artificial general intelligence (AGI), labeling the race as "reckless." Kokotajlo previously estimated AGI might be achieved by 2050 but now believes there is a 50% chance it could arrive by 2027. He also asserts there is a 70% probability that advanced AI could cause catastrophic harm to humanity.
The group stresses that broad confidentiality agreements limit their ability to raise alarms publicly, and that current whistleblower protections are inadequate because they focus on illegal activity, while much of what concerns them falls outside the law in an industry that remains largely unregulated.
OpenAI has responded to the letter, agreeing with the call for government regulation and highlighting its ongoing efforts to engage with policymakers globally. The company points to what it describes as a strong track record of not releasing technology until the necessary safeguards are in place, noting that safety is especially critical given how widely its systems are used, including by Fortune 500 companies.
"We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," an OpenAI spokesperson stated. "We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society, and other communities around the world."