OpenAI, led by CEO Sam Altman, announced on Thursday that it had disrupted five covert influence operations that exploited its artificial intelligence models for deceptive activity across the internet. These operations, identified over the past three months, involved actors from Russia, China, Iran, and Israel, among others.
The threat actors utilized OpenAI’s models to generate a variety of content, including short comments, lengthy articles in multiple languages, and fabricated names and bios for social media accounts. The campaigns targeted a range of issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and political matters in Europe and the United States.
According to OpenAI, these operations aimed to manipulate public opinion or influence political outcomes. The firm's report highlights ongoing concerns about the misuse of generative AI technology, which can produce realistic text, images, and audio quickly and at scale.
In response to these threats, OpenAI announced the formation of a Safety and Security Committee, led by board members including Altman, to oversee the training of its next AI model. The company emphasized that the deceptive campaigns did not benefit from increased audience engagement or reach through its services.
The disrupted operations combined AI-generated content with manually written text and memes sourced from elsewhere on the internet, blending automated and human tactics to deceive audiences.
In a related development, Meta Platforms revealed in its quarterly security report on Wednesday that it had detected likely AI-generated content used deceptively on its Facebook and Instagram platforms. This included comments praising Israel’s actions in the Gaza conflict, posted below content from global news organizations and U.S. lawmakers.
OpenAI’s proactive stance against the misuse of its AI models underscores the growing need for robust safeguards and oversight in the rapidly evolving field of artificial intelligence.