Generative AI, including tools like ChatGPT and DALL-E, is anticipated to complicate the landscape of the upcoming 2024 US elections, warns Nathan Lambert, a machine learning researcher at the Allen Institute for AI.
Lambert, co-host of The Retort AI podcast, predicts that the use of AI in politics will create a challenging environment, whether the content originates from campaigns, malicious actors, or companies such as OpenAI. He also expects US AI regulation efforts to slow as the elections absorb political attention.
Concerns over the use of AI in political campaigns are already surfacing.
Although the 2024 US Presidential election is nearly a year away, AI is already being deployed in political campaigns and is raising alarm. A recent ABC News report highlighted AI-generated content in Florida Governor Ron DeSantis' campaign, including fabricated images and audio of Donald Trump. A poll by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that 58% of adults believe AI tools will amplify the spread of false information during the upcoming elections.
Big Tech companies respond with proposed restrictions.
Acknowledging these concerns, major tech companies are taking steps to address potential misuse of AI in election contexts. Google, for instance, plans to restrict the election-related prompts that its chatbot Bard and Search Generative Experience will respond to in the lead-up to the US Presidential election. Meta, the parent company of Facebook, intends to bar political campaigns from using its new generative AI advertising products, and advertisers on Meta platforms will be required to disclose when AI tools were used to alter or create election ads. OpenAI has reportedly revamped its content moderation strategies in response to worries about the spread of disinformation through its products.
Experts express concerns over the impact on democracy.
Alicia Solow-Niederman, an associate professor of law at George Washington University Law School, emphasizes that generative AI tools can profoundly affect the democratic fabric. Citing legal scholars Danielle Citron and Robert Chesney, she points to the concept of 'the liar's dividend': when truth becomes elusive, trust erodes, posing a significant threat to the electoral system and to the ability to self-govern.
In conclusion, the intersection of generative AI and political narratives presents a complex challenge that goes beyond the 2024 Presidential race, raising questions about the integrity of information and its consequences for the democratic process.