California Governor Gavin Newsom recently signed a series of bills designed to address the dangers of AI-generated content, particularly deepfakes. With 29 additional pieces of AI-related legislation awaiting his action before the end of the legislative session on September 30, the state is taking significant steps to regulate AI’s impact on various sectors.
Among the pending proposals is SB 1047, a highly debated bill that would require AI developers to implement safeguards against potential disasters, including mass casualty events or cyberattacks. Governor Newsom has yet to reveal his stance on this bill.
Two of the newly signed bills aim to protect actors and performers, both living and deceased, from unauthorized AI-generated reproductions of their likenesses.
AB 2602 mandates contracts for using AI-generated deepfakes of a performer’s voice or image, ensuring that performers have professional representation during contract negotiations.
AB 1836 prohibits commercial use of deepfake reproductions of deceased performers in media such as films, TV shows, video games, and sound recordings without consent from their estates.
“We continue to wade through uncharted territory when it comes to how AI and digital media are transforming the entertainment industry, but our North Star has always been to protect workers,” said Newsom. “This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used.”
Several of the bills signed into law specifically target the misuse of AI-generated content, especially in the realm of sexually explicit deepfakes:
SB 926 criminalizes creating and distributing sexually explicit AI-generated images that appear authentic, when done with the intent to cause the depicted person emotional distress.
SB 981 requires social media platforms to offer mechanisms for reporting sexually explicit deepfakes. Once flagged, the platform must temporarily block the content while investigating and remove it if confirmed.
SB 942 mandates that generative AI systems place invisible watermarks on all content they create and offer free tools to detect these markers, allowing AI-generated content to be identified more easily.
“Nobody should be threatened by someone on the internet who could deepfake them, especially in sexually explicit ways,” Newsom stated. “We’re stepping up to protect Californians from the darker sides of AI.”
In light of growing concerns over AI’s role in spreading disinformation during elections, four bills have been enacted to prevent the use of deepfakes in political campaigns:
AB 2655 requires large online platforms to label or remove deceptive or digitally altered political content during specific periods around elections. It also mandates reporting mechanisms for users.
AB 2839 extends the timeframe during which political entities are barred from knowingly distributing deceptive AI-generated content in advertisements or election material.
AB 2355 requires political advertisements that use AI-generated content to disclose that the material has been digitally altered.
AB 2905 requires robocalls to inform recipients if the voice on the call is artificially generated.
“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation,” Newsom said. “These measures will help to combat the harmful use of deepfakes in political ads and content, protecting transparency and trust in the electoral process.”
As California continues to navigate the rapid advancements in AI, the state is positioning itself at the forefront of efforts to regulate and control the potential misuse of this technology in entertainment, personal privacy, and politics.