On Thursday, several major artificial intelligence companies pledged to take significant steps to combat the spread of harmful sexual imagery generated by AI. The commitment, brokered by the Biden administration, includes removing nude images from the datasets used to train AI products and implementing additional safeguards against the misuse of AI to create such imagery.
Adobe, Anthropic, Cohere, Microsoft, and OpenAI are among the companies that have agreed to the voluntary pledge. They will remove nude images from their AI training datasets "when appropriate and depending on the purpose of the model." The move is part of a broader initiative to tackle the alarming rise in image-based sexual abuse and in non-consensual intimate AI deepfakes, which have increasingly targeted women, children, and LGBTQI+ individuals.
The White House's Office of Science and Technology Policy highlighted the urgency of addressing these issues, noting that such harmful uses of AI have "skyrocketed" and are among the fastest-growing problems associated with the technology.
Common Crawl, a key data repository used to train AI models, also joined the effort. It has pledged to responsibly source its datasets and protect them from image-based sexual abuse, reflecting a broader industry commitment to ethical data management.
Additionally, a separate group of companies, including Bumble, Discord, Match Group, Meta, Microsoft, and TikTok, announced a set of voluntary principles aimed at preventing image-based sexual abuse. This announcement was timed to coincide with the 30th anniversary of the Violence Against Women Act, underscoring the industry's dedication to addressing these critical issues.
These commitments mark a significant step toward ensuring that AI technologies are developed and used responsibly, with a focus on protecting individuals from exploitation and abuse.