Meta's Oversight Board Calls for Clearer Rules on AI-Generated Explicit Images

Meta's Oversight Board announced on Thursday that the company's policies on sexually explicit AI-generated depictions of real people are "not sufficiently clear" and recommended significant changes to prevent such imagery from circulating on its platforms.

The board, which operates independently despite being funded by Meta, reviewed two pornographic deepfake images of famous women posted on Facebook and Instagram. These cases involved female public figures from India and the United States, whose identities were withheld for privacy reasons.

Both images violated Meta's rule against "derogatory sexualized photoshop," which falls under its Bullying and Harassment policy. The board criticized Meta's inconsistent handling of the two cases and called for an immediate policy update to better address such imagery.

In the Indian woman's case, a user reported the explicit image, but Meta did not review the report within its 48-hour deadline, so the report was closed automatically without action. The user's subsequent appeal was also closed; Meta removed the image only after the Oversight Board took up the case.

In contrast, the AI-generated image of the American celebrity was automatically removed by Meta's systems. The board emphasized the necessity of such automated removals, given the severe harm caused by these images.

"Restrictions on this content are legitimate," the board stated. "Given the severity of harms, removing the content is the only effective way to protect the people impacted."

The board recommended that Meta update the policy to explicitly cover a broader range of editing techniques, including generative AI, rather than referring only to "photoshop." The change is intended to keep the rule current as digital manipulation technologies evolve.

Additionally, the board criticized Meta for not adding the Indian woman's image to a database used for automatic removals, as it had done with the American woman's image. The board called Meta's reliance on media coverage to decide which images belong in that database "worrying."

"Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance," the board said.

Meta has acknowledged the board's recommendations and said it will review them, promising to provide an update on any changes it adopts.