Meta's Oversight Board Calls for Clearer Policies on Non-Consensual Deepfake Images

Meta's policies on non-consensual deepfake images need urgent updating, the company's Oversight Board said Thursday, citing a lack of clarity in the current guidelines. The ruling came in response to cases involving AI-generated explicit depictions of two famous women.

The quasi-independent Oversight Board criticized Meta for not removing a deepfake intimate image of a famous Indian woman until the board intervened. Although the woman's identity was not disclosed, the incident exposed significant flaws in Meta's handling of non-consensual deepfake content.

Deepfake nude images of celebrities, including Taylor Swift, have become widespread on social media due to the increasing accessibility of the technology. This has led to mounting pressure on online platforms to address the issue more effectively.

The Oversight Board, established by Meta in 2020 to oversee content on platforms like Facebook and Instagram, reviewed two cases involving AI-generated images of famous women, one Indian and one American. Both were described only as "female public figures," and their identities were not revealed.

Meta acknowledged the board's recommendations and said it is reviewing them.

In one case, an AI-manipulated image of a nude Indian woman was reported on Instagram as pornography. The report was not reviewed within the 48-hour deadline and was automatically closed. After an appeal to Meta was also automatically closed, the user turned to the Oversight Board, which eventually led Meta to acknowledge its error and remove the image. Meta also disabled the account that posted the image and added the image to a database used to automatically detect and remove similar violating content.

The second case involved an AI-generated image of a nude American woman being groped, posted to a Facebook group. That image was automatically removed because it was already in the database. A user appealed the takedown to the board, which upheld Meta's decision.

The board found that both images violated Meta's ban on "derogatory sexualized photoshop" under its bullying and harassment policy. However, it recommended clearer policy wording, suggesting that the term "non-consensual" replace "derogatory" and that the rules cover a broader range of editing and media-manipulation techniques than just "photoshop." The board also recommended that deepfake nude images fall under the community standards on "adult sexual exploitation" rather than "bullying and harassment."

The board expressed alarm at Meta's reliance on media reports to identify images to add to its database, noting that many victims of deepfake intimate images are not public figures and must report each instance themselves. It also criticized Meta's practice of automatically closing appeals involving image-based sexual abuse after 48 hours, warning of potentially significant human rights impacts.

Meta, formerly known as Facebook, launched the Oversight Board in 2020 to address criticisms of its slow response to misinformation, hate speech, and influence campaigns on its platforms. The board comprises 21 members, including legal scholars, human rights experts, and journalists.