Meta, the parent company of Facebook, has announced plans to deploy technology capable of detecting and labeling images generated by artificial intelligence (AI) tools from other companies on its platforms, including Facebook, Instagram, and Threads. This initiative aims to address concerns surrounding the proliferation of AI-generated fake content and enhance transparency for users.
Currently, Meta labels AI-generated images produced by its own systems but seeks to expand this capability to images generated by external AI tools. The company's senior executive, Sir Nick Clegg, stated in a blog post that Meta intends to roll out the new labeling system "in the coming months." However, he acknowledged that the detection technology is not yet fully mature.
Despite Meta's efforts to combat fake content, experts such as Prof Soheil Feizi of the University of Maryland have expressed skepticism about the effectiveness of such tools. Prof Feizi highlighted two limitations: detectors for AI-generated images can be evaded with lightweight processing of an image, and they are also prone to false positives, flagging genuine images as AI-generated.
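To make the evasion point concrete, the following is a minimal sketch, in Python with Pillow and NumPy, of the kind of lightweight processing Prof Feizi describes. It is an illustrative assumption, not code from Meta or any actual detector: three transformations a human viewer would barely notice, but which disturb the pixel statistics and strip the embedded metadata that detection systems often rely on.

```python
# Illustrative sketch only: the kind of "lightweight processing" that can
# defeat AI-image detectors. Not Meta's code or any real detector's API.
import io

import numpy as np
from PIL import Image


def lightweight_perturb(path: str) -> Image.Image:
    """Apply three near-imperceptible transformations to an image file."""
    img = Image.open(path).convert("RGB")

    # 1. Resize slightly (here to 98%), which disturbs pixel-level statistics.
    w, h = img.size
    img = img.resize((int(w * 0.98), int(h * 0.98)), Image.LANCZOS)

    # 2. Re-encode as JPEG; this alters compression artifacts and, as a side
    #    effect, drops any provenance metadata the generator embedded.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    buf.seek(0)
    img = Image.open(buf).convert("RGB")

    # 3. Add Gaussian noise well below the threshold of human perception.
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.normal(loc=0.0, scale=2.0, size=arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

Notably, the JPEG re-encoding step also happens incidentally whenever an image is screenshotted or re-shared, which is part of why provenance signals tend to be fragile in practice.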
Furthermore, Meta's detection technology will not extend to audio and video content, which are often the primary media for AI-generated fakes. Instead, the company plans to rely on users labeling their own audio and video posts, with potential penalties for those who fail to do so. Additionally, Meta admitted that it has no reliable way to test for text generated by tools like ChatGPT, underlining the complexity of addressing AI-generated content across formats.
Meta's announcement comes amid criticism from its Oversight Board over the company's policy on manipulated media. The Board deemed the policy "incoherent" and called for updates to better address the challenges posed by synthetic and hybrid content. Sir Nick Clegg agreed that revisions are needed, conceding that the current framework may not be suitable for the evolving landscape of fake content.
In January, Meta began requiring political advertisements to disclose the use of digitally altered images or video, part of its ongoing effort to improve transparency and combat misinformation on its platforms.