To improve transparency around AI-generated images, OpenAI has begun embedding Coalition for Content Provenance and Authenticity (C2PA) metadata into images produced by DALL-E 3. Each image will carry both a visible identifier and updated metadata, making it straightforward to verify its origin through C2PA tools.
The watermarks let users quickly distinguish AI-generated images from human-created ones. Adobe and several major camera manufacturers already support the C2PA standard. OpenAI has confirmed that visible watermarks will be applied to DALL-E 3 images across its web, API, and mobile app platforms.
The watermarks display the image's generation date alongside the CR ("Content Credentials") symbol in the top-left corner of each image. At present, the DALL-E 3 image generator is available only to ChatGPT Plus subscribers, at $20 per month.
OpenAI says the watermarks will not affect image quality or generation latency. File sizes do grow, however: roughly three to five percent for images generated via the API, and up to 32 percent for images generated through ChatGPT.
These measures are not tamper-proof: the visible mark can be cropped out, and the metadata can be edited or stripped. Taking a screenshot of an AI-generated image, or uploading it to a social media platform, typically removes the metadata entirely.
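How fragile metadata-based provenance is can be demonstrated in a few lines. The sketch below uses Pillow with a generic PNG text chunk as a stand-in (the file names and the "provenance" key are illustrative, not part of the C2PA specification): it embeds metadata in an image, then re-encodes only the pixel data, as a screenshot or upload pipeline effectively does, and shows the metadata does not survive.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a small image and attach a provenance-style text chunk
# (an illustrative stand-in for real C2PA metadata).
img = Image.new("RGB", (64, 64), color="steelblue")
meta = PngInfo()
meta.add_text("provenance", "generated-by: example-model")
img.save("original.png", pnginfo=meta)

# The metadata round-trips when the file is read back directly.
original = Image.open("original.png")
print(original.text)  # {'provenance': 'generated-by: example-model'}

# Re-encode only the pixels, as a screenshot or re-upload would.
Image.frombytes("RGB", original.size, original.tobytes()).save("reencoded.png")

# The provenance chunk is gone from the re-encoded copy.
print(Image.open("reencoded.png").text)  # {}
```

The same loss occurs with any pipeline that decodes an image to raw pixels and re-encodes it, which is why visible watermarks complement, rather than duplicate, the embedded metadata.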
Other major tech companies have adopted similar watermarking in their AI-powered image tools. Microsoft watermarks images generated with Bing Image Creator, Samsung has announced comparable features for the Galaxy S24 series, and Meta recently rolled out invisible watermarks for AI-generated content.
Watermarking AI-generated images matters because such images are already being misused, including for impersonation and celebrity deepfakes. An easily recognizable watermark lets non-technical users tell original images from AI-generated ones, helping curb the spread of misinformation.