Deepfake Advertisements Impersonating Rishi Sunak Flood Facebook Ahead of General Election

Over 100 deepfake video advertisements impersonating Rishi Sunak have been identified on Facebook in the past month, raising concerns about the potential manipulation of public opinion as the general election approaches.

Scope and Impact:

These deceptive ads, which violate several of Facebook's policies, may have reached around 400,000 people, in what researchers describe as the first systematic mass doctoring of the prime minister's image. A total of £12,929 was spent on 143 ads, which originated from 23 countries including the US, Turkey, Malaysia, and the Philippines.

Deceptive Content and Misinformation:

The deepfake videos include fabricated footage of BBC newsreader Sarah Campbell falsely reporting a scandal in which Sunak is said to have earned "colossal sums" from a project originally intended for ordinary citizens. The ads falsely claim that Elon Musk has launched an application to "collect" stock market transactions, and they feature a faked clip of Sunak endorsing a supposed government decision to test the application rather than risk public funds.

Quality of Deepfakes and Election Vulnerabilities:

The research was conducted by Fenimore Harper, a communications company led by Marcus Beard, a former Downing Street official. Beard expressed concern about the quality of the deepfakes, warning that elections can be manipulated by a large volume of convincing AI-generated falsehoods, and highlighted how easily voice- and face-cloning tools can be misused for malicious purposes.

Response from Authorities:

The research underscores the lax moderation policies on paid advertising, with several of the identified ads still active. Meta, the company that owns Facebook, has been approached for comment. The UK government said it is working to respond rapidly to any threats to democratic processes. The Online Safety Act aims to hold social platforms accountable for swiftly removing illegal misinformation, including AI-generated content.

Concerns and Urgency for Legislative Action:

Regulators are increasingly concerned about the limited time left to make substantial changes that would safeguard Britain's electoral system against advances in AI before the general election expected in November. Discussions between the government and regulators, including the Electoral Commission, focus on new legislation requiring digital campaign material to carry an "imprint" to enhance transparency.

Social Platforms' Responses:

Meta stated that content violating its policies, whether generated by AI or by a person, is promptly removed. The company highlighted its efforts since 2018 to provide transparency for ads related to social issues, elections, or politics.

Conclusion:

The proliferation of deepfake advertisements impersonating political figures poses a serious threat to the integrity of democratic processes. As technology advances, urgent legislative action and enhanced moderation policies are crucial to prevent the manipulation of public opinion and ensure the reliability of information disseminated on social media platforms.