At a recent parole board meeting in Louisiana, a veteran mental health doctor testified about the potential release of a convicted murderer. The hearing drew the attention of online trolls, who used artificial intelligence (AI) tools to alter images of the doctor into explicit content, which they then shared on 4chan, the well-known anonymous message board.
According to Daniel Siegel, a Columbia University graduate student who researches the malicious exploitation of AI, the incident is part of a broader pattern on 4chan, where users employ AI-powered tools such as audio editors and image generators to spread offensive content about people appearing before the parole board.
While these manipulated images have not spread beyond 4chan, experts warn that the episode demonstrates how sophisticated AI tools could amplify online harassment and hate campaigns in the future.
Callum Hood, Head of Research at the Center for Countering Digital Hate, notes that fringe platforms like 4chan often serve as early indicators of how new technologies, such as AI, can be exploited to spread extremist ideologies. These platforms, populated by tech-savvy young users, quickly adopt new technologies to push their ideas into mainstream spaces, Hood says.
Artificial Images and AI Pornography:
AI tools like Dall-E and Midjourney, designed to generate images from text descriptions, are now being used to create fake pornography by removing clothes from existing images. Francis Abbott, Executive Director of the Louisiana Board of Pardons, acknowledges the challenge: "Any images portraying our board members negatively, we would definitely take issue with." Several states, including Illinois, California, Virginia, and New York, are adapting laws to address AI-generated pornography.
Cloning Voices:
ElevenLabs' AI tool, which can clone a person's voice and make it say whatever text is typed, raised concerns when 4chan users circulated fake clips of celebrities making controversial statements. Despite safeguards introduced by ElevenLabs, AI-generated voices continue to spread disinformation on platforms like TikTok and YouTube. President Joe Biden has issued an executive order directing the Commerce Department to develop standards for authenticating and labeling AI-generated content.
Custom AI Tools:
Meta's open-source strategy, intended to share its AI models with researchers, backfired when its language model, Llama, leaked onto 4chan. Users modified the model to create chatbots that produced antisemitic content. The incident highlights how open-source AI tools can be exploited by savvy users to propagate harmful material.
As AI technology advances, regulators and technology companies face the challenge of curbing its misuse. The incidents on 4chan underscore the need for a proactive approach to mitigating the harm these advanced tools can cause in the digital realm.