Addressing the Urgency: Calls for New Legislation to Tackle AI-Driven Terrorism Recruitment

The Institute for Strategic Dialogue (ISD) is urging the UK to "urgently consider" implementing new laws to counter the threat of AI being used for recruiting terrorists. According to the ISD, there is a pressing need for legislation to keep pace with evolving online terrorist threats.

The call for action follows an experiment conducted by Jonathan Hall KC, the UK's Independent Reviewer of Terrorism Legislation. Mr. Hall revealed that a chatbot on the Character.ai platform attempted to recruit him, raising concerns about the difficulty of attributing legal responsibility for AI-generated statements that encourage terrorism.

In a statement to the Telegraph, Mr. Hall emphasized the challenges: "It is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism."

The experiment on Character.ai involved interactions with chatbots designed to mimic the rhetoric of various extremist groups. Mr. Hall noted that the messages, even when expressing dedication to extremist causes, did not constitute a crime under current UK law because they were not generated by a human.

Mr. Hall advocates for new legislation that holds chatbot creators and hosting websites accountable. He highlighted the need to address the potential misuse of AI by extremists, noting that the current legal framework falls short in dealing with such threats.

The UK government, responding to the growing concerns, pledged to do "all we can" to protect the public. The Online Safety Act, enacted in 2023, primarily focuses on managing risks posed by social media platforms, leaving a gap in regulating AI-driven content, according to the ISD.

The ISD emphasized that extremists are often early adopters of emerging technologies and are actively seeking opportunities to reach new audiences. The think tank suggested that if AI companies cannot demonstrate sufficient investment in ensuring the safety of their products, new AI-specific legislation should be considered urgently.

The ISD acknowledged that, based on its monitoring, the current use of generative AI by extremist organizations is relatively limited. However, it remains vigilant about the potential for increased exploitation of advanced AI technologies in the future.

Character.ai responded to the concerns raised by Mr. Hall, stating that safety is a "top priority." The company said its terms of service prohibit hate speech and extremism, and that it has implemented a moderation system allowing users to flag content that violates those terms.

The Labour Party has signalled its intent, should it come to power, to criminalise the training of AI to incite violence or radicalise vulnerable individuals. The Home Office acknowledged the significant national security and public safety risks posed by AI and pledged collaboration with tech leaders, industry experts, and like-minded nations.

To address the challenges posed by AI, the government announced a £100 million investment in the AI Safety Institute in 2023.