The UK government has announced a significant investment of over £100 million to support an agile approach to regulating artificial intelligence (AI). This funding includes £10 million dedicated to preparing and upskilling regulators to effectively address the risks and opportunities associated with AI across various sectors such as telecoms, healthcare, and education.
This investment comes at a crucial juncture, as research conducted by Thoughtworks reveals that 91% of British citizens believe government regulations should play a more substantial role in holding businesses accountable for their AI systems. Transparency is a key concern, with 82% of consumers favoring businesses that proactively communicate their AI regulation strategies.
In response to last year's AI Regulation White Paper consultation, the UK government outlined its context-based regulatory approach. This approach empowers existing regulators to address AI risks in a targeted manner, while avoiding hastily drafted legislation that could hinder innovation.
For the first time, the government set out its initial thinking on potential future binding requirements for developers of advanced AI systems, aiming to ensure accountability for safety, a measure supported by 68% of the public.
Furthermore, all key regulators are set to publish their approaches to managing AI risks by April 30, giving businesses and citizens greater transparency and confidence. Some skepticism remains, however, with 30% of individuals expressing doubts about the benefits of increased AI regulation.
Additionally, nearly £90 million will be allocated to establish nine new research hubs across the UK, along with a partnership with the US focused on responsible AI development. Another £2 million in funding will support projects defining responsible AI in sectors like policing, addressing the public's desire for improved user education around AI.
Tom Whittaker, Senior Associate at Burges Salmon, praised the government's investment, highlighting its support for an agile, sector-specific approach to AI regulation. This approach aims to position the UK as pro-innovation on AI across multiple sectors, particularly in contrast to the EU's advancing AI legislation.
Science and Technology Secretary Michelle Donelan emphasized the UK's innovative approach to AI regulation, positioning the country as a leader in both AI safety and development. This agile, sector-specific approach, she argued, enables the UK to manage risks effectively while harnessing AI's benefits.
The comprehensive funding and initiatives underscore the UK's commitment to fostering safe AI innovation and addressing public concerns. This investment builds upon previous commitments, such as the £100 million AI Safety Institute, reflecting the government's dedication to leading on safe and responsible AI progress.
Greg Hanson, GVP and Head of Sales EMEA North at Informatica, noted the increasing demand for AI regulation in the UK, particularly as businesses encounter challenges related to AI governance and ethics.
Overall, this £100 million package of measures represents a significant step towards the UK's goal of leading on safe and responsible AI progress, striking a balance between harnessing AI's economic and societal benefits and regulating its risks.