EU AI Act Enters Into Force, Setting Strict Rules for Artificial Intelligence

The European Union's landmark AI Act, which regulates the use of artificial intelligence across its member states, has been published in the Official Journal. The law takes effect on August 1 and follows a phased implementation, with most provisions fully applicable by mid-2026.

Under the new regulations, AI developers will face varying obligations based on the perceived risk of their applications. Low-risk AI uses will generally remain unregulated, while "high-risk" applications such as biometric identification, law enforcement, and critical infrastructure will be subject to stringent requirements concerning data quality and anti-bias measures.

The legislation also prohibits certain AI applications outright, including practices such as social credit scoring and the untargeted scraping of facial images to build recognition databases. Developers of general-purpose AI models, including powerful systems akin to OpenAI's GPT, will need to adhere to transparency standards and, in some cases, conduct systemic risk assessments triggered by computational thresholds.

The rollout sets deadlines for each phase of implementation, starting with the bans on prohibited AI uses taking effect in early 2025. Codes of practice for AI developers will follow in mid-2025, drafted by the newly established EU AI Office amid ongoing concerns about industry influence.

Notably, compliance deadlines vary: some high-risk AI systems must comply within 24 months, while others have up to 36 months to meet the stringent requirements set out in the EU's comprehensive AI rulebook.

The enactment of the AI Act signals a significant step towards regulating AI technologies within the European Union, balancing innovation with robust safeguards to protect consumer rights and societal values.