Summary
- The European Union’s AI Act is the first comprehensive attempt to regulate AI. By contrast, the US and the UK plan to regulate AI sector by sector (e.g. finance, medicine).
- Fines for breaches can reach 7% of global annual turnover.
- The AI Act will apply not only within the EU but also to developers and distributors of AI systems outside the EU if their system’s “output” occurs within the EU, i.e. if they have users in the EU.
- Strict regulation is limited to “high-risk” categories, including justice, medicine, education, employment and surveillance.
- Here are the key takeaways from the working document. The final publication is due on 9th February 2024.
The Basics
- Aligned Definition: The AI Act aligns its definition of AI with the OECD definition.
- OECD definition of AI: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
- Increased Reach: The legislation covers organisations outside the EU if their AI system’s “output” occurs within the EU. Non-EU companies will need to comply with the EU AI Act if they want to operate within the EU.
- Compliance Grace Periods: Organisations are granted grace periods ranging from 6 to 24 months to comply with the AI Act.
- Transparency for Generative AI: Generative AI technologies (like ChatGPT) are subject to the following transparency and disclosure requirements:
- Generative AI systems must be trained, designed and developed with safeguards against generating content that breaches EU law.
- There must be a publicly available summary of any copyrighted training data used for generative AI systems.
- Generative AI systems must comply with stronger transparency obligations. For example, if an AI system is used to generate a “deepfake”, the user must disclose that the content is generated or manipulated and indicate the name of the legal or natural person that generated or manipulated it (see the sketch below).
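To make the disclosure duty concrete, here is a minimal sketch of a machine-readable disclosure record a provider might attach to generated media. The schema and field names are hypothetical assumptions for illustration; the Act requires the disclosure itself, not any particular format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ContentDisclosure:
    """Hypothetical disclosure record for AI-generated or manipulated media.

    The field names are illustrative assumptions; the AI Act does not
    prescribe a schema, only that the disclosure is made.
    """
    is_ai_generated: bool    # content was generated by an AI system
    is_manipulated: bool     # content was altered from an original
    responsible_party: str   # legal or natural person who generated/manipulated it

# Example: labelling a deepfake image before distribution.
disclosure = ContentDisclosure(
    is_ai_generated=True,
    is_manipulated=True,
    responsible_party="Example Media Ltd.",
)
print(json.dumps(asdict(disclosure)))
```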
The approach to regulation is risk-based: systems are classified into Prohibited AI, High-Risk AI, Limited-Risk AI, and Minimal-Risk AI categories (a minimal triage sketch follows the list below).
- Prohibited AI: Applications falling under this category create an unacceptable level of risk and are strictly banned.
- Subliminal, manipulative, or exploitative systems that cause harm.
- Real-time, remote biometric identification systems used in public spaces for law enforcement.
- All forms of social scoring.
- High-Risk AI: These applications pose a substantial risk to the health, safety, or fundamental rights of individuals and the environment.
- Biometric and biometrics-based systems.
- Management and operation of critical infrastructure.
- Education and vocational training.
- Employment and workers management.
- Access to essential private and public services and benefits.
- Law enforcement.
- Migration, asylum and border control management.
- Administration of justice and democratic processes.
- Limited-Risk AI: These applications pose a lower level of risk, so only minimal transparency obligations are proposed.
- Minimal or No Risk AI: These applications are not subject to regulatory restrictions.
- Spam filters.
- Video games.
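As a rough illustration of the four tiers, here is a minimal triage sketch. The keyword buckets below are assumptions made up for this example; real classification turns on the Act’s detailed annexes and needs legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright
    HIGH = "high"              # strict obligations apply
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"        # no specific restrictions

# Illustrative buckets only; the Act, not a keyword list, defines each tier.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"education", "employment", "law enforcement",
                  "critical infrastructure", "border control"}
LIMITED_RISK_USES = {"chatbot"}

def triage(use_case: str) -> RiskTier:
    """Map a use case to a risk tier (simplified sketch)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment"))   # RiskTier.HIGH
print(triage("spam filter"))  # RiskTier.MINIMAL
```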
Prohibited AI
- Social Credit Scoring: Systems involved in social credit scoring are prohibited.
- Exploitative AI: Any AI designed to exploit vulnerabilities, such as age or disability, is banned.
- Behavioural Manipulation: Prohibition of AI systems involved in behavioural manipulation and circumvention of free will.
- Facial Recognition Restrictions: Untargeted scraping of facial images for facial recognition is strictly prohibited. Targeted scraping, i.e. collecting data from websites or web pages for a specific, permitted purpose, is not.
- Biometric Categorization: Systems using sensitive characteristics for biometric categorisation face restrictions.
- Predictive Policing: Specific applications of predictive policing are prohibited. This includes the use of AI systems for “making risk assessments of natural persons or groups” to “assess the risk of a natural person for offending or reoffending” as well as “for predicting the occurrence or reoccurrence of an actual or potential criminal offence”.
- Limits on Real-Time Biometric Identification: Law enforcement use of real-time biometric identification in public is restricted to limited, pre-authorised situations.
High-Risk AI
- Medical Devices and Vehicles: High-risk AI includes medical devices and AI integrated into vehicles.
- HR and Worker Management: Recruitment, HR processes, and worker management fall under the high-risk category.
- Education and Vocational Training: AI systems used in education and vocational training are considered high-risk.
- Critical Infrastructure Management: High-risk AI covers the management of critical infrastructure like water, gas, and electricity.
- Emotion Recognition and Biometric Identification: Restrictions extend to AI involved in emotion recognition and biometric identification.
- Law Enforcement and Border Control: AI systems used in law enforcement, border control, migration, and asylum are deemed high-risk.
- Administration of Justice: High-risk AI encompasses applications in the administration of justice.
- Specific Products and Safety Components: Certain products and their safety components are included in the high-risk category.
- An AI system can be classified as high-risk if:
- It’s used as a safety component of a product, or is itself a product covered by the Union Harmonisation Legislation.
- The product that uses the AI system as its safety component is required to undergo a third-party conformity assessment under European Union legislation.
Key Requirements for High-Risk AI
- Fundamental Rights Impact Assessment: High-risk AI systems must undergo a fundamental rights impact assessment.
- Conformity Assessment: A conformity assessment is required for high-risk AI systems to ensure that the requirements for regulation are met.
- EU Database Registration: High-risk AI systems must be registered in a public EU database.
- Risk and Quality Management: Implementation of risk and quality management systems is mandatory.
- Data Governance: High-risk AI systems must adhere to data governance practices, including bias mitigation and representative training data. In other words, the data must be of high quality: accurate, complete and relevant to the problem at hand.
- Transparency Measures: Transparency requirements include instructions for use, high-quality technical documentation, compliance with EU copyright law, and making a summary of the training data readily available.
- Human Oversight: Human oversight is mandated, ensuring explainability, auditable logs, and human-in-the-loop mechanisms.
- Accuracy, Robustness, and Cybersecurity: High-risk AI systems must ensure accuracy, robustness, and cybersecurity through testing and monitoring.
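One way a team might track these obligations internally is a simple checklist object. This is a hedged sketch that merely mirrors the list above; it is not an official compliance artefact, and all names are assumptions.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Checklist mirroring the high-risk obligations above (illustrative only)."""
    fundamental_rights_impact_assessment: bool = False
    conformity_assessment: bool = False
    eu_database_registration: bool = False
    risk_and_quality_management: bool = False
    data_governance: bool = False
    transparency_documentation: bool = False
    human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False

    def outstanding(self) -> list[str]:
        """Names of obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: two obligations done, six still outstanding.
status = HighRiskCompliance(conformity_assessment=True, human_oversight=True)
print(status.outstanding())
```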
Understanding General Purpose AI
- Distinct Requirements for GPAI: General Purpose AI (GPAI) and Foundation Models have distinct regulatory requirements.
- Transparency for GPAI: Transparency requirements cover technical documentation and training data summaries for all GPAI.
- Additional Requirements for High-Impact Models: High-impact models with systemic risk face additional requirements, including model evaluations, risk assessments, adversarial testing, and incident reporting.
- Generative AI Guidelines: For generative AI such as ChatGPT, the Act mandates informing users, labelling AI-generated content, and detectability measures, especially in the case of deepfakes.
Penalties & Enforcement
- Fines for Prohibited AI Violations: Fines of up to 7% of global annual turnover or €35m are imposed for prohibited AI violations (see the worked example after this list).
- Fines for Other Violations: Violations not falling under prohibited AI may incur fines of up to 3% of global annual turnover or €15m.
- Penalties for Incorrect Information: Supplying incorrect information can lead to fines of up to 1.5% of global annual turnover or €7.5m.
- Caps for SMEs and Startups: Fines are capped for SMEs and startups, offering a more balanced enforcement approach.
- Establishment of AI Office and AI Board: A European ‘AI Office’ and ‘AI Board’ will be established to oversee enforcement at the EU level.
- Market Surveillance Authorities: Authorities in EU countries are designated to enforce the AI Act.
- Individual Complaints: Individuals can make complaints about non-compliance, fostering a collaborative enforcement approach.
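For intuition on how the caps scale, here is a small arithmetic sketch using the figures quoted above. It assumes the widely reported “whichever is higher” rule for the standard caps (the SME caps differ); this is an illustration, not legal guidance.

```python
# Maximum fine tiers quoted in this summary:
# (share of global annual turnover, fixed amount in EUR).
FINE_TIERS = {
    "prohibited_ai": (0.07, 35_000_000),
    "other_violation": (0.03, 15_000_000),
    "incorrect_information": (0.015, 7_500_000),
}

def max_fine_eur(turnover_eur: float, violation: str) -> float:
    """Upper bound on a fine, assuming the higher of the two caps applies."""
    share, fixed = FINE_TIERS[violation]
    return max(share * turnover_eur, fixed)

# A company with €1bn turnover breaching a prohibition:
# max(7% of €1bn, €35m) = €70m.
print(max_fine_eur(1_000_000_000, "prohibited_ai"))  # 70000000.0

# A smaller firm with €100m turnover: the €35m floor dominates 7% (€7m).
print(max_fine_eur(100_000_000, "prohibited_ai"))    # 35000000.0
```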
How will the EU AI Act affect your business? Is it going to slow down AI adoption in Europe vs. the USA (as critics fear)? Please let us know!