12/03/2024
Prohibited AI practices and high-risk AI systems
Prohibited Artificial Intelligence practices (Chapter II, Art. 5)

1. Introduction to the unacceptable risk category

Article 5 categorises certain AI technologies as posing an "unacceptable risk" ("Unacceptable Risk"). Unlike the other risk categories outlined in the AI Act, the use of AI technologies that fall within this category is strictly prohibited ("Prohibited AI Systems"). It is therefore necessary to distinguish between:
- those technologies that are clearly prohibited; and
- those AI applications that are not clearly prohibited but may involve similar risks.

The most challenging problem in practice is to ensure that activities which are not prohibited do not become Unacceptable Risk activities and therefore prohibited.

2. Unacceptable Risk: Prohibited AI practices

Article 5 explicitly bans the following harmful AI practices.

The first prohibition under Article 5 addresses systems that manipulate individuals or exploit their vulnerabilities, leading to physical or psychological harm. Accordingly, it is prohibited to place on the market, put into service or use in the EU:
- AI systems designed to deceive, coerce or influence human behaviour in harmful ways; and
- AI tools that prey on an individual's weaknesses, exacerbating their vulnerabilities.

The second prohibition covers AI systems that exploit those vulnerabilities, even where harm is not immediate. Examples include:
- AI tools that compromise user privacy by collecting sensitive data without consent; and
- AI algorithms that perpetuate bias or discrimination against certain groups.

The third prohibition focuses on the use of AI for social scoring. Social scoring systems assign scores to individuals based on their behaviour, affecting access to services, employment or other opportunities. Prohibited practices include:
- AI-driven scoring mechanisms that lack transparency, fairness or accountability; and
- systems that discriminate based on protected characteristics (e.g. race, gender, religion).
The fourth prohibition covers real-time biometric identification in publicly accessible spaces for law enforcement purposes. This includes:
- AI systems that identify individuals without their knowledge or consent; and
- continuous monitoring of people's movements using biometric data.

3. Clearly listed: Best practices and compliance

Transparency and accountability are essential to complying with the prohibitions under Article 5. Firms using AI must design and continuously test their systems, be transparent about their intentions and avoid manipulative practices. They should also disclose AI systems' functionality, data usage and decision-making processes.

Companies should conduct thorough impact assessments to identify unintended vulnerabilities and implement specific safeguards to prevent exploitation. This should form part of wider assessments of AI systems to understand their impact on individuals and society. Companies should develop clear guidelines for scoring systems to prevent the development of social scoring characteristics, and should prioritise ethical design, fairness and non-discrimination. Privacy impact assessments should be carried out to ensure compliance with the various prohibitions. In particular, firms should be very careful when using any real-time identification systems.

In all cases, companies should maintain comprehensive records of AI system design, training and deployment, and any critical decision made by an AI system should be overseen by a human.

4. Not clearly listed: Categorisation

Unacceptable Risk AI systems are systems that are deemed inherently harmful and are considered a threat to human safety, livelihoods and rights. In contrast, high-risk AI systems are systems designed to be applied to specific use cases, such as hiring and recruitment, that may cause harm but are not inherently harmful. High-risk AI systems are legal, but subject to important requirements under the AI Act.
It is therefore crucial to determine the difference between high-risk and Unacceptable Risk AI systems. In essence, any high-risk activity can escalate to Unacceptable Risk in the following circumstances:
- Bias and discrimination: if the AI perpetuates bias or discriminates against protected groups.
- Privacy violations: when AI systems compromise user privacy or misuse sensitive data.
- Psychological harm: if the AI manipulates individuals, causing psychological distress.

AI systems that are able to perform generally applicable functions and to serve multiple intended and unintended purposes (General Purpose AI models) are not inherently prohibited under the AI Act, but must be used with care, since in certain scenarios they can lead to Unacceptable Risk activities. To assess whether a General Purpose AI model poses an Unacceptable Risk, it is necessary to consider the context in which the model operates. If it influences critical decisions (e.g. hiring, credit scoring), perpetuates bias or discrimination, or compromises user privacy (e.g. by collecting sensitive data without consent), the risk increases and the model may need to be adapted.

5. Best practice and compliance

While the AI Act provides examples of explicit prohibitions, it cannot cover all possible situations, as the technology is, through updated versions and by definition, constantly evolving.
As a guide, legal and compliance teams should ask the following questions when considering high-risk AI systems:

Risk assessment:
- What is the evidence that the AI application is categorised as minimal, limited, high or Unacceptable Risk?
- Does the application in any circumstances use or act on sensitive data, or influence critical decisions?

Contextual analysis:
- Does the application operate in a sector with a presumption of increased risk, for example (a) financial services or (b) healthcare?
- In what ways does the deployment of the application impact (a) individuals and (b) society?

Specific criteria:
- Could any decisions of the application give rise to manipulation, exploitation, discriminatory scoring or biometric identification?
- Does the application operate on, or have access to, data that could give rise to the exploitation of subliminal techniques or of vulnerabilities related to protected characteristics, such as age or disability?

Transparency and documentation:
- In what ways is the AI system transparent about its functioning and decision-making?
- In what ways does the user's documentation of the design, training and deployment of the application demonstrate compliance with the various rules?

6. Conclusion

Unacceptable Risk AI activities are those practices that pose inherent harm to people and are strictly forbidden under the AI Act. The potential for reputational damage and regulatory sanctions serves as a strong deterrent against breaching these provisions of the AI Act. It is essential for companies to take proactive measures to ensure compliance and prevent harm to individuals and society.