Risk Assessment & Management

Risk management obligations spanning high-risk AI (Art. 9), GPAI with systemic risk (Art. 55), and fundamental rights impact assessment (Art. 27). Cross-cuts the prohibited, high-risk, and general-purpose AI categories.
Articles (3)

Article 9
Risk management system
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2. The risk management…
(6 related recitals)

Article 27
Fundamental rights impact assessment for high-risk AI systems
1. Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area…
(3 related recitals)

Article 55
Obligations of providers of general-purpose AI models with systemic risk
1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: (a) perform model…
(4 related recitals)

Related Recitals (12)

Recital 9: Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the…
Recital 59: Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to…

Recital 60: AI systems used in migration, asylum and border control management affect persons who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the…

Recital 61: Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of…

Recital 62: Without prejudice to the rules provided for in Regulation (EU) 2024/900 of the European Parliament and of the Council (34), and in order to address the risks of undue external interference with the…

Recital 63: The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or…
Recital 93: Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a…
Recital 94: Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that…

Recital 108: With regard to the obligations imposed on providers of general-purpose AI models to put in place a policy to comply with Union copyright law and make publicly available a summary of the content used…

Recital 109: Compliance with the obligations applicable to the providers of general-purpose AI models should be commensurate and proportionate to the type of model provider, excluding the need for compliance for…

Recital 110: General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of…

Recital 111: It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI models with systemic risks. Since systemic risks result from particularly high…
Built by Paul McCormack — lawyer, product leader, and founder of Kormoon. This site is an independent informational resource only and does not constitute legal advice. No reliance should be placed on its contents. For the authoritative text, refer to the official EUR-Lex source linked in the Annexes tab, or consult your legal advisor.