In the rapidly evolving landscape of artificial intelligence, the European Union's AI Act has garnered significant attention for its potential impact on organizations using AI systems. One key aspect of the AI Act revolves around high-risk AI applications, because the act's most stringent, enforceable obligations apply to high-risk systems, while most other AI systems face far lighter requirements. But what exactly constitutes "high risk"? To answer this question, we will first look at how the act defines high-risk AI, and then we will provide a 10-step plan to determine whether the AI models in your organization constitute high risk.
The AI Act states two ways to identify an AI system as high-risk. The first definition relates to Annex II of the act, which contains a list of Union harmonization legislation. These regulations pertain to various products and sectors that undergo conformity assessments and market surveillance within the European Union. However, the mere presence of an AI system in one of the categories listed in Annex II does not automatically classify it as high-risk. To determine whether an AI system is high-risk, two conditions must be met:

1. the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonization legislation listed in Annex II; and
2. that product is required to undergo a third-party conformity assessment before it can be placed on the market or put into service under that same legislation.
When both these conditions are fulfilled, an AI system is classified as high-risk under the AI Act. So, it’s necessary to evaluate whether the AI is intended for use as a safety component in a product covered by the specific regulations and directives listed in Annex II and whether the relevant product is subject to third-party conformity assessment under the Union harmonization legislation.
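To make this first route concrete, the sketch below encodes the two Annex II conditions as a simple boolean check. It is a minimal illustration under our own naming, not a legal tool: whether each condition actually holds for a given product still has to be established against the harmonization legislation itself.

```python
from dataclasses import dataclass

@dataclass
class AnnexIICheck:
    """Facts needed for the Annex II route; the field names are our own shorthand."""
    # Condition (a): the AI system is intended to be used as a safety component of a
    # product, or is itself a product, covered by the harmonization legislation in Annex II.
    safety_component_of_annex_ii_product: bool
    # Condition (b): that product must undergo a third-party conformity assessment
    # before being placed on the market or put into service.
    third_party_conformity_assessment_required: bool

def high_risk_via_annex_ii(check: AnnexIICheck) -> bool:
    """High-risk under the first route only when BOTH conditions are fulfilled."""
    return (
        check.safety_component_of_annex_ii_product
        and check.third_party_conformity_assessment_required
    )

# Example: a system that is a safety component and needs a notified body's assessment.
print(high_risk_via_annex_ii(AnnexIICheck(True, True)))   # True
print(high_risk_via_annex_ii(AnnexIICheck(True, False)))  # False
```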
The second definition of what constitutes high risk relates to Annex III of the EU AI Act. This annex contains a list that directly identifies high-risk AI systems. Unlike Annex II, which serves as a reference for potentially high-risk domains, Annex III provides a definitive classification of AI systems considered high-risk. The AI systems listed in Annex III are intended for use in specific domains and are categorized into eight areas. In summary, these areas are:

1. Biometric identification and categorisation of natural persons
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, workers management, and access to self-employment
5. Access to and enjoyment of essential private services and public services and benefits
6. Law enforcement
7. Migration, asylum, and border control management
8. Administration of justice and democratic processes
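As a rough illustration of this second route, the sketch below captures the eight areas under our own shorthand labels and checks whether a system's intended use falls within one of them. In the annex itself each area lists specific use cases, so matching an area is only a first filter, not a final classification.

```python
# The eight Annex III areas, under our own shorthand labels.
ANNEX_III_AREAS = {
    "biometric_identification_and_categorisation",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_workers_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_and_border_control",
    "administration_of_justice_and_democratic_processes",
}

def within_annex_iii_area(intended_use_area: str) -> bool:
    """First filter only: the specific use cases listed under the matched area
    still have to be checked before concluding the system is high-risk."""
    return intended_use_area in ANNEX_III_AREAS

print(within_annex_iii_area("law_enforcement"))         # True
print(within_annex_iii_area("industrial_forecasting"))  # False
```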
Even if the AI is not found to be high-risk after checking against both Annex II and Annex III, the assessment is not necessarily finished. The AI Act acknowledges that AI is such a fast-paced field that it is not possible to create an exhaustive list of all AI systems that constitute high risk. The Commission therefore has the right to expand the list of high-risk AI systems after the act enters into force, by adding new use cases within the pre-defined areas of Annex III. This makes it worthwhile to assess your organization's AI models beyond the lists from the annexes, to prepare yourself for future changes. The following 10-step plan can help to perform such an assessment:
The AI Act recognizes that identifying high-risk AI is not a one-size-fits-all process. By conducting thorough risk assessments and documenting their findings, organizations can ensure compliance with the AI Act and contribute to AI technology's responsible and ethical use.
Once your organization has determined that an AI model is high-risk, the next step is the conformity assessment. The form this assessment takes depends on the type of high-risk system. AI systems that are safety components of products covered by Annex II follow the third-party conformity assessment procedures already established under the relevant sectoral product safety legislation, and remote biometric identification systems are likewise subject to third-party assessment by a notified body. Most other stand-alone high-risk systems listed in Annex III are instead assessed through internal control checks by the provider, combined with registration in an EU database, robust quality and risk management, and post-market monitoring.
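The sketch below summarizes this routing as we read it from the act's explanatory text (quoted further down); it is a simplification, and the parameter names are our own.

```python
def conformity_assessment_route(
    is_safety_component_under_annex_ii: bool,
    is_remote_biometric_identification: bool,
) -> str:
    """Simplified routing of a high-risk AI system to its conformity assessment path."""
    if is_safety_component_under_annex_ii:
        # Safety-component AI follows the third-party procedures already
        # established under the relevant sectoral product safety legislation.
        return "third-party conformity assessment (sectoral procedure)"
    if is_remote_biometric_identification:
        # Stand-alone exception: remote biometric identification systems are
        # also subject to third-party conformity assessment.
        return "third-party conformity assessment (notified body)"
    # Other stand-alone Annex III systems: internal control checks by the provider,
    # plus registration in the EU database and post-market monitoring.
    return "internal control checks by the provider"
```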
In conclusion, identifying high-risk AI requires an evaluation of the AI system's purpose, potential impact, data use, and context. The criteria provided in the AI Act serve as a foundation, but organizations should also consider other factors specific to their application and industry. By carefully assessing the risk level of their AI systems, organizations can ensure compliance with the AI Act and contribute to the responsible and ethical deployment of AI in our society. If you have any further questions concerning the role of AI in your organization, feel free to contact us.
Relevant sections of the AI Act, as published at https://artificialintelligenceact.eu/the-act/:
Explanatory memorandum, section 5.2.3 (high-risk AI systems)
The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.
Chapter 1 of Title III sets the classification rules and identifies two main categories of high-risk AI systems: (1) AI systems intended to be used as safety components of products that are subject to third-party ex-ante conformity assessment; (2) other stand-alone AI systems with mainly fundamental rights implications that are explicitly listed in Annex III.
This list of high-risk AI systems in Annex III contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future. To ensure that the regulation can be adjusted to emerging uses and applications of AI, the Commission may expand the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and risk assessment methodology.
…
As regards stand-alone high-risk AI systems that are referred to in Annex III, a new compliance and enforcement system will be established. This follows the model of the New Legislative Framework legislation implemented through internal control checks by the providers with the exception of remote biometric identification systems that would be subject to third party conformity assessment. A comprehensive ex-ante conformity assessment through internal checks, combined with a strong ex-post enforcement, could be an effective and reasonable solution for those systems, given the early phase of the regulatory intervention and the fact the AI sector is very innovative and expertise for auditing is only now being accumulated.

An assessment through internal checks for ‘stand-alone’ high-risk AI systems would require a full, effective and properly documented ex ante compliance with all requirements of the regulation and compliance with robust quality and risk management systems and post-market monitoring. After the provider has performed the relevant conformity assessment, it should register those stand-alone high-risk AI systems in an EU database that will be managed by the Commission to increase public transparency and oversight and strengthen ex post supervision by competent authorities.

By contrast, for reasons of consistency with the existing product safety legislation, the conformity assessments of AI systems that are safety components of products will follow a system with third party conformity assessment procedures already established under the relevant sectoral product safety legislation. New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems (and notably changes which go beyond what is pre-determined by the provider in its technical documentation and checked at the moment of the ex-ante conformity assessment).
Article 6 - Classification rules for high-risk AI systems

Under Article 6(1), an AI system shall be considered high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
Article 7 - Amendments to Annex III

Under Article 7(1), the Commission is empowered to adopt delegated acts to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Under Article 7(2), when assessing whether an AI system poses such a risk, the Commission shall take into account the following criteria:

(a) the intended purpose of the AI system;
(b) the extent to which an AI system has been used or is likely to be used;
(c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
(f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age;
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;
(h) the extent to which existing Union legislation provides for:
(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
(ii) effective measures to prevent or substantially minimise those risks.
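For organizations that want to anticipate future additions to Annex III, these criteria can double as a documentation template for internal risk assessments. The sketch below is one possible, purely illustrative structure; the field names are our own shorthand and carry no legal weight.

```python
from dataclasses import dataclass

@dataclass
class Article7CriteriaRecord:
    """One free-text entry per Article 7(2) criterion; names are our own shorthand."""
    intended_purpose: str                   # (a)
    extent_of_use: str                      # (b)
    harm_or_concerns_already_reported: str  # (c)
    potential_extent_of_harm: str           # (d) intensity, number of persons affected
    dependence_on_outcome: str              # (e) can affected persons reasonably opt out?
    vulnerability_of_affected_persons: str  # (f) power, knowledge, circumstances, age
    reversibility_of_outcome: str           # (g) health/safety impacts not easily reversible
    existing_redress_and_mitigation: str    # (h) measures under existing Union legislation
```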