Understanding High-Risk AI: Assessing the Criteria

In the rapidly evolving landscape of artificial intelligence, the European Union's AI Act has garnered significant attention for its potential impact on organizations using AI systems. One key aspect of the AI Act revolves around high-risk AI applications, because the act's most extensive obligations apply to systems classified as high-risk. But what exactly constitutes "high risk"? To answer this question, we will first look at how the act defines high-risk AI, and then we will provide a step-by-step plan to determine whether the AI models in your organization qualify as high-risk.

The Definitions of High-Risk

The AI Act provides two ways to identify an AI system as high-risk. The first relates to Annex II of the act, which contains a list of Union harmonization legislation. These regulations pertain to various products and sectors that undergo conformity assessments and market surveillance within the European Union. However, the mere presence of an AI system in one of the categories listed in Annex II does not automatically classify it as high-risk. To determine whether an AI system is high-risk, two conditions must be met:

  1. The AI system is intended to be used as a safety component of a product covered by the Union harmonization legislation specified in Annex II.
  2. The product, either incorporating the AI system as a safety component or the AI system itself as a standalone product, is required to undergo a third-party conformity assessment for market placement, as per the applicable Union harmonization legislation.

When both these conditions are fulfilled, an AI system is classified as high-risk under the AI Act. So, it’s necessary to evaluate whether the AI is intended for use as a safety component in a product covered by the specific regulations and directives listed in Annex II and whether the relevant product is subject to third-party conformity assessment under the Union harmonization legislation.
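
Put differently, the two conditions combine with a logical AND. As a minimal illustration, the hypothetical helper below (the names and structure are our own and not part of the act) encodes that both conditions must hold before an AI system is classified as high-risk via this first route:

```python
from dataclasses import dataclass


@dataclass
class AnnexIICheck:
    """Answers to the two Annex II questions for one AI system (illustrative only)."""
    is_safety_component_of_annex_ii_product: bool   # condition 1
    product_requires_third_party_assessment: bool    # condition 2


def is_high_risk_via_annex_ii(check: AnnexIICheck) -> bool:
    """High-risk under the first definition only if BOTH conditions are fulfilled."""
    return (
        check.is_safety_component_of_annex_ii_product
        and check.product_requires_third_party_assessment
    )


# Example: an AI braking controller that is a safety component of a machine covered
# by Annex II legislation and subject to third-party conformity assessment.
print(is_high_risk_via_annex_ii(AnnexIICheck(True, True)))   # True
print(is_high_risk_via_annex_ii(AnnexIICheck(True, False)))  # False
```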

The second definition of what constitutes high-risk relates to Annex III of the EU AI Act. This annex provides a list that directly identifies high-risk AI systems. Unlike Annex II, which serves as a reference for potentially high-risk domains, Annex III provides a definitive classification of AI systems considered high-risk. The listed AI systems are intended for use in specific domains and are categorized into eight areas, summarized below (a simple screening sketch follows the list):

  1. Biometric identification and categorization of natural persons: AI systems used for "real-time" and "post" remote biometric identification.
  2. Management and operation of critical infrastructure: AI systems employed as safety components in managing road traffic and essential utilities.
  3. Education and vocational training: AI systems that determine access to educational institutions and assess students or candidates.
  4. Employment, workers management, and access to self-employment: AI systems involved in recruitment, selection, promotion, termination, and performance evaluation of workers.
  5. Access to essential private and public services and benefits: AI systems used for evaluating eligibility for public assistance, creditworthiness, and dispatching emergency services.
  6. Law enforcement: AI systems used for risk assessments, lie detection, deep fake detection, evidence reliability evaluation, criminal offense prediction, profiling, and crime analytics.
  7. Migration, asylum, and border control management: AI systems used for assessing risks posed by individuals entering or present in a Member State, verifying travel documents, and processing asylum, visa, and residence permit applications.
  8. Administration of justice and democratic processes: AI systems assisting judicial authorities in researching, interpreting facts, applying the law, and making decisions in legal contexts.
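
To make the screening against these eight areas repeatable, a simple lookup can help. The sketch below is hypothetical (the keys and descriptions are paraphrased summaries, not the act's legal wording) and only flags whether a system's intended use touches one of the Annex III areas:

```python
# Paraphrased summaries of the eight Annex III areas (not the act's legal wording).
ANNEX_III_AREAS = {
    "biometric_identification": "Biometric identification and categorization of natural persons",
    "critical_infrastructure": "Management and operation of critical infrastructure",
    "education": "Education and vocational training",
    "employment": "Employment, workers management, and access to self-employment",
    "essential_services": "Access to essential private and public services and benefits",
    "law_enforcement": "Law enforcement",
    "migration_border": "Migration, asylum, and border control management",
    "justice_democracy": "Administration of justice and democratic processes",
}


def screen_against_annex_iii(intended_use_areas: set[str]) -> list[str]:
    """Return the Annex III areas that the system's intended use falls into, if any."""
    return [ANNEX_III_AREAS[area] for area in intended_use_areas if area in ANNEX_III_AREAS]


# Example: a CV-ranking tool used in recruitment falls under the employment area.
matches = screen_against_annex_iii({"employment"})
print(matches or "No Annex III area matched - continue with the broader assessment below.")
```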

The Step-by-Step Plan

If the AI system is not identified as high-risk after checking against both Annex II and Annex III, the assessment should not stop there. The AI Act acknowledges that AI is such a fast-paced field that it is not possible to create an exhaustive list of all AI systems that constitute high-risk. Therefore, the Commission has the right to update the Annex III list after the act enters into force by adding more domains. This makes it worthwhile to assess your organization's AI models beyond the lists from the annexes, to prepare yourself for future changes. The following step-by-step plan can help to perform such an assessment:

  1. Define the Purpose and Scope: Organizations must examine the intended purpose of their AI system and the scope in which the AI system will operate.
  2. Refer to the AI Act Categories: Refer to the above-mentioned categories from Annex III to determine if the AI operates within one of these domains.
  3. Assess the AI's Impact: Consider how the AI system's outputs or decisions could affect people's health, rights, and potential for discrimination. For instance, an AI system used in healthcare to diagnose diseases may have significant consequences for patients if it makes inaccurate or biased decisions. What would be the consequences if the model works better for certain demographics than others? What would be the impact of a false positive? And what of a false negative?
  4. Analyze Autonomy Level: Evaluate the degree of autonomy the AI system possesses. If the system makes critical decisions without human intervention, it may indicate a higher risk level.
  5. Evaluate Data Handling: Assess the data the AI system uses and how it handles sensitive personal information. Determine whether the system's data processing could affect individuals' rights or lead to potential privacy and security concerns.
  6. Predict Potential Misuse: Identify possible misuse or manipulation scenarios that could lead to harm or discrimination. Consider how the system might be vulnerable to intentional biases or exploitation. The act states that a model's provider should also consider the impact of reasonably foreseeable misuse. Could someone use your model for something other than what it is intended for? Is the AI integrated into critical infrastructure? What is the potential consequence of a cyberattack?
  7. Document Your Assessment: Document the entire risk assessment process, including the criteria used, the analysis performed, and the conclusions reached. This documentation will be essential for compliance and transparency; a minimal documentation sketch follows this list.
  8. Engage with Experts: Seek input and advice from AI experts, legal professionals, and other relevant stakeholders to ensure a comprehensive assessment.
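
As a starting point for step 7, the hypothetical record below sketches one way to capture the answers from the other steps in a structured, reviewable form. The field names are our own suggestion and are not prescribed by the AI Act:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskAssessmentRecord:
    """Illustrative structure for documenting a high-risk screening (not mandated by the act)."""
    system_name: str
    intended_purpose: str               # step 1: purpose and scope
    annex_iii_areas: list[str]          # step 2: matched AI Act categories
    impact_notes: str                   # step 3: health, rights, discrimination
    autonomy_level: str                 # step 4: e.g. "human-in-the-loop", "fully automated"
    sensitive_data_used: bool           # step 5: data handling
    misuse_scenarios: list[str]         # step 6: foreseeable misuse
    experts_consulted: list[str]        # step 8: stakeholders involved
    conclusion_high_risk: bool
    assessed_on: date = field(default_factory=date.today)


record = RiskAssessmentRecord(
    system_name="CV ranking assistant",
    intended_purpose="Shortlist job applicants for interviews",
    annex_iii_areas=["Employment, workers management, and access to self-employment"],
    impact_notes="Biased ranking could systematically disadvantage protected groups.",
    autonomy_level="human-in-the-loop",
    sensitive_data_used=True,
    misuse_scenarios=["Repurposing candidate scores for dismissal decisions"],
    experts_consulted=["Legal counsel", "HR lead", "ML engineer"],
    conclusion_high_risk=True,
)
print(record)
```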

The AI Act recognizes that identifying high-risk AI is not a one-size-fits-all process. By conducting thorough risk assessments and documenting their findings, organizations can ensure compliance with the AI Act and contribute to AI technology's responsible and ethical use.

Once your organization has determined that an AI model is high-risk, the next step is the conformity assessment. Under the AI Act, high-risk AI systems must undergo a conformity assessment before they can be placed on the market. For AI systems that are safety components of products covered by Annex II, and for remote biometric identification systems, this assessment involves a third-party notified body: a designated independent organization that reviews and verifies your organization's risk assessment and documentation to ensure compliance with the AI Act's stipulations. Most other stand-alone high-risk systems listed in Annex III are instead assessed through internal control checks by the provider, followed by registration in an EU database managed by the Commission.
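
The routing described above follows the explanatory memorandum quoted in the sources below. The hypothetical helper here only sketches that routing as a decision rule; the labels and parameter names are ours, not the act's:

```python
def conformity_route(is_annex_ii_safety_component: bool,
                     is_remote_biometric_identification: bool) -> str:
    """Pick the conformity-assessment route sketched in the AI Act's explanatory memorandum."""
    if is_annex_ii_safety_component:
        # Follows third-party procedures already established under sectoral product safety law.
        return "third-party conformity assessment (sectoral product safety legislation)"
    if is_remote_biometric_identification:
        return "third-party conformity assessment (notified body)"
    # Other stand-alone Annex III systems: internal control checks by the provider,
    # plus registration in the EU database managed by the Commission.
    return "internal control checks by the provider + EU database registration"


print(conformity_route(is_annex_ii_safety_component=False,
                       is_remote_biometric_identification=False))
```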

In conclusion, identifying high-risk AI requires an evaluation of the AI system's purpose, potential impact, data use, and context. The criteria provided in the AI Act serve as a foundation, but organizations should also consider other factors specific to their application and industry. By carefully assessing the risk level of their AI systems, organizations can ensure compliance with the AI Act and contribute to the responsible and ethical deployment of AI in our society. If you have any further questions concerning the role of AI in your organization, feel free to contact us.

Sources:

Relevant sections of the AI Act as stated on: https://artificialintelligenceact.eu/the-act/


Explanatory Memorandum, Section 5.2.3

The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

Chapter 1 of Title III sets the classification rules and identifies two main categories of high-risk AI systems:

  • AI systems intended to be used as safety component of products that are subject to third party ex-ante conformity assessment;
  • Other stand-alone AI systems with mainly fundamental rights implications that are explicitly listed in Annex III.

This list of high-risk AI systems in Annex III contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future. To ensure that the regulation can be adjusted to emerging uses and applications of AI, the Commission may expand the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and risk assessment methodology.

As regards stand-alone high-risk AI systems that are referred to in Annex III, a new compliance and enforcement system will be established. This follows the model of the New Legislative Framework legislation implemented through internal control checks by the providers with the exception of remote biometric identification systems that would be subject to third party conformity assessment. A comprehensive ex-ante conformity assessment through internal checks, combined with a strong ex-post enforcement, could be an effective and reasonable solution for those systems, given the early phase of the regulatory intervention and the fact the AI sector is very innovative and expertise for auditing is only now being accumulated. An assessment through internal checks for ‘stand-alone’ high-risk AI systems would require a full, effective and properly documented ex ante compliance with all requirements of the regulation and compliance with robust quality and risk management systems and post-market monitoring. After the provider has performed the relevant conformity assessment, it should register those stand-alone high-risk AI systems in an EU database that will be managed by the Commission to increase public transparency and oversight and strengthen ex post supervision by competent authorities. By contrast, for reasons of consistency with the existing product safety legislation, the conformity assessments of AI systems that are safety components of products will follow a system with third party conformity assessment procedures already established under the relevant sectoral product safety legislation. New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems (and notably changes which go beyond what is pre-determined by the provider in its technical documentation and checked at the moment of the ex-ante conformity assessment).


Article 6 - Classification rules for high-risk AI systems

  1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a)  the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;

(b)  the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

  2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.

Article 7 - Amendments to Annex III

  1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:

(a)  the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;

(b)  the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

  2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:

(a)  the intended purpose of the AI system;

(b)  the extent to which an AI system has been used or is likely to be used;

(c)  the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;

(d)  the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;

(e)  the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;

(f)  the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age;

(g)  the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;

(h)  the extent to which existing Union legislation provides for:

(i)  effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;

(ii)  effective measures to prevent or substantially minimise those risks.
