Preparing for the AI Act: the Biggest Challenges

The European AI Act is set to take effect at the end of 2023 and can have far-reaching consequences for organizations that use artificial intelligence. Many organizations face similar challenges when preparing for the AI Act, and being aware of these challenges is a first step towards compliance. This conclusion comes from a paper I co-authored, which investigates how well-prepared organizations are for compliance with the AI Act and identifies challenges that recur across numerous organizations. Being aware of these challenges, and reflecting on how your organization deals with them, can help you use AI more responsibly.

The AI Act focuses on a specific category of AI systems labelled ‘high-risk’. The rules in the act are binding only for high-risk AI systems and serve as guidelines for lower-risk ones. Numerous factors influence whether a system is high-risk, but in general a high-risk AI is one that can negatively affect human safety or fundamental rights, or lead to discrimination. Note that our paper does not focus on determining whether a system is high-risk, although we found that many organizations are unsure whether this applies to their applications. But since the AI Act (AIA) is relevant for all AI systems, albeit as guidelines rather than binding rules for lower-risk ones, being aware of the challenges identified in the paper can still help to ensure that a model is reliable and used responsibly.


The paper "Complying with the EU AI Act" delves into the impact of the proposed EU AI Act on organizations. Different aspects of the AI Act are categorized to understand the compliance landscape, and a questionnaire is designed to gather insights directly from organizations. Through interactive interviews and an online survey, valuable data is obtained, which allows for computing a "compliance score" in each category. The paper shows that many organizations are not fully prepared, particularly for technical documentation and data/model bias training. The study identifies prevalent questions and challenges organizations face concerning the AI Act, pointing towards areas where improvement is essential.

The paper identifies five categories. Each is discussed below along with the most significant challenge observed, starting with the category that scored lowest among all respondents and therefore requires the most attention.

  • Technical documentation: the AI Act gives clear guidelines on what the technical documentation must include. Organizations are least prepared in this area because many have no process for communicating user-oriented or architectural compliance requirements to the people who write the documentation. As a result, the technical documentation is written without awareness of what information must be documented to comply with the regulations.
  • Data and model: this covers training and testing the model. The biggest challenges relate to bias in the model and in the data. Organizations report that they have not encountered any risks in two years of dataset usage, which raises questions about their understanding of potential risks: almost every dataset carries some risk when used to train an AI. Personnel must be trained to identify these risks and mitigate them adequately; the sketch after this list shows one simple metric that can surface such bias.
  • User communication: this concerns the user of the AI; if the AI is used within the organization, the user is an employee. The AI Act stipulates that organizations must communicate accepted risks to the user, which requires metrics to determine the risk to human rights and of discrimination. However, all respondents were unsure which metrics to use to measure the impact on human rights and discrimination. This part of the act remains hard to apply, and organizations should find resources or consult experts on measuring their models' impact.
  • Risk management: the AI Act stipulates that the risks of a system must be identified and mitigated appropriately. Most organizations have a risk management system, but it is sometimes vague and, in most cases, never reviewed. Identifying risks in a timely manner is one of the most important ways to ensure that AI is developed and deployed responsibly.
  • Model monitoring: the study concluded that organizations are best prepared in this area. Models are usually monitored after release and updated when necessary, for instance when the data has become outdated.
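
To make the bias challenge from the "Data and model" item more concrete, the sketch below computes one common fairness metric, the demographic parity difference. The metric choice and the data are illustrative; neither the AI Act nor the paper prescribes specific metrics, which is precisely the difficulty respondents reported.

```python
# Minimal sketch: demographic parity difference, one of many metrics
# that can flag bias in a model's decisions. The data below is made up
# for illustration; a real audit should combine several metrics with
# domain review.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-decision rate between the most and least favored group."""
    rates = {}
    for g in set(groups):
        decisions = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    values = sorted(rates.values())
    return values[-1] - values[0]

# 1 = positive decision (e.g. loan approved), for two groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 -> a large gap
```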

All organizations should assess their AI systems and reflect on the challenges above to determine how to improve their AI implementation. This can be a daunting task. That is why, at Babelfish, we help organizations address these challenges and embrace AI responsibly. Stay tuned for more updates and insights on AI compliance and best practices.

The full article is available on arXiv.
