The European AI Act is set to take effect at the end of 2023 and can have far-reaching consequences for organizations using artificial intelligence. Many organizations face similar challenges when preparing for the AI Act, and being aware of these challenges can be the first step towards compliance. This conclusion comes from a paper I co-authored, which investigates how well-prepared organizations are for compliance with the AI Act and identifies challenges that occurred across numerous organizations. Being aware of these challenges, and reflecting on how your organization deals with them, can help you use AI more responsibly.
The AI Act focuses on a specific category of AI systems labelled 'high-risk'. The binding rules in the act apply only to high-risk AI systems and serve as guidelines for lower-risk ones. Numerous factors influence whether an AI system is high-risk, but in general a high-risk system is one that can negatively affect human safety or fundamental rights, or lead to discrimination. Note that our paper does not focus on determining whether an AI system is high-risk, although we found that many organizations are unsure whether this classification applies to their applications. Since the AI Act is relevant for all AI systems, albeit as guidelines rather than binding rules for lower-risk ones, being aware of the challenges identified in the paper can still help ensure that a model is reliable and used responsibly.
The paper "Complying with the EU AI Act" delves into the impact of the proposed EU AI Act on organizations. Different aspects of the AI Act are categorized to understand the compliance landscape, and a questionnaire is designed to gather insights directly from organizations. Through interactive interviews and an online survey, valuable data is obtained, which allows for computing a "compliance score" in each category. The paper shows that many organizations are not fully prepared, particularly for technical documentation and data/model bias training. The study identifies prevalent questions and challenges organizations face concerning the AI Act, pointing towards areas where improvement is essential.
The paper identifies five categories. Each category will be discussed along with the most significant challenge observed, starting with the one that scored lowest among all respondents and therefore requires the most attention.
All organizations must assess their AI systems and reflect on the challenges mentioned above to determine how to improve their AI implementation. This can be a daunting task. That is why, at Babelfish, we help organizations address these challenges and embrace AI responsibly. Stay tuned for more updates and insights on AI compliance and best practices.
The full article is available on arXiv.