A Deep Dive Into the EU AI Act: Is Risk Categorization the New Way Forward?
Written by Ujjwala Singh
The EU began considering legislation to monitor and regulate the use of AI – mainly by assessing the risk posed by various AI systems – via the AI Act in April 2021. To this end, the EU aims to regulate “systems that can generate output such as content, predictions, recommendations, or decisions influencing environments” (Reuters, 2023). It has been proposed that any system which uses AI be categorized by its risk level, on a scale ranging from minimal/low risk to unacceptable.
Risk Categorization
According to the Act, practices which have the “significant potential to manipulate persons”[1] – either subliminally or by targeting the vulnerabilities of a particular group or of a specific individual – and which could result in psychological harm shall be considered unacceptable. Also falling under this umbrella are practices or systems which contravene Union values such as fundamental rights, attempt social scoring, or use remote biometric identification. High-risk AI systems will have to be registered in an EU database, and minimal-risk AI should comply with transparency requirements[2].
With regard to generative AI, it has been decided that transparency requirements must be met. These include disclosing that content generated by GAI platforms was made by AI, preventing the generation of illegal content, and publishing summaries of the copyrighted data used to train the AI.
The Many versus the Few: MSA or NSA?
An important divergence across the three versions of the AI Act proposal lies in the method of monitoring enforcement of the Act. There will now be a single, centralized National Surveillance Authority (NSA), which replaces the multiple Market Surveillance Authorities (MSAs) previously proposed. Under the earlier proposals, various pre-existing agencies would have been converted into MSAs. That approach would also have allowed for expansion, meaning any government ministry or department could be made responsible for AI surveillance within its jurisdiction – finance, for example. This concept has now been removed in favour of a single centralized surveillance authority.
Although this makes communication amongst member states easier – each country presents a single point of contact – it may create issues in the understanding and execution of the Act. This is mainly attributable to the separation between experts in the field of AI and experts in the domain at hand, such as finance. Had the pre-existing agencies been allowed to function as multiple entities under an umbrella surveillance agency, the resulting cross-communication and collaboration might have translated into more efficient implementation.
Methods of Enforcement
A centralized national surveillance agency is but one of three methods of enforcing the AI Act. The second involves processes to approve organizations that will analyse and categorize “high risk AI systems”[3]. These independent ‘notified bodies’ will gain approval from the centralized government agency of their member state – the ‘notifying authority’ – after which they have the capacity to conclude whether or not a particular system meets the Act’s requirements. This analysis will be conducted on the basis of documentation of the system’s technical performance and management systems.[4] Currently, 10 of the 85 articles in the Act are dedicated to describing this procedure. The third process requires self-attestation by system developers that they have met the forthcoming requirements for the AI models they create, supplemented by reporting and registration. This process rests heavily on mutual trust and transparency.
Opposition to the Act
Lastly, it is notable how contested this piece of regulatory legislation was. It faced opposition from more than 150 company executives, including powerhouses such as Renault and Heineken, via an open letter. The executives argued that the legislation’s adverse effects on the competitiveness of EU companies were not matched by its capacity to solve the actual problem. The EU AI Act nonetheless survived this conflict and was passed on June 14th, 2023.
Analysis of the EU AI Act
The European Union has long been known for its steadfast commitment to an open internal market. With decisions such as Keck – which consistently sought to place access to the EU internal market above Member State legislation – still in the recent past, it is difficult to reconcile this with the markedly different approach to AI. However, it is important to remember that the EU also goes out of its way to protect its citizens and workers (see Zambrano; Defrenne v Sabena). An apt conclusion, therefore, is that such stringent and carefully tailored legislation is a sharp means to a well-meaning end: the protection of EU citizens.
[1] EU AI Act
[2] European Parliament Website
[3] Brookings Institution, https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/
[4] Ibid.