High Risk AI Systems in the EU AI Act
How does the European Union AI Act classify an AI system as High Risk, subjecting it to its regulation and oversight?
In my previous article “Prohibited Artificial Intelligence Practices in the European Union” [1] we discussed the explicitly prohibited practices regarding the use of AI systems in the EU.
But once we set aside all of the forbidden use cases, we are still left with AI systems considered dangerous enough to the wellbeing of society that they require special oversight.
These are designated High Risk AI Systems, and they are subject to a series of strict requirements, including registration in a dedicated EU High Risk AI Systems Database for surveillance once they are on the market.
Classification of AI Systems as High Risk
The first step is to clearly identify which AI systems are considered High Risk, so that they are covered by the EU regulation.
There are two sources of High Risk AI Systems defined in the EU AI Act:
The European Union Harmonization Legislation, listed in Annex I of the EU AI Act [2]
The list of AI systems provided in Annex III of the EU AI Act
Annex I contains a list of regulated products and systems that, because of their critical nature, are subject to special scrutiny even before AI enters the picture.
Under the EU AI Act, any product or system on that list can qualify as a High Risk AI System if the product itself is implemented using AI, or if it uses AI as part of its safety systems.
We will review them here.
High Risk AI Systems from the Harmonization Legislation (Annex I)
This is a complete but simplified version of the Harmonization Legislation list in Annex I of the EU AI Act.
Directive on Machinery
Directive on the Safety of Toys
Directive on Recreational Craft and Personal Watercraft
Directive on Lifts and Safety Components for Lifts
Directive on Equipment and Protective Systems intended for use in Potentially Explosive Atmospheres
Directive on the market of Radio Equipment
Regulation on Cableway installations
Regulation on Personal Protective Equipment
Regulation on Appliances burning Gaseous Fuels
Regulation on Medical Devices
Regulation on In Vitro Diagnostic Medical Devices
Regulation on Common Rules in the field of Civil Aviation Security
Regulation on the approval and market surveillance of two- or three-wheel vehicles and quadricycles
Regulation on the approval and market surveillance of Agricultural and Forestry vehicles
Directive on Marine Equipment
Directive on the interoperability of the Rail System within the European Union
Regulation on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles
Regulation on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users
Regulation on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency
Under the EU AI Act, if an AI system is intended to be used as a safety component of a product in any of the categories listed above, it is automatically considered a High Risk AI System for the purposes of the regulation.
The same applies if the AI system itself is the product and it falls within one of these categories.
For example, if an AI system is designed to perform as part of the safety system of a lift, it is considered a High Risk AI System, and must comply with all the requirements stated by the EU AI Act.
Or if an AI system’s purpose is to control the operation of an appliance that burns gas (for efficiency gains, for example), it is also considered a High Risk AI System.
For greater detail on each product class regulated here, check Annex I of the EU AI Act, where you will find the reference to the European Union legislation that governs it.
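To make the rule concrete, here is a minimal sketch in Python of how a developer might pre-screen a product against the Annex I logic. Everything in it (the category labels, the function and its parameters) is my own hypothetical illustration, not terminology from the Act:

```python
# Hypothetical pre-screening of the Annex I rule, not an official checklist:
# an AI system is High Risk if it IS a product in a regulated category,
# or if it is a safety component OF a product in a regulated category.

# Simplified stand-in labels for the Annex I product categories.
ANNEX_I_CATEGORIES = {
    "machinery", "toys", "lifts", "medical_devices",
    "radio_equipment", "gas_appliances", "civil_aviation",
}

def is_high_risk_annex_i(product_category: str,
                         ai_is_the_product: bool,
                         ai_is_safety_component: bool) -> bool:
    """Return True if the AI system falls under the Annex I rule."""
    if product_category not in ANNEX_I_CATEGORIES:
        return False
    return ai_is_the_product or ai_is_safety_component

# The gas appliance example from above: AI controlling the appliance's operation.
print(is_high_risk_annex_i("gas_appliances",
                           ai_is_the_product=False,
                           ai_is_safety_component=True))  # True
```

In practice, of course, the determination depends on the full legal text of each directive or regulation behind the category, so a checklist like this can only be a first filter before proper legal review.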
High Risk AI Systems from Annex III of the EU AI Act
The following types of AI systems are considered High Risk and are subject to the EU AI Act regulations and requirements reviewed here.
Biometrics, as long as their use is permitted by Union or national laws
Remote Biometric Identification Systems (as long as they are not used solely to verify and confirm that a person is who they claim to be)
AI Systems intended to be used for biometric categorization
AI Systems intended to be used for emotion recognition
This is one of the most specifically regulated AI use cases in the EU AI Act, as we have explored in my previous article “Prohibited Artificial Intelligence Practices in the European Union”.
Note that the outright prohibition on AI for Emotion Recognition applies only to the workplace and educational institutions; in other contexts it remains allowed, but as a High Risk use case (check my article [3] on this topic).
Critical infrastructure
AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
This is probably the most obvious one. Critical infrastructure can be the target of cyberattacks, and malfunctions in these systems can lead to severe harm or death for large portions of the population.
Education and vocational training
AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;
AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions;
AI systems intended to be used for monitoring and detecting prohibited behavior of students during tests in the context of or within educational and vocational training institutions.
One of the most promising use cases of AI is in educational institutions, and there are already many platforms to show for it.
Nevertheless, this is a very delicate context of application, as it can have a profound negative impact on society if not handled carefully.
AI systems are already being used as a tool to help teachers assess large amounts of student homework.
Hallucinations and poor AI performance can lead to improper evaluations and grades, which in some cases will require additional manual review and, in the worst case, lead to student failures.
And if AI is used to assess the appropriateness of students for certain careers, courses, or vocational paths, it can lead to missed opportunities that will harm students and society in the long term.
Employment, workers management and access to self-employment
AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics or to monitor and evaluate the performance and behavior of persons in such relationships.
Access to employment opportunities can also be negatively affected by biased or badly trained AI.
We already have AI systems sifting through thousands of job applications without human intervention in the process.
This has affected the way people craft their resumes, as it is now necessary to assume they will be read not by a human, but by an AI.
So, techniques have been developed to fool AI systems into giving priority to some resumes over others.
Job applicants, in turn, are starting to use AI systems to apply to job offers en masse, automating most of the process and trying to trick the HR AI systems into giving them priority.
It’s an arms race.
Access to and enjoyment of essential private services and essential public services and benefits
AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;
AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems;
If governments start using AI systems to assess eligibility for social services, perform triage in emergency situations, and so on, then proper measures to avoid biases must be in place.
Also, if the private sector uses AI to evaluate individuals to decide if they should provide them services or not, those systems must be proven fair and unbiased.
Imagine a situation where your access to financial services, healthcare, or insurance is subject to the output of an AI system that has been trained with biases that affect you.
Law enforcement, as long as its use is permitted by Union or national laws
AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess a natural person’s risk of becoming the victim of criminal offences;
AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;
AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the likelihood of a natural person offending or re-offending not solely based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;
AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
The use of AI systems for law enforcement is the most heavily regulated AI use case in the EU AI Act, particularly in the form of Real-Time Biometric Identification Systems.
The risk of employing biased or badly trained AI models in law enforcement can directly impact the fundamental rights of the population.
Profiling systems based on biometric and personal characteristics, face or voice recognition in public spaces, criminal offense prediction systems, etc. are all strictly regulated.
In particular, biometric identification systems must be authorized by a judicial authority before being used (much like wiretapping systems today).
Also, the AI systems employed must be registered in the EU High Risk AI Systems Database, and every use must be reported to an appointed authority, which in turn reports to the European Commission.
The aggregate results of the use of such law enforcement AI systems will be made public yearly by the EU agency responsible.
If you are interested in the details, check my previous article: “Prohibited Artificial Intelligence Practices in the European Union”.
Migration, asylum and border control management, as long as its use is permitted by Union or national laws
AI systems intended to be used by competent public authorities as polygraphs and similar tools;
AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;
AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;
AI systems intended to be used by or on behalf of competent public authorities, including Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.
Every use case where large amounts of data need to be processed and vetted can potentially benefit from an AI system.
But as with the previously reviewed scenarios, if the assessment can be afflicted by biases introduced either by the data during training or by biased algorithms during development, those systems require special oversight.
This is especially true when an improper output can impact people's fundamental rights and freedoms, as in the case of migration eligibility assessments.
Administration of justice and democratic processes
AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;
AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organize, optimize or structure political campaigns from an administrative or logistical point of view.
The sheer amount of content that makes up the legislation and regulations of a country, or even more so of a bloc like the European Union, makes it a prime use case for AI systems.
AI systems trained on the laws, cases and doctrine of the jurisdictions where they are intended to be used can be of great help in speeding up the administration of justice.
Not only by facilitating the discovery of obscure or little-known related information, but also by providing summaries and even some level of reasoning and interpretation.
And, as with the other scenarios already reviewed, biases and improper training can lead to severe harm to the wider population.
The same can happen in the use of AI on democratic processes. We have already witnessed the harmful results of this in the Cambridge Analytica scandal.
Exceptions and Amendments
Of course, for every rule there are always exceptions.
In this case, AI systems that would otherwise be classified as High Risk for the purposes of the EU AI Act can be exempted if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.
Use cases that do not materially influence the outcome of a decision process are also exempted.
This is the exhaustive list:
the AI system is intended to perform a narrow procedural task;
the AI system is intended to improve the result of a previously completed human activity;
the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
This exemption enables the use of AI systems as a tool to enhance human processes, with specific objectives and always under final human oversight.
Regardless of this, any AI system that conducts profiling of natural persons will always be considered High Risk.
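To summarize how the exemption works, here is a minimal sketch in Python of the decision logic just described. The class, the field names and the example are my own hypothetical illustration of the four conditions plus the profiling override, under my reading of the Act:

```python
from dataclasses import dataclass

# Hypothetical model of the Annex III exemption test; the field and
# function names are my own illustration, not terminology from the Act.
@dataclass
class AnnexIIISystem:
    performs_narrow_procedural_task: bool           # condition (a)
    improves_completed_human_activity: bool         # condition (b)
    detects_patterns_without_replacing_human: bool  # condition (c)
    performs_preparatory_task_only: bool            # condition (d)
    performs_profiling: bool                        # overrides everything

def is_exempt(system: AnnexIIISystem) -> bool:
    """Return True if the system escapes the High Risk classification."""
    # Profiling of natural persons is always High Risk, no exceptions.
    if system.performs_profiling:
        return False
    # Otherwise, meeting any one of the four conditions grants the exemption.
    return any([
        system.performs_narrow_procedural_task,
        system.improves_completed_human_activity,
        system.detects_patterns_without_replacing_human,
        system.performs_preparatory_task_only,
    ])

# Example: a tool that only polishes an assessment a human already completed.
polisher = AnnexIIISystem(False, True, False, False, performs_profiling=False)
print(is_exempt(polisher))  # True
```

Note how the profiling check comes first: it overrides every exemption condition, exactly as stated above.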
If the developer of an AI system classifiable as High Risk considers that its particular case does not belong in this classification, they can apply for an exemption with the proper EU regulatory body by presenting an assessment of their reasons.
And finally, the European Commission will be permanently reassessing the High Risk AI Systems list, adding or modifying its use cases.
So, if we are involved in either the development or the adoption of such AI systems, we need to check that list periodically.
Conclusion
We have reviewed the use cases that are considered by the European Union Artificial Intelligence Act as High Risk AI Systems.
Most of them were already heavily regulated for other reasons, such as consumer safety, national security, etc.
Whether as entrepreneurs building products that use AI, organizations considering acquiring and deploying an AI system, or members of the general public, we need to be aware of which AI use cases are considered High Risk.
Now that we have identified the High Risk AI System use cases, in my next article we will discuss the requirements that such systems must comply with in order to operate in the EU market.
Thank you for reading my publication, and if you find it helpful or inspiring, please share it with your friends and coworkers. I write weekly about Technology, Business and Customer Experience, which lately leads me to write a lot about Artificial Intelligence as well, because it is permeating everything. Don't hesitate to subscribe for free to this publication, so you can stay informed on this topic and everything related that I publish here.
As usual, please leave any comments and suggestions you may have in the comments area. Let's start a nice discussion!
When you are ready, here is how I can help:
“Ready yourself for the Future” - Check my FREE instructional video (if you haven’t already)
If you think Artificial Intelligence, Cryptocurrencies, Robotics, etc. will cause businesses to go belly up in the next few years… you are right.
Read my FREE guide “7 New Technologies that can wreck your Business like Netflix wrecked Blockbuster” and learn which technologies you must be prepared to adopt, or risk joining Blockbuster, Kodak and the horse carriage into the Ancient History books.
References
1. Prohibited Artificial Intelligence Practices in the European Union
https://alfredozorrilla.substack.com/p/prohibited-artificial-intelligence-practices-eu
2. European Union Artificial Intelligence Act
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html
3. Waiter, there's an AI in my soup!
https://alfredozorrilla.substack.com/p/waiter-theres-an-ai-in-my-soup