Requirements for High Risk AI Systems in the EU
If your AI system falls into the High Risk category, what requirements must it meet to operate in the European Union?
Hello everyone.
In my previous posts I have already covered:
The Prohibited Artificial Intelligence Practices in the European Union1
The definition of High Risk AI Systems in the European Union2
Now we will discuss the requirements for AI systems classified as High Risk by the European Union AI Act3.
High Risk AI Systems
Chapter III, Section 2 is aptly named Requirements for high-risk AI systems.
Its first article clearly states that the requirements defined in this section are mandatory for AI systems classified as High Risk, and that compliance with them is assessed taking into account the generally acknowledged state of the art in AI and the system's intended purpose.
It also makes clear that the responsibility for ensuring that High Risk AI systems comply with this regulation lies with the providers of such systems.
To avoid duplication of procedures, providers may integrate these risk management procedures into those that already exist for products covered by the Union harmonisation legislation listed in Annex I of the EU AI Act.
Risk Management System
High Risk AI Systems are required to be monitored and managed through a Risk Management System.
It is defined as a continuous and iterative process that must run throughout the entire lifecycle of the High Risk AI System and be regularly reviewed and updated as needed.
The Risk Management System must be targeted at the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose (taken from the Risk Management Section of the EU AI Act).
It follows the typical risk management process:
Identify the risks, as stated previously
Estimate and evaluate the risks that can emerge from usage for their intended purpose, and also reasonably foreseeable misuse
Evaluate other risks that may arise from the data gathered through the post-market monitoring system defined in Article 72 of the EU AI Act
Adopt the appropriate and targeted risk management measures
According to the Article, the only risks of concern are those that can reasonably be mitigated or eliminated through the design and development of the AI system or the provision of adequate technical information (in other words, effort and resources are not wasted on risks that cannot be managed).
Once all the risk management measures have been applied, the remaining overall residual risk must be judged acceptable before proceeding.
To decide on the most appropriate risk management measures, the following hierarchy is recommended (a minimal sketch of how this might be tracked follows the list):
Elimination of risks as far as it is technically feasible, through design and development of the AI System
Mitigation and Control measures for those risks that cannot be eliminated
Training and information for deployers, with the intent of eliminating or reducing risks in the context in which the AI system is expected to be used
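As an illustration of how a provider might track this process internally, here is a minimal Python sketch of a risk register. The scoring scale, the acceptance threshold and the example risk are assumptions made for illustration; the EU AI Act does not prescribe any particular scoring scheme.

```python
from dataclasses import dataclass, field

# Hypothetical 1-5 severity/likelihood scale and acceptance threshold;
# neither is prescribed by the EU AI Act.
ACCEPTABLE_RESIDUAL = 6

@dataclass
class Risk:
    description: str
    affected_area: str              # e.g. "health", "safety", "fundamental rights"
    severity: int                   # 1 (negligible) to 5 (critical)
    likelihood: int                 # 1 (rare) to 5 (almost certain)
    mitigations: list[str] = field(default_factory=list)
    residual_severity: int = 0      # re-estimated after mitigations are applied
    residual_likelihood: int = 0

    def residual_score(self) -> int:
        return self.residual_severity * self.residual_likelihood

def residual_risk_acceptable(register: list[Risk]) -> bool:
    """Check that every identified risk, after mitigation, stays below the
    acceptance threshold agreed for this system."""
    return all(r.residual_score() <= ACCEPTABLE_RESIDUAL for r in register)

# Example: one risk identified during design, mitigated, then re-evaluated.
register = [
    Risk(
        description="Model under-performs for speakers of minority languages",
        affected_area="fundamental rights",
        severity=4, likelihood=3,
        mitigations=["augment training data", "human review of low-confidence cases"],
        residual_severity=2, residual_likelihood=2,
    )
]
print("Overall residual risk acceptable:", residual_risk_acceptable(register))
```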
High Risk AI Systems should be tested throughout their development process, prior to being placed on the market, to identify the most appropriate risk management measures.
Such tests should ensure the AI systems perform consistently according to their intended purpose.
When real world testing is going to take place, it should do so in accordance with Article 60: Testing of High-Risk AI Systems in Real World Conditions Outside AI Regulatory Sandboxes.
The tests should be carried out against previously defined metrics and thresholds appropriate to the intended purpose.
This is to prevent AI developers from fitting the tests to the product instead of measuring the product against the required tests.
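As a sketch of what testing against previously defined metrics and thresholds could look like in practice, the snippet below compares measured values with thresholds fixed before testing. The metric names and numbers are hypothetical; the appropriate ones depend on the system's intended purpose.

```python
# Thresholds are fixed *before* testing; the system is then measured against them.
# Metric names and values are illustrative only.
PREDEFINED_THRESHOLDS = {
    "accuracy": 0.95,               # higher is better
    "false_positive_rate": 0.02,    # lower is better
    "worst_group_accuracy": 0.90,   # higher is better
}

def evaluate_against_thresholds(measured: dict[str, float]) -> dict[str, bool]:
    results = {}
    for metric, threshold in PREDEFINED_THRESHOLDS.items():
        value = measured[metric]
        # Rates must stay at or below the threshold, accuracies at or above it.
        results[metric] = value <= threshold if metric.endswith("rate") else value >= threshold
    return results

measured = {"accuracy": 0.96, "false_positive_rate": 0.03, "worst_group_accuracy": 0.91}
report = evaluate_against_thresholds(measured)
print(report)                # the false positive rate exceeds its threshold here
print(all(report.values()))  # overall pass/fail
```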
If the High Risk AI System is likely to have a negative impact on persons under the age of 18 or vulnerable populations, its risk management system must receive special consideration.
And finally, if the High Risk AI System is part of a product that is already subject to risk management processes under the Union’s law, the new risk management requirements can be combined into them, to avoid duplication.
Data and Data Governance
High Risk AI Systems that involve the training of AI models must be developed on the basis of data sets that meet the quality criteria set out in this regulation.
Training, validation and testing data sets should be subject to data governance and management practices appropriate for the intended purpose of the High Risk AI System.
Citing the EU AI Act, those practices should be applied to:
the relevant design choices;
data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection;
relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation;
the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent;
an assessment of the availability, quantity and suitability of the data sets that are needed;
examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations;
appropriate measures to detect, prevent and mitigate possible biases identified;
the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed.
Training, validation and testing data sets are expected to be relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of their intended purpose.
They must possess the appropriate statistical properties, taking into consideration the population where the High Risk AI System is intended to be used.
This means that those data sets must be crafted according to the specific geographical, contextual, behavioural or functional setting where the High Risk AI System is expected to operate.
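As a rough illustration of what checking these statistical properties could involve, the sketch below compares the distribution of one attribute in a training set against the population the system is intended to serve. The groups, shares and tolerance are invented for the example; real representativeness analyses are considerably more nuanced.

```python
from collections import Counter

# Hypothetical target population where the system will operate.
target_population = {"age_18_40": 0.45, "age_41_65": 0.40, "age_65_plus": 0.15}

# Hypothetical training set, heavily skewed towards younger subjects.
training_samples = ["age_18_40"] * 700 + ["age_41_65"] * 250 + ["age_65_plus"] * 50
observed = Counter(training_samples)
total = sum(observed.values())

MAX_DEVIATION = 0.10  # illustrative tolerance, not prescribed by the Act

for group, expected_share in target_population.items():
    observed_share = observed[group] / total
    ok = abs(observed_share - expected_share) <= MAX_DEVIATION
    print(f"{group}: expected {expected_share:.2f}, observed {observed_share:.2f},"
          f" {'OK' if ok else 'under/over-represented'}")
```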
To ensure bias detection and correction, it might be necessary to exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons.
According to the EU AI Act, in addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur:
the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data;
the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation;
the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations;
the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties;
the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first;
the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data.
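One of the safeguards mentioned above is pseudonymisation. Here is a minimal sketch of that idea, replacing direct identifiers with a keyed hash before the data is used for bias analysis; a real deployment would also need proper key management, strict access controls, and the deletion and documentation duties listed above.

```python
import hashlib
import hmac

# Placeholder key for illustration; in practice this would live in a key
# management system, never in source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "ethnic_group": "group_a", "model_output": 0.82}
analysis_record = {
    "subject_id": pseudonymise(record["name"]),   # no direct identifier kept
    "ethnic_group": record["ethnic_group"],
    "model_output": record["model_output"],
}
print(analysis_record)
```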
Finally, High Risk AI Systems that are developed without training AI models only need to comply with the specified data governance requirements for the testing data sets.
Technical Documentation
The High Risk AI System’s technical documentation must be drawn up before the system is placed on the market or put into service, and it must be kept up to date.
The technical documentation must demonstrate that the High Risk AI System complies with the requirements established in this regulation, so that the competent authorities can assess that compliance.
SMEs and startups can provide the documentation defined in Annex IV: Technical Documentation in a simplified manner, that will be established by the Commission.
If the product is covered by Annex I: List of Union Harmonisation Legislation, a single set of documentation should be provided that includes both what is already required by that Union legislation and what is required by the EU AI Act.
It is important to note that the Commission is empowered to update the Technical Documentation requirements (Annex IV) as needed due to technical progress in the AI field.
Record Keeping
High Risk AI Systems must technically allow for the automatic recording of events (logs) over their entire lifetime.
The logged events should be relevant to:
identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) (a risk at national level) or in a substantial modification;
facilitating the post-market monitoring referred to in Article 72; and
monitoring the operation of high-risk AI systems referred to in Article 26(5).
This means that the logging system should be capable of registering events that reveal risks at national level (Article 79(1)), support the post-market monitoring described in Article 72, and allow the system's operation to be monitored by the relevant actors, with deployers being specifically mentioned (Article 26(5)).
For high-risk AI systems intended for remote biometric identification (Annex III, point 1(a)), the logging capabilities must provide, at a minimum (a structured example follows the list):
recording of the period of each use of the system (start date and time and end date and time of each use);
the reference database against which input data has been checked by the system;
the input data for which the search has led to a match;
the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
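To illustrate, here is a minimal sketch of a structured log entry covering those minimum fields. The field names and example values are assumptions; the Act does not prescribe any particular log format.

```python
import json
from datetime import datetime, timezone

def log_use(reference_database: str, matched_inputs: list[str],
            verifiers: list[str], start: datetime, end: datetime) -> str:
    """Serialise one use of the system as a JSON log entry."""
    entry = {
        "use_start": start.isoformat(),            # period of each use
        "use_end": end.isoformat(),
        "reference_database": reference_database,  # database input data was checked against
        "matched_inputs": matched_inputs,          # input data that led to a match
        "verified_by": verifiers,                  # natural persons verifying the results
    }
    return json.dumps(entry)

print(log_use("watchlist_v3", ["frame_001842"], ["officer_A", "officer_B"],
              datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc),
              datetime(2025, 3, 1, 9, 5, tzinfo=timezone.utc)))
```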
Transparency and Provision of Information to Deployers
High Risk AI Systems must be designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret their output and use it appropriately, allowing both deployer and provider to comply with their obligations.
The instructions should be provided in an appropriate format, being complete, concise, correct, clear, relevant and accessible to the deployers.
Such instructions must contain at least the following information (a structured sketch follows the list):
the identity and the contact details of the provider and, where applicable, of its authorised representative;
the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
its intended purpose;
the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15: Accuracy, Robustness and Cybersecurity, against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2): Risk Management System;
where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output;
when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;
when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;
where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately;
the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;
the human oversight measures referred to in Article 14: Human Oversight, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers;
the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;
where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12: Record Keeping.
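One way a provider might keep this information complete and auditable is to represent it as structured data and check it before release. The sketch below is purely illustrative, with hypothetical field names and placeholder values; it does not claim to cover every item required by the Act.

```python
from dataclasses import dataclass, asdict

@dataclass
class InstructionsForUse:
    provider_identity: str
    provider_contact: str
    intended_purpose: str
    accuracy_metrics: dict
    known_risk_circumstances: list
    human_oversight_measures: list
    expected_lifetime_and_maintenance: str
    log_collection_mechanisms: str

docs = InstructionsForUse(
    provider_identity="ACME AI GmbH",
    provider_contact="compliance@acme.example",
    intended_purpose="Triage of incoming customer complaints",
    accuracy_metrics={"macro_f1": 0.93},
    known_risk_circumstances=["degraded performance on very short messages"],
    human_oversight_measures=["low-confidence cases routed to human review"],
    expected_lifetime_and_maintenance="5 years; quarterly model review and software updates",
    log_collection_mechanisms="logs exported daily to the deployer's monitoring platform",
)

# Simple completeness check: no mandatory field may be left empty.
missing = [name for name, value in asdict(docs).items() if not value]
print("Missing fields:", missing or "none")
```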
Human Oversight
High Risk AI Systems must be subject to human oversight during their operation, and for that they must have been built with the proper interfaces.
The purpose of such oversight is to minimize the risks to health, safety or fundamental rights that might come from the AI system.
Such oversight measures must be commensurate with the risks, level of autonomy and context of use of the High Risk AI System, and ensured through one or both of the following types of measures:
when technically feasible, oversight measures should be identified and built into the High Risk AI System by the provider before it is placed on the market or put into service
if it is not possible to integrate the oversight measures into the AI system, they should be identified by the provider and implemented by the deployer.
The High Risk AI System should be provided to the deployer in such a way that the natural persons responsible for human oversight are enabled:
to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance
to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons
to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;
to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system
to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state
Special care is given to the use case of AI-powered remote biometric identification, listed in Annex III, point 1(a).
For such systems, the deployer may not act on an identification unless at least two different natural persons with the necessary competence, training and authority have verified and confirmed the result separately.
This requirement of separate verification will not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
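As a minimal sketch of how this separate-verification rule could be enforced in software, the snippet below only confirms an identification once at least two different authorised persons have verified it. The identifiers and the authorisation list are invented for the example.

```python
# Hypothetical list of persons with the necessary competence, training and authority.
AUTHORISED_VERIFIERS = {"officer_A", "officer_B", "supervisor_C"}

def identification_confirmed(verifications: set[str]) -> bool:
    """A match may only be acted upon once at least two *different*
    authorised persons have verified it separately."""
    valid = verifications & AUTHORISED_VERIFIERS
    return len(valid) >= 2

print(identification_confirmed({"officer_A"}))               # False: only one verifier
print(identification_confirmed({"officer_A", "officer_B"}))  # True: two separate verifiers
```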
Accuracy, Robustness and Cybersecurity
High Risk AI Systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
Benchmarking and measurement methodologies will be developed by the Commission with the help of the national metrology and benchmarking authorities.
The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
High Risk AI Systems must take into account the context in which they will be deployed and be as resilient as possible to errors, faults and inconsistencies, especially those arising from interaction with natural persons or other systems.
To achieve robustness, redundancy, backups and failsafes should be considered as part of the technical solutions.
Another possible source of problems is High Risk AI Systems that continue learning after being placed on the market.
Feedback loops of biased outputs influencing future inputs should be eliminated or mitigated appropriately.
High-risk AI systems shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities.
The type of attacks considered should include, but not be limited to:
attacks trying to manipulate the training data set (data poisoning)
attacks manipulating pre-trained components used in training (model poisoning)
inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion)
confidentiality attacks or model flaws
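To make the evasion case concrete, here is a toy illustration of an adversarial example against a simple linear classifier: a small perturbation of the input in the direction of the model's weights flips its decision. The numbers are arbitrary, and real attacks and defences are considerably more sophisticated.

```python
import numpy as np

# Toy linear "model": decision is positive when weights . x + bias > 0.
weights = np.array([1.0, -2.0, 0.5])
bias = 0.1

def predict(x: np.ndarray) -> int:
    return int(weights @ x + bias > 0)

x = np.array([0.2, 0.3, 0.4])
epsilon = 0.3                            # perturbation budget
x_adv = x + epsilon * np.sign(weights)   # FGSM-style step for a linear model

print("original prediction:   ", predict(x))      # 0 for these numbers
print("adversarial prediction:", predict(x_adv))  # flips to 1 despite a small change
```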
Conclusion
High Risk AI Systems have a very strict set of requirements they need to be compliant with over their whole lifecycle: from development to deployment and operation, until they are finally retired.
Appointed authorities and regulatory bodies are responsible for collecting the compliance information.
But it is the providers and deployers who in the end share the responsibility over the proper operation and compliance of the High Risk AI Systems under their control.
We’ll review that topic in my next article.
See you in the next one!
Thank you for reading my publication. If you found it helpful or inspiring, please share it with your friends and coworkers. I write weekly about Technology, Business and Customer Experience, which lately also means writing a lot about Artificial Intelligence, because it is permeating everything. Don't hesitate to subscribe for free to this publication, so you can stay informed on this topic and everything related that I publish here.
As usual, any comments and suggestions you may have, please leave them in the comments area. Let’s start a nice discussion!
When you are ready, here is how I can help:
“Ready yourself for the Future” - Check my FREE instructional video (if you haven’t already)
If you think Artificial Intelligence, Cryptocurrencies, Robotics, etc. will cause businesses to go belly up in the next few years… you are right.
Read my FREE guide “7 New Technologies that can wreck your Business like Netflix wrecked Blockbuster” and learn which technologies you must be prepared to adopt, or risk joining Blockbuster, Kodak and the horse carriage into the Ancient History books.
References
1. Prohibited Artificial Intelligence Practices in the European Union
https://alfredozorrilla.substack.com/p/prohibited-artificial-intelligence-practices-eu
2. High Risk AI Systems in the EU AI Act
https://alfredozorrilla.substack.com/p/high-risk-ai-systems-in-the-eu-ai-act
3. European Union Artificial Intelligence Act (Corrigendum of 19 April 2024)
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf