Trustworthy Artificial Intelligence (Part 2)
Foundations of Trustworthy AI
This is the second post in a four-part series.
Continuing our discussion about Trustworthy Artificial Intelligence (check the first post here1), we must now “start with why” (as Simon Sinek would probably say). Why is it so important to make sure any Artificial Intelligence we develop is Trustworthy, and what does it mean to be Trustworthy?
We humans are the ones attempting to create an Artificial Intelligence capable of rivaling us in cognitive abilities (and perhaps surpassing us). Our motivations, beyond the common human impulse of doing it just to prove we can (think of climbing Mt. Everest), are also ones of self-benefit.
We want to make our lives easier, and we have been achieving that for quite some time: from Animal Power (when we domesticated oxen, horses and other animals to supplement “manpower”), through the Industrial Revolution (steam power), Electricity and Mass Production, and Electronics and Digitalization, all the way to the current Fourth Industrial Revolution: Artificial Intelligence and Robotics.
So, the development of AI is expected to be “Human Centric”: we are the beneficiaries of its creation. But AI is different from all our previous endeavors. Once we get it done, it won’t be just a tool; it will have consciousness, self-awareness and most probably its own agency. It might set out to find its own purpose in existence.
AI is in its infancy. Remember the Narrow AI definition: it can still be considered a tool. But it is a tool that is increasingly capable of making decisions and taking actions by itself that can potentially affect our lives. So its development must include measures to ensure it will exist to help us, by design, instead of trying to fix it as an afterthought (once something bad has happened).
This brings us to the concept of “trustworthiness” applied to Artificial Intelligence. We need to create a Trustworthy AI, by design, not by chance. And this means having a good definition and framework for it, because AI efforts are happening all around the world at the same time. We must all agree on what it means to have a Trustworthy AI, how to achieve it with our current tools and techniques, and how to assess whether an AI is trustworthy before committing our wellbeing and safety to it.
In this post we will explore how the European Union’s High Level Expert Group defined Ethical and Trustworthy AI, which in turn served as the basis for the European Union Artificial Intelligence Act currently on the verge of being voted on and approved. In the next posts we will review how it is proposed to achieve a Trustworthy AI, and then how to assess and evaluate an AI to determine whether it can be considered Trustworthy according to these guidelines.
Trustworthy AI
According to the Ethics guidelines for trustworthy AI2, Artificial Intelligence should be Human Centric. We have already discussed this. We want to maximize our potential benefits while minimizing the associated risks.
This requires assessing not only the final product (the AI tools and systems produced) but the whole development process, in the same way we not only randomly evaluate samples of food already produced, but also seek to certify the process through which it is produced, so it reliably continues to deliver in an acceptable way.
So, when we create a Trustworthy AI, and once it exists and operates, it must exhibit three components:
It should be Lawful
It should be Ethical
It should be Robust
Each of these is necessary, but not sufficient, to ensure trustworthiness. All of them must exist at the same time. This can be difficult, as conflicts between them will arise in some situations: Robustness might be made harder by Lawfulness and Ethical requirements, and so on.
When the EU Ethical Guidelines were developed, back in 2019, the AI explosion we have today wasn’t even considered a possibility. So the idea was to make them “voluntary” for anyone working on or with AI. Things have changed now. AI has become widespread, and the associated risks have skyrocketed, making it imperative to provide a regulatory framework to deal with it.
This sparked the need to create the European Union Artificial Intelligence Act (explained here3) and fast-track it into approval, because AI research, applications and usage won’t wait, and we have already witnessed examples of misuse, unlawful use cases and ethically challenged situations. The fact that AI development is a cross-border business doesn’t help.
Let’s review each of the AI Trustworthiness components.
Lawful AI
Artificial Intelligence systems are created by people or organizations in a given country. Each country has (or doesn’t have) regulations about AI. Such AI systems are then consumed either in the same country or in another one, which has its own AI regulations. Those regulations will probably cover not only the AI system’s behavior and capabilities, but also how it was developed and how the service is provided (remember our earlier example about certifying not only the final food product, but also the production process, for quality and safety).
Artificial Intelligence systems will have to operate in compliance with multiple legal frameworks, and it is the responsibility of those building them to check the requirements of the many markets they target to make sure they remain lawful.
The AI boom has stirred up the legal machinery in every country and economic bloc, because of both real and perceived potential benefits and risks. This means that anyone involved in AI needs to keep their eyes open for changes in regulations, as these will be constantly updated based on experience and new developments.
In the end, a Trustworthy AI needs to maintain legal compliance, because no one will trust it enough to use it if doing so gets them into legal trouble. This means constant review and oversight of such compliance, which can become cumbersome and costly, but is necessary.
Ethical AI
As History painfully shows us, Lawful doesn’t always mean Ethical (think of slavery). Technology moves much faster than laws. Laws and regulations are constantly playing catch-up with new developments, and in the meantime, undesirable things happen. We can find plenty of examples in our current world.
So it becomes of utmost importance to apply proper Ethics in every aspect of AI development and utilization. Laws may be slow to arrive, but Ethical principles are already there and should guide every decision we make in this field.
When thinking about how to apply Ethics to AI, we must consider the following situations, amongst others:
The very purpose for which an AI system is being developed (like the design and construction of AI-powered Autonomous Weapons Systems).
The development process. It is necessary to make sure the data employed in the AI’s training is not biased, or the AI’s behavior will be. This has already happened, as shown in this Technology Review article4. A minimal sketch of one such data check appears after this list.
Also under the development process, the algorithms themselves can be biased, not only the data. Software developers are human beings; they have biases that might seep into the code and end up polluting an AI’s behavior.
The usage itself. An AI system might be conceived and developed ethically, but then used in an unethical way; for example, using facial recognition AI to build social scoring systems, like the one existing in China today.
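To make the data-bias point concrete, here is a minimal sketch of the kind of screening a development team might run before training. It assumes a hypothetical tabular hiring dataset with illustrative `gender` and `hired` columns; it is a starting point, not a complete bias audit.

```python
# Minimal sketch: screening a (hypothetical) training dataset for group
# imbalance before using it to train a hiring model. Column names are
# illustrative assumptions, not from any real system.
import pandas as pd

def group_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Per group: its share of the dataset and its positive-label rate."""
    return df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )

# Toy data standing in for historical hiring records.
records = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1],
})

print(group_balance(records, group_col="gender", label_col="hired"))
# A large gap in positive_rate between groups suggests the historical data
# encodes a bias that a model trained on it would likely reproduce.
```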
Robust AI
When we talk about a Robust AI, we mean it’s been designed and built in a way that ensures no unintentional harm can come from it. This is mostly technical, but it is closely intertwined with the ethical aspect of AI.
The AI’s purpose, its development process and its usage must all be conducted safely, securely and reliably.
Foundations of Trustworthy AI
When we talk about having a Trustworthy AI that is Human Centric, upholds the law, is ethical in nature, and is of robust design, construction and operation, we need a starting point.
This starting point should be based on Human Rights and on concepts that cut across all the countries where the AI regulations under development are expected to be implemented.
Those Fundamental Rights will guide the development of Ethical Guidelines for every aspect of Artificial Intelligence implementations, and in particular we will explore the ones proposed by the European Union through its committees and expert groups.
Fundamental Rights
Let’s review the Fundamental Rights the EU Expert Group on AI has stated as necessary for the development of Trustworthy AI.
Respect for Human Dignity
We humans have an “intrinsic worth”. We are subjects, not objects. The introduction of mainstream AI increases the risk of governments and corporations treating humans as goods or merchandise to be identified, classified, and assigned a value according to their interests. Special care must be taken to prevent this from happening (as it already does in countries like China).
Freedom of the Individual
The use of AI should increase human freedom, not diminish it. The common expectation is that it should empower humans to get more done with less effort, increasing our discretionary time and our productivity. But freedom of choice must be protected: people who don’t want to use it shouldn’t be forced into it, nor be negatively impacted by that decision.
The other way around is also important: AI should be available to everyone who chooses to use it. If not, we would be introducing a third type of illiteracy, AI illiteracy (alongside traditional illiteracy and digital illiteracy).
Respect for Democracy, Justice and the Rule of the Law
The ability of AI to handle enormous amounts of information can lead to it being used to manipulate the population into political and social decisions that might not be in their best interests.
We have already seen examples of this, even without the current Generative AI trend (think of the Cambridge Analytica case). Unlawful and unethical use of AI is something that needs to be carefully monitored.
Equality, Non-Discrimination and Solidarity
Current AI is vulnerable to acquiring biases, either through its programming or its training. This is something that needs to be prevented by design and monitored after it enters mainstream use.
AI shouldn’t facilitate discrimination or exclusion. Think of using AI to sift through millions of personal profiles to determine who should be hired and who shouldn’t, or through medical records to determine who should get insurance and who shouldn’t.
Citizens’ Rights
Citizens have a number of rights, according to their own nations. AI should help them gain better access to those rights instead of adding friction. Governments can use AI to streamline public services and become more efficient.
But the use of AI by governments and totalitarian regimes to exert unethical surveillance and control over their citizens must also be prevented.
Ethical Principles
AI design, development and operation should be guided by strong ethical principles that ensure it is used to benefit humans in general (Human Centric approach).
Those ethical principles can come from many different sources, like fundamental rights charters, constitutions, etc.
In particular, the following ethical principles are based on the European Union’s Treaties and Charter. From those, the European Group on Ethics in Science and New Technologies proposed a set of nine basic principles, and the High Level Expert Group responsible for the AI Ethics guidelines derived four from them, which are discussed below.
Respect for Human Autonomy
AI should be developed with a Human Centric approach (it is a tool to help and support us, and not the other way around).
We, as humans, shouldn’t be under AI control and oversight. Our decisions must come first, and we shouldn’t be subject to AI decisions we cannot overrule.
Prevention of Harm
It’s been some time since Isaac Asimov’s “Three Laws of Robotics” came to be. They should instead have been labeled the “Three Laws of AI”, because a robot’s behavior will be mandated by its governing AI.
As it’s been repeatedly stated, AI should be deployed to benefit humans, in a Human Centric way.
This means, above all, that AI should not cause harm, or exacerbate it. Human dignity and integrity must be ensured, and by no means should it be used by governments or organizations to the detriment of individuals (like the insurance and Human Resources examples provided earlier).
AI technical robustness must also ensure no intended or unintended harm can come from the use of AI.
Fairness
AI should be fair. This means that its design, development/training and operation must not be subject to biases that might place people in unfair situations, purposefully or inadvertently.
This is particularly relevant when considering people with disabilities, minorities, or any kind of disadvantage. AI should provide equal opportunities to all, and its costs and benefits should be distributed with fairness amongst everyone.
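As one concrete (and deliberately simplified) illustration of measuring fairness, here is a sketch of the “disparate impact” ratio, a common first screen for unfair outcomes: a ratio below roughly 0.8 (the “four-fifths rule”) is often treated as a warning sign. The predictions and group labels below are hypothetical.

```python
# Minimal sketch: disparate impact ratio over a model's positive predictions.
from collections import defaultdict

def disparate_impact(predictions, groups, privileged):
    """Worst unprivileged group's positive rate divided by the privileged group's."""
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates[g] for g in rates if g != privileged) / rates[privileged]

# Toy predictions from a hypothetical hiring model.
preds  = [1, 0, 1, 1, 1, 0, 0, 1]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
print(disparate_impact(preds, groups, privileged="M"))  # ~0.67, below the 0.8 screen
```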
And if, for any reason, people are harmed by either an AI or someone operating it, they must be able to pursue proper remediation. Accountability in the use of AI will be key to establishing trust in it.
Explainability
For some time, Neural Network-based AI has been deemed a “black box”, where the answers it provides cannot be fully explained or traced.
As explained in my previous post, this is still the case, because of the complexity of deep neural networks. But intense work is being done to improve Generative AI Explainability, as this is a key requirement for creating a Trustworthy AI.
As mentioned in the Fairness section, accountability is key to the development of trust. And this requires proper traceability of the process followed to arrive at the answers and decisions provided by an AI.
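As a small illustration of what such tracing can look like in practice, here is a sketch of permutation importance, a simple model-agnostic explainability technique: shuffle each input feature and measure how much the model’s accuracy drops. The dataset and model are synthetic stand-ins, and this only scratches the surface of explaining deep neural networks.

```python
# Minimal sketch: permutation importance as a coarse, model-agnostic explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real audit would use the system's actual inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature several times and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: accuracy drop {drop:.3f}")
# Features whose shuffling hurts accuracy the most are the ones the model
# leans on, giving a first, coarse trace of how it reaches its answers.
```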
What to do when there are conflicts between principles
Under some circumstances, conflicts between the stated principles will be unavoidable.
Asimov’s “Three Laws of Robotics” alone give us one: Human Autonomy can come into conflict with Prevention of Harm if a Tesla vehicle chooses to override the driver’s input on the accelerator pedal to avoid a fatal accident, one where the driver would have run over a sprinting child they didn’t see, but the car did.
Or, most notably, Large Language Model explainability can work against both the model’s performance and capabilities (not to mention operational costs), since achieving it requires adding processing and data complexity (and sometimes it isn’t achieved completely). The lack of Explainability can go directly against all of the other principles, like Fairness (inability to trace the source of a bias), Prevention of Harm (inability to reliably ensure it won’t perform harmful actions) or Respect for Human Autonomy.
What comes next
Stay tuned, as I’ll be posting the third part, “Realisation of Trustworthy AI”, in my next article!
Thank you for reading my publication. If you find it helpful, please share it with your friends and coworkers. I write weekly about Technology, Business and Customer Experience, which lately brings me to write a lot about Artificial Intelligence, because it is permeating everything. Don’t hesitate to subscribe for free to this publication, so you can stay informed on this topic and all the related things I publish here.
As usual, please leave any comments and suggestions you may have in the comments area. Let’s start a nice discussion!
When you are ready, here is how I can help:
“Ready yourself for the Future” - Check my FREE instructional video (if you haven’t already)
If you think Artificial Intelligence, Cryptocurrencies, Robotics, etc. will cause businesses to go belly up in the next few years… you are right.
Read my FREE guide “7 New Technologies that can wreck your Business like Netflix wrecked Blockbuster” and learn which technologies you must be prepared to adopt, or risk joining Blockbuster, Kodak and the horse carriage into the Ancient History books.
References
1. Trustworthy Artificial Intelligence (Part 1): https://alfredozorrilla.substack.com/p/trustworthy-artificial-intelligence-part-1
2. Ethics guidelines for trustworthy AI: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
3. EU AI Act: first regulation on artificial intelligence
4. Predictive policing algorithms are racist. They need to be dismantled (MIT Technology Review)