
7 Ethical Principles for Trustworthy Artificial Intelligence

By Dr. Nathalie DEVILLIER, Doctor of International Law | Professor of AI Law and Ethics at aivancity

To ensure trustworthy AI, seven fundamental ethical principles must be applied and evaluated throughout the entire lifecycle of the AI system. These requirements are interconnected, equally important, and mutually reinforcing. While they are not exhaustive, they encompass systemic, individual, and societal aspects.

These seven fundamental principles are reflected in the European AI Regulation and are drawn from the European Commission’s 2019 Guidelines.

  1. Human agency and oversight. This principle states that AI systems should support human autonomy and decision-making, not undermine them. AI must enable individuals to maintain adequate control and oversight over systems, respecting fundamental rights and supporting a democratic and equitable society. This implies that users can understand the system, challenge its decisions, and intervene when necessary. Approaches such as "human-in-the-loop" or "human-on-the-loop", in which a human remains in a position of control, are recommended.

For example, in the development of autonomous vehicles, the human driver’s ability to regain control at any time (for example, in the event of dangerous road conditions or AI failure) embodies this principle.
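The "human-in-the-loop" pattern described above can be sketched in a few lines. This is a minimal illustration, not a real control system: the names (`Proposal`, `decide`, `human_approve`, the 0.95 threshold) are all hypothetical. The idea is simply that the AI acts autonomously only in routine, high-confidence cases, and escalates to a human operator otherwise.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A hypothetical action proposed by the AI, with its confidence."""
    action: str
    confidence: float

def decide(proposal: Proposal, human_approve, threshold: float = 0.95) -> str:
    """Return the action to execute, deferring to a human below the threshold.

    `human_approve` is a callback standing in for the human operator:
    it receives the proposal and returns the action the human chooses.
    """
    if proposal.confidence >= threshold:
        return proposal.action        # routine case: AI acts, human supervises
    return human_approve(proposal)    # uncertain case: the human takes control

# Usage: a low-confidence braking proposal is escalated to the operator.
result = decide(Proposal("brake", 0.62), human_approve=lambda p: "manual_override")
print(result)  # manual_override
```

The design point is that the human override is structural, not optional: the uncertain branch cannot execute anything the human has not chosen.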

  2. Technical robustness and safety. A trustworthy AI system must be resilient to attacks, secure, reliable, reproducible, and accurate. It must be able to handle errors and failures, as well as malicious or unintended use. Correct predictions and reliable results are essential to avoid negative consequences.

For example, an AI system used to manage a power plant must be designed to be resilient to cyberattacks and technical failures in order to ensure safe operations and a stable supply.
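One concrete engineering expression of this principle is wrapping the AI model in a fail-safe envelope: validate inputs, catch model failures, and clamp outputs to safe limits. The sketch below is purely illustrative; the sensor range, the actuation limits, and `SAFE_OUTPUT` are invented placeholders, not real plant parameters.

```python
SAFE_OUTPUT = 0.0  # hypothetical fail-safe setpoint: fall back to a safe state

def robust_control(readings: list[float], model) -> float:
    """Wrap an AI controller with input validation and a fail-safe fallback."""
    # Reject missing or physically implausible sensor values
    # (the 0-1000 range is a hypothetical plausibility bound).
    if not readings or any(not (0.0 <= r <= 1000.0) for r in readings):
        return SAFE_OUTPUT
    try:
        out = model(readings)
    except Exception:              # a model crash must never crash the plant
        return SAFE_OUTPUT
    # Clamp the actuation so one bad prediction cannot exceed safe limits.
    return max(0.0, min(out, 100.0))

# Usage: an extreme prediction is clamped to the hypothetical safe maximum.
print(robust_control([250.0, 260.0], model=lambda r: sum(r) / len(r)))  # 100.0
```

The point is architectural: robustness comes from the envelope around the model, so the system degrades safely even when the model itself misbehaves.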

  3. Privacy and data governance. This principle requires respect for privacy, data quality and integrity, and secure access to data. Data used by AI must be relevant, accurate, valid, and reliable. Users must be informed about the collection and use of their data and be able to access and correct it.

For example, applying the General Data Protection Regulation (GDPR) to an AI system for urban surveillance, which requires data minimization, anonymization, and transparency about how the data are used, clearly illustrates this principle.
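The data-minimization requirement can be sketched as a simple filtering step. All field names below are hypothetical, and the hash stands in for a proper pseudonymization scheme: the point is that attributes not needed for the stated purpose are dropped before the data goes anywhere, and direct identifiers never leave collection in the clear.

```python
import hashlib

# Fields actually needed for the (hypothetical) traffic-analysis purpose.
ALLOWED_FIELDS = {"timestamp", "zone", "vehicle_count"}

def pseudonymize(value: str, salt: str = "rotate-me-regularly") -> str:
    """Salted one-way hash standing in for a real pseudonymization scheme."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def minimize(record: dict) -> dict:
    """Keep only purpose-relevant fields; pseudonymize the camera ID."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    out["camera_id"] = pseudonymize(record["camera_id"])
    return out

raw = {"timestamp": "2025-01-01T08:00", "zone": "A3", "vehicle_count": 42,
       "camera_id": "CAM-0017", "plate_number": "AB-123-CD"}
clean = minimize(raw)
print("plate_number" in clean)  # False: the identifier is dropped at the source
```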

  4. Transparency. This principle encompasses traceability, explainability, and communication. AI systems must allow users to understand how their decisions are made and what factors are taken into account. Even for complex systems, some form of explainability is necessary to understand their behavior. Clear communication about the AI system's capabilities and limitations is also essential.

For example, an AI-based bank loan decision algorithm that provides a clear and detailed explanation of the reasons for a loan denial, rather than a simple "no," demonstrates transparency.
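In practice, such an explanation is often delivered as "reason codes": the factors that pushed the score toward denial. The sketch below uses a hypothetical linear scorecard (the feature names, weights, and 0.5 approval threshold are all invented) just to show the mechanism: with a linear model, each feature's contribution is directly readable, and the negative ones become the stated reasons.

```python
# Hypothetical scorecard weights: positive contributions support approval.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "payment_history": 0.5}

def score(applicant: dict) -> float:
    """Linear score: the weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def reason_codes(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the features that hurt the score, worst first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    negative = sorted((f for f in contributions if contributions[f] < 0),
                      key=contributions.get)
    return [f"{f} lowered your score by {abs(contributions[f]):.2f}"
            for f in negative[:top_n]]

applicant = {"income": 0.3, "debt_ratio": 0.9, "payment_history": 0.2}
if score(applicant) < 0.5:       # hypothetical approval threshold
    print(reason_codes(applicant))  # ['debt_ratio lowered your score by 0.54']
```

For opaque models, the same output shape is typically produced with post-hoc attribution methods rather than read off the weights, but the transparency obligation is the same: name the factors, not just the verdict.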

  5. Diversity, non-discrimination, and fairness. This principle aims to ensure that AI systems are developed and used without unfair bias, are accessible to all and inclusive (including of people with disabilities), and involve stakeholder participation. The goal is to prevent and reduce bias in data and algorithms so as to ensure fair and non-discriminatory results.

An example of the application of this principle is the revision of a recruitment AI tool that was found to be sexist due to biases in the training data, in order to make it fairer for all candidates.
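One common way to surface such bias before (or after) deployment is a disparate-impact check: compare selection rates across groups. The sketch below uses invented outcomes, and the 0.8 threshold follows the widely cited "four-fifths" rule of thumb; it is a screening signal that flags a tool for review, not a legal determination of discrimination.

```python
def selection_rate(outcomes: list[tuple[str, bool]], group: str) -> float:
    """Fraction of applicants in `group` who were selected."""
    in_group = [selected for g, selected in outcomes if g == group]
    return sum(in_group) / len(in_group)

def disparate_impact(outcomes, group_a: str, group_b: str) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra = selection_rate(outcomes, group_a)
    rb = selection_rate(outcomes, group_b)
    return min(ra, rb) / max(ra, rb)

# Invented outcomes: (group, was_selected)
outcomes = [("F", True), ("F", False), ("F", False), ("F", False),
            ("M", True), ("M", True), ("M", False), ("M", False)]
ratio = disparate_impact(outcomes, "F", "M")
print(ratio < 0.8)  # True: flags the tool for review under the four-fifths rule
```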

  6. Societal and environmental well-being. AI systems must contribute positively to societal well-being, sustainability, and environmental protection, while upholding social and democratic values. This includes minimizing adverse impacts and taking into account the environmental footprint of AI, such as its energy consumption.

For example, using AI to optimize urban traffic flows and reduce congestion, and with it pollution, improves the quality of life in cities and thereby contributes to societal and environmental well-being.
  7. Accountability. This principle entails auditability, the minimization of adverse effects, and clear communication of responsibilities and remedies. Mechanisms must be in place to ensure responsibility and accountability for AI systems and their outcomes. The traceability of AI operations and the possibility of external audits are crucial. Roles and responsibilities must be clearly defined, and recourse mechanisms (e.g., for complaints) must be available.

For example, in the event of a serious error in a medical diagnostic AI system, the ability to audit the algorithm, its training data, and its decisions, and to clearly identify who is responsible (developer, hospital, doctor) and how patients can obtain redress, illustrates the principle of accountability.
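The auditability requirement translates, at a minimum, into an append-only decision log that an external auditor can verify. The sketch below is a toy illustration (class and field names are hypothetical, and a real system would use persistent, access-controlled storage): each entry records the inputs, the decision, and the responsible party, and chains a hash of the previous entry so after-the-fact tampering is detectable.

```python
import datetime
import hashlib
import json

class AuditLog:
    """Append-only decision log; each entry hashes the previous one,
    so altering any past entry breaks verification."""

    def __init__(self):
        self.entries = []

    def record(self, inputs: dict, decision: str, responsible: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        body = {"time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "inputs": inputs, "decision": decision,
                "responsible": responsible, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        """Recompute every hash in order; False means the chain was altered."""
        prev = ""
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record({"scan_id": "S-1"}, "benign", responsible="radiology-dept")
log.record({"scan_id": "S-2"}, "malignant", responsible="radiology-dept")
print(log.verify())  # True
```

Naming the `responsible` party in every entry is the code-level counterpart of the principle's demand that roles and responsibilities be clearly defined.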

These principles are essential to ensure that AI is developed and used ethically and for the benefit of society.

When implementing ethical requirements for AI systems, conflicts may arise between different principles, making certain trade-offs unavoidable. These decisions must be made in a reasoned and transparent manner, based on current technical knowledge, and taking into account the risks to fundamental rights.

If no ethically acceptable trade-off can be found, the system must not be used as it stands. Decisions must be documented, regularly reviewed, and those responsible held accountable.

In the event of unfair adverse effects, accessible remedies must be provided, with particular attention to vulnerable individuals.
