To achieve trustworthy AI, seven fundamental ethical principles must be applied and evaluated throughout the AI system’s lifecycle. These requirements are interconnected, equally important, and mutually reinforcing. While not exhaustive, they encompass systemic, individual, and societal aspects.
These seven fundamental principles are reflected in the EU AI Act and build on the European Commission's 2019 Ethics Guidelines for Trustworthy AI.
- Human agency and oversight. This principle stipulates that AI systems must support human autonomy and decision-making rather than undermine them. AI must enable individuals to maintain adequate control and oversight over systems, while respecting fundamental rights and sustaining a democratic and equitable society. In practice, users must be able to understand the system, challenge its decisions, and intervene when necessary. Approaches such as "human-in-the-loop" or "human-on-the-loop," in which humans remain in control, are recommended.
For example, in the development of autonomous vehicles, the human driver’s ability to take back control at any time (for instance, in the event of hazardous road conditions or an AI failure) embodies this principle.
- Technical robustness and security. A trustworthy AI system must be resilient to attacks, secure, reliable, reproducible, and accurate. It must be able to handle errors and failures, as well as malicious or unintended uses. Accurate predictions and reliable results are essential to avoid negative consequences.
For example, an AI system used to manage a power plant must be designed to be resilient to cyberattacks and technical failures, in order to ensure operational safety and the stability of the power supply.
- Privacy and data governance. This principle requires respect for privacy, data quality and integrity, and secure access to data. Data used by AI must be relevant, accurate, valid, and reliable. Users must be informed about the collection and use of their data and have the ability to access and correct that data.
For example, the application of the General Data Protection Regulation (GDPR) to an AI-based urban surveillance system—which requires data minimization, anonymization, and transparency regarding data use—is a clear illustration of this.
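Data minimization and pseudonymization can be made concrete with a small sketch. The field names, the salt, and the `minimize` helper below are invented for illustration; a real GDPR-compliant pipeline would define its own schema and key management.

```python
import hashlib

# Hypothetical sketch: keep only the fields the model actually needs,
# and replace the direct identifier with a salted one-way hash.
NEEDED_FIELDS = {"zone", "timestamp"}

def minimize(record: dict, salt: str) -> dict:
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    # Pseudonymize: the original name never leaves this function.
    digest = hashlib.sha256((salt + record["name"]).encode()).hexdigest()
    out["subject_id"] = digest[:12]
    return out

raw = {"name": "Alice Martin", "address": "12 rue X", "zone": "N3", "timestamp": 1700000000}
clean = minimize(raw, salt="s3cret")
print(clean)  # name and address are dropped; only zone, timestamp, subject_id remain
```

Note that salted hashing is pseudonymization, not full anonymization: under the GDPR, pseudonymized data remains personal data as long as re-identification is possible.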
- Transparency. This principle encompasses traceability, explainability, and communication. AI systems must allow users to understand how their decisions are made and what factors are taken into account. Even for complex systems, some form of explainability is necessary to understand their behavior. Clear communication about the capabilities and limitations of the AI system is also essential.
For example, an AI-based bank loan algorithm that provides a clear and detailed explanation of the reasons for a denial, rather than simply returning "no," demonstrates transparency.
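What such an explanation could look like can be sketched with a toy linear scoring model. The feature names, weights, and threshold below are invented; a real lender would use its own model and an explanation method suited to it (e.g., feature attributions).

```python
# Hypothetical sketch: explain a loan denial by reporting which factors
# pulled the score down, instead of returning a bare yes/no.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "credit_history": 0.3}
THRESHOLD = 0.5

def explain_decision(applicant: dict) -> dict:
    # Per-feature contribution to the overall score.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Rank the factors that lowered the score, to justify a denial.
    negative = sorted(contributions.items(), key=lambda kv: kv[1])
    reasons = [f for f, c in negative if c < 0]
    return {"approved": score >= THRESHOLD, "score": round(score, 2), "main_reasons": reasons}

decision = explain_decision({"income": 0.6, "debt_ratio": 0.9, "credit_history": 0.5})
print(decision)  # {'approved': False, 'score': -0.06, 'main_reasons': ['debt_ratio']}
```

The point is the output contract: the applicant receives the decisive factors ("debt_ratio"), not just the verdict, which is what makes the decision contestable.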
- Diversity, non-discrimination, and fairness. This principle aims to ensure that AI systems are developed and used without unfair bias, that they are accessible and inclusive (including for people with disabilities), and that they involve stakeholder participation. The goal is to prevent and reduce bias in data and algorithms so as to ensure fair and non-discriminatory outcomes.
Revising a recruitment AI tool that was found to be sexist due to biases in the training data, in order to make it more equitable for all candidates, is an example of how this principle is applied.
- Social and environmental well-being. AI systems must contribute positively to social well-being, sustainability, and environmental protection, while respecting social and democratic values. This includes minimizing negative incidents and taking into account the environmental impact of AI, such as its energy consumption.
For example, the use of AI to optimize urban traffic flows and reduce congestion, thereby lowering pollution and improving quality of life in cities, contributes to societal and environmental well-being.
- Accountability. This principle entails auditability, minimization of adverse incidents, and clear communication regarding responsibilities and recourse. Mechanisms must be in place to ensure auditability and accountability. The traceability of AI operations and the possibility of external audits are crucial. Roles and responsibilities must be defined, and recourse mechanisms (e.g., for complaints) must be available.
For example, in the event of a serious error by a medical diagnostic AI system, the ability to audit the algorithm, its training data, and its decisions, and to clearly identify who is responsible (developer, hospital, physician) and how patients can obtain redress, exemplifies the principle of accountability.
These principles are essential to ensuring that AI is developed and used in an ethical manner that benefits society.
How can these principles be put into practice?
When implementing ethical requirements for AI systems, conflicts may arise between different principles, making certain trade-offs unavoidable. These decisions must be made in a reasoned and transparent manner, based on current technical knowledge, and by assessing the risks to fundamental rights.
If no ethically acceptable compromise is possible, the system should not be used as is. Decisions must be documented, reviewed regularly, and those responsible must be held accountable.
In the event of an unfair adverse impact, accessible recourse mechanisms must be provided, with particular attention paid to vulnerable individuals.