
Brussels Proposes Regulation to Govern AI – Act I


Maud Lambert

Maud Lambert is a member of the Paris Bar. As a partner at Smalt Avocats, she heads the firm’s Technology, Personal Data, and E-health practice.
She is also an adjunct professor at aivancity school for technology, business & society.

On April 21, 2021, the European Commission presented a proposal for a regulation aimed at establishing a framework for artificial intelligence systems.

This text constitutes the very first legal framework for AI in Europe.

Its goal is to create an ecosystem of trust designed to instill in citizens the confidence needed to adopt AI applications and to provide businesses and public agencies with the legal certainty to innovate using AI.

This text is in line with the European Commission’s recent efforts to advance digital development through the proposals for the Digital Markets Act (DMA), the Digital Services Act (DSA), the Data Governance Act, and the Open Data Directive.

The European Commission has opted for a text in the form of a “regulation” that, once adopted, will be directly applicable in all Member States, with the aim of establishing uniform standards throughout the Union.

WHAT ARE THE GOALS?

This regulation has several objectives, as set out in the proposal's explanatory memorandum:

– ensuring that AI systems placed on the Union market are safe and respect existing law on fundamental rights and Union values;
– ensuring legal certainty to facilitate investment and innovation in AI;
– enhancing governance and the effective enforcement of existing law applicable to AI systems;
– facilitating the development of a single market for lawful, safe, and trustworthy AI applications while preventing market fragmentation.

To achieve these objectives, the Commission proposes a balanced and proportionate regulatory approach to AI, which is limited to establishing the minimum requirements necessary to address the risks and challenges associated with AI without constraining or hindering technological development or disproportionately increasing the cost of bringing AI solutions to market.

The text of the proposal draws on discussions and analyses conducted with all stakeholders, in particular the responses to the extensive public consultation launched after the European Commission published its White Paper on AI in February 2020 (White Paper on Artificial Intelligence, COM(2020) 65 final), and on the work carried out since 2018 by the High-Level Expert Group on Artificial Intelligence (AI HLEG, a group of 52 experts including researchers, academics, representatives of civil society and consumer advocacy groups, corporate employees, policy advisors, and legal experts). The AI HLEG notably developed the Ethics Guidelines for Trustworthy AI, which address fairness, safety, and transparency, as well as the future of work and, more broadly, the impact of AI on fundamental rights, particularly privacy and the protection of personal data.

WHAT DOES THE TEXT ACTUALLY SAY?

The proposed legal framework for AI is based on a risk-based approach to AI systems.

The greater the risks, the stricter the rules governing placing on the market and use: the proposal distinguishes AI practices posing an unacceptable risk (which are prohibited, such as social scoring by public authorities), high-risk systems (subject to strict requirements before they can be placed on the market), systems presenting limited risk (subject to transparency obligations), and minimal-risk systems (which remain largely unregulated).

Diagram taken from the European Commission's website

The proposal also provides for a market-surveillance mechanism to evaluate systems and ensure their compliance, together with an incident-reporting procedure.

The draft regulation establishes a framework to support innovation by encouraging national authorities to set up regulatory “sandboxes” for AI, designed to test innovative technologies in a controlled environment for a limited period of time and to reduce the regulatory burden on SMEs and startups.

Finally, the legislation provides for heavy penalties for non-compliance with these new requirements, ranging up to 30 million euros or 6% of total worldwide annual turnover, whichever is higher.

The proposed new rules will apply to public sector entities and companies alike, regardless of their size and whether they are established within or outside the European Union, for any AI system placed on the EU market or whose use has an “impact” on individuals located in the EU. Affected U.S. and Chinese companies will therefore also have to comply with this regulation.

WHEN MIGHT THIS REGULATION APPLY?

From a timeline perspective, it will likely take a few more years for this legislation to be adopted.

It will, in fact, have to be reviewed by the European Parliament and then by the Council until a compromise text is reached. By comparison, the GDPR was adopted after four years of discussion.

In addition, as with the GDPR, the proposal provides for a 24-month transition period following its adoption to allow stakeholders to comply before it takes effect in the Member States.

Stay tuned!

AND ABROAD?

Abroad, the regulatory framework for AI is still in its infancy.

For example, in the United States, a few laws have been enacted to regulate specific aspects of artificial intelligence, such as facial recognition and transparency regarding connected devices, but there is no overarching framework.

China plans to establish a legal, ethical, and strategic framework for artificial intelligence by 2025.

In Canada, the Office of the Privacy Commissioner of Canada released its recommendations for a regulatory framework for artificial intelligence in November 2020.
