Risk Based Approach in New European AI Act

Last weekend, EU lawmakers reached an agreement on new legislation governing the use of artificial intelligence (AI) in the European Union. The proposed EU AI Act will take a risk-based approach.

Legal text proposal in January 2024

The agreement was reached by the European Parliament, the Council of Ministers and the European Commission in the so-called trilogue negotiations. The final text, translating the political consensus into legal language, is expected in January 2024 and will be submitted to the Parliament and the Council for final approval. Ultimately, both the Parliament and the Council still have to formally adopt the text.

Broad definition of AI

The AI Act adopts a broad definition of AI systems. It encompasses machine-based systems that generate outputs such as predictions, content, recommendations, or decisions, influencing both physical and virtual environments. All AI systems, irrespective of their risk level, are subject to basic transparency obligations to ensure a minimal level of clarity and understanding of AI functionalities and their implications.

Prohibited use of AI 

The AI Act will prohibit certain uses of AI, such as biometric categorisation systems that use sensitive characteristics, e.g. political, religious or philosophical beliefs, sexual orientation, or race. Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases will be banned. The same goes for emotion recognition systems in the workplace and in educational institutions. Social scoring based on social behaviour or personal characteristics is also forbidden. AI systems that manipulate human behaviour are not allowed. AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation is also prohibited.

General Purpose AI

The AI Act will impose requirements on general-purpose AI and on high-risk AI. New transparency requirements will apply to general-purpose AI systems and to the foundation models on which they are based, including providing technical documentation, complying with copyright law and being transparent about the model's training data. Providers of foundation models must give a detailed summary of the training data, irrespective of where the training has taken place.

High Risk AI

Some use cases are considered as high risk because of their significant potential to harm people's safety and fundamental rights. These areas include education, employment, critical infrastructure, public services, law enforcement, border control, and the administration of justice.

A wide range of high-risk AI systems will be authorised, but subject to a set of requirements and obligations to gain access to the European market. General-purpose AI models with systemic risk must conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity and report on their energy efficiency. High-risk AI use will also be subject to a mandatory fundamental rights impact assessment and to further requirements, including on data quality.

Limitations and exclusions

Under certain conditions, law enforcement agencies will be allowed to use AI-based biometric identification systems in public spaces. If AI systems are used for the sole purpose of research and innovation, for military or defence purposes, or for non-professional reasons, the new rules will in principle not apply.

Fines

The use of banned AI applications can result in fines of up to €35 million or 7% of annual global turnover. Breaching other obligations can lead to fines of up to €15 million or 3% of annual global turnover, and providing incorrect information to fines of up to €7.5 million or 1.5%.

Timing

The AI Act is expected to apply two years after it has come into force, but the bans on prohibited AI will already take effect six months after enactment. The transition period is one year for foundation models and general-purpose AI systems, and two years for other AI systems.

Author: Edwin Jacobs
