EU AI Act

Introduction to the New Era of AI Legislation

The adoption of the AI Act marks a turning point in the regulation of artificial intelligence (AI). The law establishes a comprehensive legal framework for dealing with AI technologies, particularly with regard to their ethical and societal implications. Below, we delve into the details of the legislation and discuss what it means for the future of AI.

Risk-Based AI Classification: An Overview

The AI Act introduces a tiered system that classifies AI systems according to their potential risk. This classification aims to responsibly shape the use of AI in critical areas and minimize risks.

Unacceptable Risk (Prohibited AI)

AI systems considered a clear threat to human safety and rights are banned outright. This includes systems used for social scoring by governments, as well as voice-assisted toys that encourage dangerous behavior in children.

High-Risk AI Systems

High-risk AI systems are further divided into two main categories:

  • AI Systems as Products or Safety Components: This category refers to AI systems that are either products themselves or integral safety components within products. Examples include AI-driven toys, vehicles, medical devices, and elevator systems.
  • AI Systems in Specified High-Risk Areas: These encompass AI applications in areas posing significant risks to health, safety, or fundamental rights, such as in the transportation or energy sector, biometric identification, and law enforcement.

Limited Risk

AI systems with limited risk, like chatbots, require specific transparency obligations to ensure users are aware they are interacting with a machine.

Minimal or No Risk

The proposal allows for the free use of AI with minimal risk, such as AI-enabled video games or spam filters. The majority of AI systems currently deployed in the EU fall into this category.

Prohibited AI Applications

The AI Act lists specific AI applications that are now banned, including:

  • Biometric categorization systems that use sensitive characteristics (e.g., political or religious beliefs, sexual orientation, or race).
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Emotion recognition in the workplace and in educational institutions.
  • Social scoring systems that evaluate citizens’ social behavior.
  • AI systems that manipulate human behavior to circumvent free will, for example to sway purchasing decisions or voting behavior.

Requirements for High-Risk AI Systems

High-risk AI systems must meet stringent obligations before market launch, including:

  • Appropriate risk assessment and mitigation systems.
  • High quality of datasets to minimize risks and discriminatory outcomes.
  • Activity logging to ensure traceability of results (a minimal sketch follows this list).
  • Detailed documentation about the system and its purpose.
  • Clear and adequate user information.
  • Adequate human oversight measures.
  • High robustness, security, and accuracy.
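
To make the logging and human-oversight duties more concrete, the following minimal Python sketch wraps a prediction function so that every call writes a traceable audit record, and low-confidence outputs are flagged for human review. The function names, the 0.80 threshold, and the record schema are illustrative assumptions, not requirements taken from the Act.

```python
# Illustrative only: a minimal audit-logging wrapper, not an official
# compliance mechanism. predict_fn, MODEL_VERSION, and the record
# schema are hypothetical.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

MODEL_VERSION = "example-model-1.0"   # hypothetical identifier
REVIEW_THRESHOLD = 0.80               # confidence below this is flagged

def predict_with_audit(predict_fn, input_text: str) -> dict:
    """Run a prediction and write a traceable audit record."""
    label, confidence = predict_fn(input_text)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        # Hash instead of raw input to avoid logging personal data.
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output": label,
        "confidence": confidence,
        # Low-confidence outputs are routed to a human reviewer.
        "needs_human_review": confidence < REVIEW_THRESHOLD,
    }
    logging.info(json.dumps(record))
    return record

# Example with a dummy model:
result = predict_with_audit(lambda text: ("approved", 0.72),
                            "loan application #42")
print(result["needs_human_review"])  # True -> escalate to a human
```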

All remote biometric identification systems are considered high-risk and are subject to strict requirements. Their use in publicly accessible spaces for law-enforcement purposes is prohibited in principle, with narrowly defined exceptions.

Regulation of Foundation Models

A key component of the law is the regulation of so-called foundation models (general-purpose AI models). The 10^25-FLOP training-compute threshold named in the AI Act is a very high bar that captures only the largest and most compute-intensive models, which are presumed to pose systemic risk. For comparison, training GPT-3 required about 3.14×10^23 FLOPs, well below the threshold. The regulation thus targets only those models whose scale and complexity could have significant impacts on society.
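
To put these numbers in perspective, training compute can be roughly estimated with the common heuristic FLOPs ≈ 6 × parameters × training tokens; applied to GPT-3's published figures (175 billion parameters, roughly 300 billion tokens), it reproduces the ~3.14×10^23 figure cited above. The heuristic is a rule of thumb from the scaling-law literature, not part of the Act:

```python
# Rough training-compute estimate: FLOPs ≈ 6 * parameters * tokens.
# The heuristic and the token count are approximations; only the
# 1e25 FLOPs threshold comes from the AI Act itself.
EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * parameters * tokens

gpt3_flops = training_flops(parameters=175e9, tokens=300e9)
print(f"GPT-3 estimate: {gpt3_flops:.2e} FLOPs")  # ~3.15e+23
print(f"Share of EU threshold: "
      f"{gpt3_flops / EU_SYSTEMIC_RISK_THRESHOLD:.1%}")
# -> roughly 3%: GPT-3-scale training sits well below the 1e25 bar.
```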

Transparency Requirements and Bias Management

Developers of high-risk AI systems must make their approaches to bias mitigation and non-discrimination transparent. This includes training models with diverse and representative datasets to avoid systematic biases and implementing algorithms that actively detect and prevent discrimination.
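
As one concrete illustration of what "actively detecting discrimination" can look like, the sketch below computes the demographic parity gap, i.e., the difference in positive-outcome rates between groups, on toy data. It is one of many possible fairness metrics and not a method mandated by the Act:

```python
# Illustrative fairness check only: demographic parity difference.
# The data and group labels are made up for the example.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (positive) or 0 (negative)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: loan approvals across two hypothetical groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")  # 0.50 -> large gap, investigate
```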

Documentation Duty and Human Oversight

Providers of high-risk AI systems must maintain extensive documentation. This regulation aims to minimize risks such as unforeseen AI decisions, misinterpretations of data, or unintentional discrimination. Human oversight ensures AI decisions are comprehensible and in line with ethical standards.

Sanctions and Financial Risks

Non-compliance can lead to substantial fines: for the most serious violations, up to EUR 35 million or 7% of a company’s global annual turnover, whichever is higher. These sanctions underscore the importance of complying with the new rules.

Labeling Obligation of AI Content

The EU legislation includes specific rules for labeling and transparency in the use of certain AI systems. These rules are anchored in Title IV of the Act (Transparency Obligations for Certain AI Systems) and address the specific manipulation risks that such systems pose.

The key points are:

Situations in which users must be informed:

  • When users interact with AI systems.
  • When using AI systems for the detection of emotions or for the assignment of social categories based on biometric data.
  • When AI systems are used to generate or manipulate image, audio, or video content, especially in the case of “Deepfakes” – content that has been manipulated in such a way that it is barely distinguishable from authentic content.

Nature of the information obligation:

  • Individuals must be informed that they are interacting with AI systems or that their emotions or characteristics are being recognized through automated means.

These regulations aim to increase transparency and inform users about their interaction with AI systems or their analysis by such systems. The goal is to create awareness of the use and potential risks of AI technologies and to ensure that users are adequately informed when they interact with such technologies or are analyzed by them.
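
As a toy illustration of the first point, the hypothetical chatbot wrapper below ensures that the very first reply carries a notice that the user is interacting with an AI system. Both the wording and the design are assumptions; the regulation prescribes the duty to inform, not any particular implementation:

```python
# Minimal sketch of the "inform the user" duty for conversational AI.
# The DisclosedChatbot class and the notice text are hypothetical.
AI_DISCLOSURE = "Notice: You are interacting with an AI system, not a human."

class DisclosedChatbot:
    def __init__(self, generate_reply):
        self.generate_reply = generate_reply
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self.generate_reply(user_message)
        if not self.disclosed:
            self.disclosed = True
            # Prepend the disclosure to the first response only.
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

bot = DisclosedChatbot(lambda msg: f"Echo: {msg}")
print(bot.reply("Hello"))   # first reply carries the notice
print(bot.reply("Thanks"))  # later replies do not repeat it
```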

Article 52

Transparency obligations for certain AI systems

  1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
  2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
  3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
    However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
  4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.
Source: Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, final version, p. 69.
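
As an illustration of the disclosure duty for generated media in paragraph 3, the following sketch uses the Pillow imaging library to stamp a visible "AI-generated" label onto an image. Real deployments might rely on provenance metadata or watermarking instead; the labeling approach and all names here are illustrative assumptions:

```python
# Illustrative only: one simple way to disclose synthetic media, by
# stamping a visible label onto a generated image with Pillow.
from PIL import Image, ImageDraw

def label_ai_generated(image: Image.Image,
                       text: str = "AI-generated") -> Image.Image:
    labeled = image.copy()
    draw = ImageDraw.Draw(labeled)
    # Dark banner along the bottom edge, then the disclosure text.
    w, h = labeled.size
    draw.rectangle([(0, h - 20), (w, h)], fill=(0, 0, 0))
    draw.text((5, h - 17), text, fill=(255, 255, 255))
    return labeled

# Demo with a blank placeholder standing in for a generated image:
fake = Image.new("RGB", (320, 240), color=(120, 160, 200))
label_ai_generated(fake).save("labeled_output.png")
```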

Impacts on Businesses and the AI Industry

Companies that have invested in now-prohibited technologies face the challenge of adjusting their strategies. Increased transparency requirements could affect the protection of intellectual property. Investments in higher-quality data and advanced bias-management tools may raise operational costs, but should lead to fairer, higher-quality AI systems.

Conclusion: A Step Towards Responsible AI

The AI Act represents a significant step towards more responsible use of AI. It offers a legal framework that protects against risks while promoting innovative technologies. For the AI industry, this means an adjustment phase but also an opportunity to develop AI technologies in an ethically and socially responsible manner.

Further Information:

Proposal for a legal framework for artificial intelligence – https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Training and further education in the field of AI Act & AI Safety – https://www.iks.fraunhofer.de/en/services/ai-law-high-risk-systems.html

The Act, proposed in 2021 and now finally adopted, can be viewed here – https://artificialintelligenceact.eu/de/das-gesetz/

The European Parliament’s (EPRS) briefing on the AI Act can be found here – https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
