High-risk AI transparency? On qualified transparency mandates for oversight bodies under the EU AI Act

Bibliographic Details
Main Author: Kasia Söderlund
Format: Article
Language: English
Published: openjournals.nl 2025-06-01
Series: Technology and Regulation
Online Access: https://techreg.org/article/view/19876
Description
Summary: The legal opacity of AI technologies has long posed challenges for addressing algorithmic harms, as secrecy enables companies to retain competitive advantages while limiting public scrutiny. In response, concepts such as qualified transparency have been proposed to provide AI accountability within confidentiality constraints. With the introduction of the EU AI Act, the foundations for human-centric and trustworthy AI have been established. The framework sets regulatory requirements for certain AI technologies and grants oversight bodies broad transparency mandates to enforce the new rules. This paper examines these transparency mandates and argues that the Act effectively implements qualified transparency, which may mitigate the problem of AI opacity. Nevertheless, several challenges remain to achieving the Act's policy objectives.
ISSN:2666-139X