Automation Bias in the AI Act: On the Legal Implications of Attempting to De-Bias Human Oversight of AI
This paper examines the legal implications of the explicit mentioning of automation bias (AB) in the Artificial Intelligence Act (AIA). The AIA mandates human oversight for high-risk AI systems and requires providers to enable awareness of AB, i.e., the human tendency to over-rely on AI outputs. The paper analyses the embedding of this extra-juridical concept in the AIA, the asymmetric division of responsibility between AI providers and deployers for mitigating AB, and the challenges of legally enforcing this novel awareness requirement. The analysis shows that the AIA’s focus on providers does not adequately address design and context as causes of AB, and questions whether the AIA should directly regulate the risk of AB rather than just mandating awareness. As the AIA’s approach requires a balance between legal mandates and behavioural science, the paper proposes that harmonised standards should reference the state of research on AB and human-AI interaction, holding both providers and deployers accountable. Ultimately, further empirical research on human-AI interaction will be essential for effective safeguards.
| Main Authors: | Johann Laux (Oxford Internet Institute, University of Oxford, UK; ORCID: 0000-0003-3043-075X), Hannah Ruschemeier (Faculty of Law, University of Hagen, Germany; ORCID: 0000-0003-3455-3271) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Cambridge University Press |
| Series: | European Journal of Risk Regulation |
| DOI: | 10.1017/err.2025.10033 |
| ISSN: | 1867-299X; 2190-8249 |
| Subjects: | AI Act (AIA); AI regulation; automation bias (AB); human oversight; General Data Protection Regulation (GDPR) |
| Online Access: | https://www.cambridge.org/core/product/identifier/S1867299X25100330/type/journal_article |