Balancing public interest, fundamental rights, and innovation: The EU’s governance model for non-high-risk AI systems
The question of the concrete design of a fair and efficient governance framework to ensure responsible technology development and implementation concerns not only high-risk artificial intelligence systems. Everyday applications with a limited ability to inflict harm are also addressed. This article examines the European Union's approach to regulating these non-high-risk systems. We focus on the governance model for these systems established by the Artificial Intelligence Act. Based on a doctrinal legal reconstruction of the rules for codes of conduct and considering the European Union's stated goal of achieving a market-oriented balance between innovation, fundamental rights, and public interest, we explore our topic from three different perspectives: an analysis of specific regulatory components of the governance mechanism is followed by a reflection on ethics and trustworthiness implications of the EU's approach and concluded by an analysis of a case study from an NLP-based, language-simplifying artificial intelligence application for assistive purposes.
| Main Authors: | Michael Gille, Marina Tropmann-Frick, Thorben Schomacker |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Alexander von Humboldt Institute for Internet and Society, 2024-09-01 |
| Series: | Internet Policy Review |
| Subjects: | Artificial intelligence; Co-regulation; Self-regulation; Codes of conduct; AI Act |
| Online Access: | https://policyreview.info/node/1797 |
| Published in: | Internet Policy Review, volume 13, issue 3 |
|---|---|
| ISSN: | 2197-6775 |
| DOI: | 10.14763/2024.3.1797 |
| Author affiliation: | Hamburg University of Applied Sciences (all authors) |