An Adaptive Conceptualisation of Artificial Intelligence and the Law, Regulation and Ethics
The description of a combination of technologies as ‘artificial intelligence’ (AI) is misleading. To ascribe intelligence to a statistical model without human attribution points towards an attempt at shifting legal, social, and ethical responsibilities to machines. This paper exposes the deeply flawed characterisation of AI and the unearned assumptions that are central to its current definition, characterisation, and efforts at controlling it.
| Main Author: | Ikpenmosa Uhumuavbi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Laws |
| Subjects: | artificial intelligence; law; regulations; ethics; AI conceptualisation; responsible AI |
| Online Access: | https://www.mdpi.com/2075-471X/14/2/19 |

| author | Ikpenmosa Uhumuavbi |
|---|---|
| collection | DOAJ |
| description | The description of a combination of technologies as ‘artificial intelligence’ (AI) is misleading. To ascribe intelligence to a statistical model without human attribution points towards an attempt at shifting legal, social, and ethical responsibilities to machines. This paper exposes the deeply flawed characterisation of AI and the unearned assumptions that are central to its current definition, characterisation, and efforts at controlling it. The contradictions in the framing of AI lie at the root of the incapacity to regulate it. A revival of applied definitional framing of AI across disciplines has produced a plethora of conceptions and inconclusiveness. Therefore, this research advances its position with two fundamental and interrelated arguments. First, the difficulty in regulating AI is tied to its characterisation as artificial intelligence. This has triggered existing and new conflicting notions of the meaning of ‘artificial’ and ‘intelligence’, which are broad and largely unsettled. Second, difficulties in developing a global consensus on responsible AI stem from this inconclusiveness. To advance these arguments, this paper utilises functional contextualism to analyse the fundamental nature and architecture of artificial intelligence and human intelligence. There is a need to establish a test for ‘artificial intelligence’ in order to ensure the appropriate allocation of rights, duties, and responsibilities. Therefore, this research proposes, develops, and recommends an adaptive three-element, three-step threshold for achieving responsible artificial intelligence. |
| format | Article |
| id | doaj-art-bba5167e9f7d4754883a2d26e00ad2c3 |
| institution | DOAJ |
| issn | 2075-471X |
| language | English |
| publishDate | 2025-03-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Laws |
| doi | 10.3390/laws14020019 |
| citation | Laws, vol. 14, no. 2, art. 19; MDPI AG; 2025-03-01 |
| author_affiliation | Central Asian Legal Research Fellow, Tashkent State University of Law, Sayilgokh Street 35, Tashkent 100047, Uzbekistan |
| title | An Adaptive Conceptualisation of Artificial Intelligence and the Law, Regulation and Ethics |
| topic | artificial intelligence; law; regulations; ethics; AI conceptualisation; responsible AI |
| url | https://www.mdpi.com/2075-471X/14/2/19 |