Getting democracy wrong
Recent developments in large language models and computer automated systems more generally (colloquially called ‘artificial intelligence’) have given rise to concerns about potential social risks of AI. Of the numerous industry-driven principles put forth over the past decade to address these concerns, the Future of Life Institute’s Asilomar AI principles are particularly noteworthy given the large number of wealthy and powerful signatories. This paper highlights the need for critical examination of the Asilomar AI Principles. The Asilomar model, first developed for biotechnology, is frequently cited as a successful policy approach for promoting expert consensus and containing public controversy. Situating Asilomar AI principles in the context of a broader history of Asilomar approaches illuminates the limitations of scientific and industry self-regulation. The Asilomar AI process shapes AI’s publicity in three interconnected ways: as an agenda-setting manoeuvre to promote longtermist beliefs; as an approach to policy making that restricts public engagement; and as a mechanism to enhance industry control of AI governance.
Main Authors: | Gwendolyn Blue; Mél Hogan |
Format: | Article |
Language: | English |
Published: | DIGSUM, 2024-12-01 |
Series: | Journal of Digital Social Research |
Subjects: | principles; Asilomar; governance; Artificial intelligence; biotechnology; longtermism |
Online Access: | https://publicera.kb.se/jdsr/article/view/40477 |
_version_ | 1846098188819234816 |
author | Gwendolyn Blue; Mél Hogan |
author_facet | Gwendolyn Blue; Mél Hogan |
author_sort | Gwendolyn Blue |
collection | DOAJ |
description | Recent developments in large language models and computer automated systems more generally (colloquially called ‘artificial intelligence’) have given rise to concerns about potential social risks of AI. Of the numerous industry-driven principles put forth over the past decade to address these concerns, the Future of Life Institute’s Asilomar AI principles are particularly noteworthy given the large number of wealthy and powerful signatories. This paper highlights the need for critical examination of the Asilomar AI Principles. The Asilomar model, first developed for biotechnology, is frequently cited as a successful policy approach for promoting expert consensus and containing public controversy. Situating Asilomar AI principles in the context of a broader history of Asilomar approaches illuminates the limitations of scientific and industry self-regulation. The Asilomar AI process shapes AI’s publicity in three interconnected ways: as an agenda-setting manoeuvre to promote longtermist beliefs; as an approach to policy making that restricts public engagement; and as a mechanism to enhance industry control of AI governance. |
format | Article |
id | doaj-art-6061ec6ad1c648a299e9980c0a1a061f |
institution | Kabale University |
issn | 2003-1998 |
language | English |
publishDate | 2024-12-01 |
publisher | DIGSUM |
record_format | Article |
series | Journal of Digital Social Research |
spelling | doaj-art-6061ec6ad1c648a299e9980c0a1a061f; 2025-01-02T01:40:08Z; eng; DIGSUM; Journal of Digital Social Research; ISSN 2003-1998; 2024-12-01; vol. 6, no. 4; 10.33621/jdsr.v6i440477; Getting democracy wrong; Gwendolyn Blue (https://orcid.org/0000-0003-3510-3248); Mél Hogan (https://orcid.org/0000-0003-2786-5998); University of Calgary; https://publicera.kb.se/jdsr/article/view/40477; principles; Asilomar; governance; Artificial intelligence; biotechnology; longtermism |
spellingShingle | Gwendolyn Blue; Mél Hogan; Getting democracy wrong; Journal of Digital Social Research; principles; Asilomar; governance; Artificial intelligence; biotechnology; longtermism |
title | Getting democracy wrong |
title_full | Getting democracy wrong |
title_fullStr | Getting democracy wrong |
title_full_unstemmed | Getting democracy wrong |
title_short | Getting democracy wrong |
title_sort | getting democracy wrong |
topic | principles; Asilomar; governance; Artificial intelligence; biotechnology; longtermism |
url | https://publicera.kb.se/jdsr/article/view/40477 |
work_keys_str_mv | AT gwendolynblue gettingdemocracywrong AT melhogan gettingdemocracywrong |