If Artificial Superintelligence Were to Cause Our Extinction, Would That Be So Bad?


Bibliographic Details
Main Author: Émile P. Torres
Format: Article
Language: English
Published: Programmes de bioéthique, École de santé publique de l'Université de Montréal 2025-07-01
Series: Canadian Journal of Bioethics
Online Access: https://cjb-rcb.ca/index.php/cjb-rcb/article/view/782
Description
Summary: This article examines whether human extinction brought about by a “value-misaligned” artificial superintelligence (ASI) would be bad, and for what reasons. The question, I contend, is deceptively complex. I proceed by outlining the three main positions within Existential Ethics, i.e., the study of the ethical and evaluative implications of human extinction. These are equivalence views, further-loss views, and pro-extinctionist views. I then show how exponents of each view would evaluate a scenario in which humanity goes extinct due to ASI. Although there are some points of agreement, these three positions diverge in significant ways, most of which have not been adequately explored in the philosophical literature.
ISSN: 2561-4665