Advanced artificial intelligence at a corporate responsibility crossroads: employees as risk management advocates

Bibliographic Details
Main Author: Thomas A. Hemphill
Format: Article
Language: English
Published: Emerald Publishing 2025-06-01
Series: Journal of Ethics in Entrepreneurship and Technology
Online Access: https://www.emerald.com/insight/content/doi/10.1108/JEET-01-2025-0003/full/pdf
Description
Summary: Purpose – The purpose of this study is to highlight how tech industry employees and artificial intelligence (AI) scientists are expressing concerns that AI companies have financial incentives too strong to permit effective self-regulatory oversight, and that current corporate governance structures cannot change this situation. Design/methodology/approach – This viewpoint takes a narrative approach to describing proposed AI principles for addressing AI risk management (safety) issues. Findings – This viewpoint recommends that the USA adopt a complementary approach: a private governance framework addressing AI safety concerns, in which employees play an important role in developing a safe, advanced AI product for commercialization, and a public governance phase of oversight, in which an independent federal agency administratively tests products against prescribed safety thresholds. Originality/value – This viewpoint proposes a private/public risk management approach to developing a safe, advanced AI commercial product.
ISSN: 2633-7436; 2633-7444