Beyond human-in-the-loop: Sensemaking between artificial intelligence and human intelligence collaboration

Bibliographic Details
Main Authors: Xinyue Hao, Emrah Demir, Daniel Eyers
Format: Article
Language: English
Published: Elsevier 2025-12-01
Series: Sustainable Futures
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2666188825007166
Description
Summary: In contemporary operational environments, decision-making is increasingly shaped by the interaction between intuitive, fast-acting System 1 processes and slow, analytical System 2 reasoning. Human intelligence (HI) navigates fluidly between these cognitive modes, enabling adaptive responses to both structured and ambiguous situations. In parallel, artificial intelligence (AI) has rapidly evolved to support tasks typically associated with System 2 reasoning, such as optimization, forecasting, and rule-based analysis, with speed and precision that in certain structured contexts can exceed human capabilities. To investigate how AI and HI collaborate in practice, we conducted 28 in-depth interviews across 9 leading firms recognized as benchmarks in AI adoption within operations and supply chain management (OSCM). These interviews targeted key HI agents (operations managers, data scientists, and algorithm engineers) and were situated within carefully selected, AI-rich scenarios. Using a sensemaking framework and cognitive mapping methodology, we explored how HI interprets and interacts with AI across the pre-development, deployment, and post-development phases. Our findings reveal that collaboration is a dynamic and co-constitutive process of institutional co-production, structured by epistemic asymmetry, symbolic accountability, and infrastructural interdependence. While AI contributes speed, scale, and pattern recognition in routine, structured environments, human actors provide ethical oversight, contextual judgment, and strategic interpretation, which are particularly vital in uncertain or ethically charged contexts. Moving beyond static models such as "human-in-the-loop" or "AI assistance," this study offers a novel framework that conceptualizes AI and HI collaboration as a sociotechnical system. Theoretically, it bridges fragmented literatures in AI, cognitive science, and institutional theory. Practically, it offers actionable insights for designing collaborative infrastructures that are both ethically aligned and organizationally resilient. As AI ecosystems grow more complex and decentralized, our findings highlight the need for reflexive governance mechanisms to support adaptive, interpretable, and accountable human–machine decision-making.
ISSN: 2666-1888