Large Language Model-Driven 3D Hyper-Realistic Interactive Intelligent Digital Human System

Digital technologies are undergoing comprehensive integration across diverse domains and processes of the human economy, politics, culture, society, and ecological civilization. This integration brings forth novel concepts, formats, and models. In the context of the accelerated convergence between the digital and physical worlds, a discreet yet momentous transformation is being steered by artificial intelligence generated content (AIGC). This transformative force quietly reshapes and potentially disrupts the established patterns of digital content production and consumption. Consequently, it holds the potential to significantly enhance the digital lives of individuals and stands as an indispensable impetus for the comprehensive transition towards a new era of digital civilization in the future. This paper presents our award-winning project, a large language model (LLM)-powered 3D hyper-realistic interactive digital human system that employs automatic speech recognition (ASR), natural language processing (NLP), and emotional text-to-speech (TTS) technologies. Our system is designed with a modular concept and client–server (C/S) distributed architecture that emphasizes the separation of components for scalable development and efficient progress. The paper also discusses the use of computer graphics (CG) and artificial intelligence (AI) in creating photorealistic 3D environments for meta humans, and explores potential applications for this technology.

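The abstract describes a modular, client–server pipeline in which ASR, an LLM-based dialogue component, and emotional TTS are kept as separate, replaceable components. The sketch below is purely illustrative and is not taken from the paper: the class and method names (SpeechRecognizer, DialogueLLM, EmotionalSynthesizer, DigitalHumanServer) are assumptions used only to show how such a separation of concerns might look; a real system would wrap actual ASR, LLM, and TTS services behind these interfaces.

```python
# Illustrative sketch only: module and method names are hypothetical and are not
# taken from the paper; they merely mirror the ASR -> LLM -> emotional TTS flow
# and the client-server separation described in the abstract.
from dataclasses import dataclass


@dataclass
class Utterance:
    text: str          # recognized or generated text
    emotion: str = ""  # emotion label attached for expressive TTS


class SpeechRecognizer:
    """Stand-in for an ASR component (server side)."""
    def transcribe(self, audio: bytes) -> Utterance:
        # A real module would decode the audio; here we return a canned result.
        return Utterance(text="Hello, who are you?")


class DialogueLLM:
    """Stand-in for the LLM/NLP dialogue component."""
    def reply(self, user: Utterance) -> Utterance:
        # A real module would query a language model; we fake a short answer.
        return Utterance(text="I am a 3D interactive digital human.", emotion="happy")


class EmotionalSynthesizer:
    """Stand-in for an emotional TTS component voicing the 3D avatar."""
    def synthesize(self, reply: Utterance) -> bytes:
        # A real module would render expressive audio; we return placeholder bytes.
        return f"[{reply.emotion}] {reply.text}".encode("utf-8")


class DigitalHumanServer:
    """Server-side orchestration: each stage is an independent module."""
    def __init__(self) -> None:
        self.asr = SpeechRecognizer()
        self.llm = DialogueLLM()
        self.tts = EmotionalSynthesizer()

    def handle_request(self, audio_from_client: bytes) -> bytes:
        heard = self.asr.transcribe(audio_from_client)   # speech -> text
        answer = self.llm.reply(heard)                   # text -> text + emotion
        return self.tts.synthesize(answer)               # text + emotion -> audio


if __name__ == "__main__":
    # The thin client only ships audio to the server and plays back the response.
    server = DigitalHumanServer()
    audio_out = server.handle_request(b"\x00\x01fake-pcm-frames")
    print(audio_out.decode("utf-8"))
```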

Bibliographic Details
Main Authors: Yanying Song, Wei Xiong
Format: Article
Language: English
Published: MDPI AG, 2025-03-01
Series: Sensors
Subjects: digital twins, meta humans, automatic speech recognition (ASR), natural language processing (NLP), large language model (LLM), emotional text-to-speech (TTS)
Online Access: https://www.mdpi.com/1424-8220/25/6/1855
author Yanying Song
Wei Xiong
collection DOAJ
description Digital technologies are undergoing comprehensive integration across diverse domains and processes of the human economy, politics, culture, society, and ecological civilization. This integration brings forth novel concepts, formats, and models. In the context of the accelerated convergence between the digital and physical worlds, a discreet yet momentous transformation is being steered by artificial intelligence generated content (AIGC). This transformative force quietly reshapes and potentially disrupts the established patterns of digital content production and consumption. Consequently, it holds the potential to significantly enhance the digital lives of individuals and stands as an indispensable impetus for the comprehensive transition towards a new era of digital civilization in the future. This paper presents our award-winning project, a large language model (LLM)-powered 3D hyper-realistic interactive digital human system that employs automatic speech recognition (ASR), natural language processing (NLP), and emotional text-to-speech (TTS) technologies. Our system is designed with a modular concept and client–server (C/S) distributed architecture that emphasizes the separation of components for scalable development and efficient progress. The paper also discusses the use of computer graphics (CG) and artificial intelligence (AI) in creating photorealistic 3D environments for meta humans, and explores potential applications for this technology.
format Article
id doaj-art-ff5f72d4186341bf817d918eeabd05af
institution OA Journals
issn 1424-8220
language English
publishDate 2025-03-01
publisher MDPI AG
record_format Article
series Sensors
doi 10.3390/s25061855
citation Sensors, Vol. 25, No. 6, Article 1855, published 2025-03-01
affiliation Yanying Song: Detroit Green Technology Institute, Hubei University of Technology, Wuhan 430068, China
affiliation Wei Xiong: School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, China
title Large Language Model-Driven 3D Hyper-Realistic Interactive Intelligent Digital Human System
topic digital twins
meta humans
automatic speech recognition (ASR)
natural language processing (NLP)
large language model (LLM)
emotional text-to-speech (TTS)
url https://www.mdpi.com/1424-8220/25/6/1855