Survey of Multimodal Federated Learning: Exploring Data Integration, Challenges, and Future Directions

Bibliographic Details
Main Authors: Mumin Adam, Abdullatif Albaseer, Uthman Baroudi, Mohamed Abdallah
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Open Journal of the Communications Society
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10938626/
Description
Summary: The rapidly expanding demand for intelligent wireless applications and the Internet of Things (IoT) requires advanced system designs to handle multimodal data effectively while ensuring user privacy and data security. Traditional machine learning (ML) models rely on centralized architectures, which, while powerful, often present significant privacy risks due to the centralization of sensitive data. Federated Learning (FL) is a promising decentralized alternative for addressing these issues. However, FL predominantly handles unimodal data, which limits its applicability in environments where devices collect and process various data types such as text, images, and sensor outputs. To address this limitation, Multimodal FL (MMFL) integrates multiple data modalities, enabling a richer and more holistic understanding of data. In this survey, we explore the challenges and advancements in MMFL, including data representation, fusion techniques, and cross-modal learning strategies. We present a comprehensive taxonomy of MMFL, outlining critical challenges such as modality imbalance, fusion complexity, and security concerns. Additionally, we highlight the role of transformers in MMFL, leveraging their powerful attention mechanisms to process multimodal data in a federated setting. Finally, we discuss various applications of MMFL, including healthcare, human activity recognition, and emotion recognition, and propose future research directions for improving the scalability and robustness of MMFL systems in real-world scenarios.
ISSN:2644-125X
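The MMFL setting the abstract describes, clients fusing several modalities locally while a server aggregates their models, can be sketched in a few lines. This is a minimal illustrative sketch, assuming early fusion by feature concatenation and FedAvg-style weight averaging; all names, dimensions, and data are hypothetical and not taken from the survey.

```python
# Minimal sketch of one Multimodal Federated Learning (MMFL) round:
# each client fuses two modalities (e.g. text and image features) by
# concatenation, takes local gradient steps on a linear model, and the
# server averages the resulting weights (FedAvg-style aggregation).

def fuse(text_feats, image_feats):
    """Early fusion: concatenate per-modality feature vectors."""
    return text_feats + image_feats

def local_step(w, x, y, lr=0.1):
    """One gradient step for squared loss on a linear model."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [2 * (pred - y) * xi for xi in x]
    return [wi - lr * gi for wi, gi in zip(w, grad)]

def fedavg(client_weights):
    """Server step: element-wise average of client weight vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Two clients, each holding fused (text + image) features and a label.
global_w = [0.0, 0.0, 0.0, 0.0]
clients = [
    (fuse([1.0, 0.5], [0.2, 0.1]), 1.0),
    (fuse([0.3, 0.8], [0.6, 0.4]), 0.0),
]
for _ in range(20):  # a few communication rounds
    updates = [local_step(list(global_w), x, y) for x, y in clients]
    global_w = fedavg(updates)
```

The sketch deliberately omits the issues the survey surveys, modality imbalance, missing modalities at some clients, and attention-based fusion, which replace the naive concatenation and plain averaging shown here.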