Multi-Modal Dynamic Fusion for Defect Detection in Electronic Products: A Novel Approach Based on Energy and Deep Learning
As electronic products continue to evolve in complexity, maintaining stringent quality standards during manufacturing presents mounting challenges. Conventional defect detection approaches, which typically depend on a single modality, often fall short in both efficiency and reliability. To address these shortcomings, this study introduces a dynamic multi-modal fusion framework that leverages data from sensors, visual imagery, and component attributes to enhance detection performance. Specifically, Transformer architectures are employed for sensor data analysis, Convolutional Neural Networks (CNNs) are applied to process image data, and Multi-Layer Perceptrons (MLPs) are used to represent part-level features. A distinguishing element of this approach is an energy-based late-stage fusion mechanism that adaptively modulates each modality’s influence according to its uncertainty level. Empirical evaluations demonstrate that the proposed model achieves superior results across multiple performance metrics—including accuracy, precision, recall, and F1 score—compared to conventional and unimodal systems. These findings underscore the model’s potential in advancing practical defect detection and quality assurance in manufacturing environments.
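The abstract describes the fusion scheme only at a high level. The following minimal PyTorch sketch illustrates one plausible reading of energy-based late fusion, assuming a Transformer branch for sensor sequences, a CNN branch for inspection images, and an MLP branch for component attributes, each producing class logits; a free-energy score E(z) = -log Σ_k exp(z_k) serves as a per-modality confidence proxy, and the fused prediction is a softmax-weighted combination that favours low-energy (more confident) branches. All module names, layer sizes, and the specific energy definition are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of energy-weighted late fusion across three modality branches.
# Assumes PyTorch; architectures, dimensions, and the free-energy definition
# (negative log-sum-exp of logits) are illustrative, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SensorTransformer(nn.Module):
    """Transformer encoder over a sensor time series -> class logits."""
    def __init__(self, n_features: int, n_classes: int, d_model: int = 64):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (B, T, n_features)
        h = self.encoder(self.proj(x)).mean(dim=1)
        return self.head(h)


class ImageCNN(nn.Module):
    """Small CNN over inspection images -> class logits."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        return self.head(self.features(x).flatten(1))


class PartMLP(nn.Module):
    """MLP over tabular component attributes -> class logits."""
    def __init__(self, n_attrs: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_attrs, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):                      # x: (B, n_attrs)
        return self.net(x)


def energy(logits: torch.Tensor) -> torch.Tensor:
    """Free-energy score per sample: low energy ~ confident prediction."""
    return -torch.logsumexp(logits, dim=-1)


class EnergyLateFusion(nn.Module):
    """Late fusion: weight each modality's logits by its negated energy."""
    def __init__(self, n_features: int, n_attrs: int, n_classes: int):
        super().__init__()
        self.sensor = SensorTransformer(n_features, n_classes)
        self.image = ImageCNN(n_classes)
        self.part = PartMLP(n_attrs, n_classes)

    def forward(self, sensor_x, image_x, part_x):
        logits = torch.stack(
            [self.sensor(sensor_x), self.image(image_x), self.part(part_x)], dim=1
        )                                                     # (B, 3, n_classes)
        # Lower energy (higher confidence) -> larger fusion weight.
        weights = F.softmax(-energy(logits), dim=1)           # (B, 3)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)    # (B, n_classes)


if __name__ == "__main__":
    model = EnergyLateFusion(n_features=8, n_attrs=12, n_classes=2)
    fused = model(torch.randn(4, 50, 8), torch.randn(4, 3, 64, 64), torch.randn(4, 12))
    print(fused.shape)  # torch.Size([4, 2])
```

Converting negated energy into softmax weights is one common way to turn an uncertainty proxy into adaptive fusion weights; the paper's actual modulation rule may differ in form and in how the energy is calibrated per modality.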
| Main Authors: | Yulin Liu, Yang Gao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Multi-modal fusion; defect detection; transformer; convolutional neural networks; quality control |
| Online Access: | https://ieeexplore.ieee.org/document/11062584/ |
| Field | Value |
|---|---|
| author | Yulin Liu; Yang Gao |
| collection | DOAJ |
| description | As electronic products continue to evolve in complexity, maintaining stringent quality standards during manufacturing presents mounting challenges. Conventional defect detection approaches, which typically depend on a single modality, often fall short in both efficiency and reliability. To address these shortcomings, this study introduces a dynamic multi-modal fusion framework that leverages data from sensors, visual imagery, and component attributes to enhance detection performance. Specifically, Transformer architectures are employed for sensor data analysis, Convolutional Neural Networks (CNNs) are applied to process image data, and Multi-Layer Perceptrons (MLPs) are used to represent part-level features. A distinguishing element of this approach is an energy-based late-stage fusion mechanism that adaptively modulates each modality’s influence according to its uncertainty level. Empirical evaluations demonstrate that the proposed model achieves superior results across multiple performance metrics—including accuracy, precision, recall, and F1 score—compared to conventional and unimodal systems. These findings underscore the model’s potential in advancing practical defect detection and quality assurance in manufacturing environments. |
| format | Article |
| id | doaj-art-1527e778cd1841a1bbe5a5be5af24491 |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| doi | 10.1109/ACCESS.2025.3584551 |
| volume / pages | Vol. 13, pp. 118565-118573 (2025) |
| ieee document | 11062584 |
| orcid | Yulin Liu: https://orcid.org/0009-0008-7862-5840 |
| affiliation (Yulin Liu) | College of Economics and Management, Business Administration, University of Electronic Science and Technology (UESTC), Chengdu, Sichuan, China |
| affiliation (Yang Gao) | College of Information and Communication Engineering, Electronics and Information, University of Electronic Science and Technology (UESTC), Chengdu, Sichuan, China |
| title | Multi-Modal Dynamic Fusion for Defect Detection in Electronic Products: A Novel Approach Based on Energy and Deep Learning |
| topic | Multi-modal fusion; defect detection; transformer; convolutional neural networks; quality control |
| url | https://ieeexplore.ieee.org/document/11062584/ |