Enhancing Task-Incremental Learning via a Prompt-Based Hybrid Convolutional Neural Networks (CNNs)-Vision Transformer (ViT) Framework
Artificial neural network (ANN) models are widely used in various fields such as image classification, multi-object detection, intent prediction, military applications, and natural language processing. However, artificial intelligence (AI) models for continual learning (CL) are not yet mature, and “catastrophic forgetting (CF)” is still a typical problem.
| Main Authors: | Zuomin Yang, Anis Salwa Mohd Khairuddin, Joon Huang Chuah, Wei Ru Wong, Xin Xu, Hafiz Muhammad Fahad Noman, Qiyuan Qin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Artificial neural networks; continual learning; hybrid neural networks; synaptic plasticity; vision transformer |
| Online Access: | https://ieeexplore.ieee.org/document/11121184/ |
| author | Zuomin Yang; Anis Salwa Mohd Khairuddin; Joon Huang Chuah; Wei Ru Wong; Xin Xu; Hafiz Muhammad Fahad Noman; Qiyuan Qin |
|---|---|
| collection | DOAJ |
| description | Artificial neural network (ANN) models are widely used in various fields such as image classification, multi-object detection, intent prediction, military applications, and natural language processing. However, artificial intelligence (AI) models for continual learning (CL) are not yet mature, and “catastrophic forgetting (CF)” is still a typical problem. The study of biological neural networks (BNNs) and ANN models still needs further exploration. Therefore, this paper mainly explores the pre- and postsynaptic structures, the synaptic cleft, the early and late stages of long-term potentiation, and the effects of neurotransmitters on synaptic excitation and inhibition. We emphasize the necessity of integrating biological neural systems and ANN models in learning and memory. Based on the “Prompt Pool”, this paper designs a hybrid neural network (HNN) architecture that integrates convolutional neural networks (CNNs), vision transformers (ViT), prompt pools, and adapters to alleviate the “CF” problem in task incremental learning (TIL). Compared with the existing ViT and prompt pool architecture, this method shows higher performance in final task training and also shows certain advantages in the persistence of TIL. In the future, based on the principles of biological neuroscience, we will further apply the HNN model to image classification and multi-object detection tasks in autonomous driving. By gaining a deeper understanding of the BNN mechanisms, we will develop efficient HNN models that can adapt to dynamic environments and provide new solutions for CL. |
| format | Article |
| id | doaj-art-2f267e4a57e6426d9a20f3fa5d5aba87 |
| institution | Kabale University |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| doi | 10.1109/ACCESS.2025.3597020 |
| volume | 13 |
| pages | 145223-145242 |
| orcid | Zuomin Yang (0000-0003-4776-4887); Anis Salwa Mohd Khairuddin (0000-0002-9873-4779); Joon Huang Chuah (0000-0001-9058-3497); Wei Ru Wong (0000-0002-7643-6172); Xin Xu (0000-0002-9194-1249); Hafiz Muhammad Fahad Noman (0000-0001-8507-5383); Qiyuan Qin (0009-0005-2736-7667) |
| affiliation | Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia (all authors except Xin Xu); Department of Mechanical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia (Xin Xu) |
| title | Enhancing Task-Incremental Learning via a Prompt-Based Hybrid Convolutional Neural Networks (CNNs)-Vision Transformer (ViT) Framework |
| topic | Artificial neural networks continual learning hybrid neural networks synaptic plasticity vision transformer |
| url | https://ieeexplore.ieee.org/document/11121184/ |