A hybrid model based on transformer and Mamba for enhanced sequence modeling
Abstract: State Space Models (SSMs) have made remarkable strides in language modeling in recent years. With the introduction of Mamba, these models have garnered increased attention, often surpassing Transformers in specific areas. Nevertheless, despite Mamba’s unique strengths, Transformers remain e...
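The abstract in this record is truncated and only names the general idea of combining Transformer attention with a Mamba-style state-space layer; the paper's actual architecture is not reproduced here. Purely as an illustration of that general hybrid idea, the following is a minimal PyTorch sketch that interleaves a simplified, Mamba-like diagonal state-space recurrence with standard self-attention. The names `SimpleSSMBlock` and `HybridBlock`, the decay-based recurrence, and the block ordering are assumptions made for this sketch, not the authors' method.

```python
# Hypothetical sketch of a hybrid Transformer + Mamba-style block.
# This is NOT the paper's architecture, only an illustration of the idea.
import torch
import torch.nn as nn


class SimpleSSMBlock(nn.Module):
    """Toy diagonal state-space layer: a stand-in for a real Mamba block."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_state)
        self.out_proj = nn.Linear(d_state, d_model)
        # Learnable per-channel decay, kept in (0, 1) via a sigmoid.
        self.decay = nn.Parameter(torch.zeros(d_state))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        u = self.in_proj(x)                        # (B, L, d_state)
        a = torch.sigmoid(self.decay)              # (d_state,)
        h = torch.zeros(u.size(0), u.size(2), device=u.device)
        outs = []
        for t in range(u.size(1)):                 # sequential scan over time
            h = a * h + (1 - a) * u[:, t]          # simple linear recurrence
            outs.append(h)
        y = torch.stack(outs, dim=1)               # (B, L, d_state)
        return self.out_proj(y)


class HybridBlock(nn.Module):
    """One hybrid layer: SSM sub-block followed by a self-attention sub-block."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.ssm = SimpleSSMBlock(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.ssm(self.norm1(x))            # residual SSM path
        q = self.norm2(x)
        attn_out, _ = self.attn(q, q, q)           # residual attention path
        return x + attn_out


if __name__ == "__main__":
    model = HybridBlock(d_model=64)
    tokens = torch.randn(2, 32, 64)                # (batch, seq_len, d_model)
    print(model(tokens).shape)                     # torch.Size([2, 32, 64])
```

In this sketch the SSM sub-block handles long-range sequential mixing with linear cost per step, while the attention sub-block provides content-based mixing; how the actual paper balances or orders the two components is described in the full article at the DOI below.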
| Main Authors: | Xiaocui Zhu, Qunsheng Ruan, Sai Qian, Miaohui Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-04-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-87574-8 |
Similar Items
- CDA-mamba: cross-directional attention mamba for enhanced 3D medical image segmentation
  by: Jiashu Xu, et al. Published: (2025-07-01)
- ViTMa: A Novel Hybrid Vision Transformer and Mamba for Kinship Recognition in Indonesian Facial Micro-Expressions
  by: Ike Fibriani, et al. Published: (2024-01-01)
- VMDU-net: a dual encoder multi-scale fusion network for polyp segmentation with Vision Mamba and Cross-Shape Transformer integration
  by: Peng Li, et al. Published: (2025-06-01)
- MambaPose: A Human Pose Estimation Based on Gated Feedforward Network and Mamba
  by: Jianqiang Zhang, et al. Published: (2024-12-01)
- Mamba and cross-channel aggregation for efficient multispectral image compression
  by: Jingang Wang, et al. Published: (2025-08-01)