A Benchmark for Multi-Task Evaluation of Pretrained Models in Medical Report Generation
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | EDP Sciences, 2025-01-01 |
| Series: | BIO Web of Conferences |
| Online Access: | https://www.bio-conferences.org/articles/bioconf/pdf/2025/25/bioconf_icbb2025_03010.pdf |
| Summary: | Medical report generation (MRG) for medical images has become increasingly important due to the growing workload of radiologists in hospitals. However, current studies in the MRG field predominantly focus on specific modalities or on training foundation models, with a notable lack of research evaluating the impact of pre-trained models on performance across different tasks, particularly their cross-task capabilities. This study introduces a novel benchmark for medical multi-task learning that encompasses four medical modalities: CT, X-ray, ultrasound, and pathology. We believe this benchmark can provide a robust comparative basis for future research in this field. More importantly, we conduct an in-depth analysis comparing modality-specific pre-trained models, natural domain pre-trained models, and medical foundation pre-trained models. Our findings indicate that medical foundation pre-trained models generally outperform other pre-trained models across all tasks, while natural domain pre-trained models exhibit superior performance in cross-modality tasks. Our source code is available at https://github.com/Reckless0/MT-Med.git. |
| ISSN: | 2117-4458 |