Dual scale light weight cross attention transformer for skin lesion classification.

The incidence of skin cancer is growing rapidly worldwide. Over the past decade, automated diagnosis systems based on image processing and machine learning have been developed. Classical machine learning methods rely on hand-crafted features, which can limit performance. More recently, convolutional neural networks (CNNs) have been applied to dermoscopic images for skin cancer diagnosis, improving performance through their capacity to extract high-dimensional features. However, CNN-based methods do not capture the global correlation of spatial features. In this study, we design a dual-scale lightweight cross-attention vision transformer network (DSCATNet) that applies global attention to high-dimensional spatial features. DSCATNet extracts features from patches of two different sizes and performs cross-attention between them; attention across scales enhances the spatial features by focusing on different parts of the skin lesion. A fusion strategy then combines the spatial features from the two scales, and the enhanced features are fed to a lightweight transformer encoder for global attention. We validated the model on the HAM10000 and PAD datasets and compared its performance with CNN- and ViT-based methods. DSCATNet achieved an average kappa of 95.84% and accuracy of 97.80% on the HAM10000 dataset, and a kappa of 94.56% and precision of 95.81% on the PAD dataset.
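
This record does not include the authors' implementation; the following is a minimal PyTorch sketch of the idea described in the abstract: patch embeddings at two scales exchange information through cross-attention, the enhanced tokens are fused, and a lightweight transformer encoder applies global attention before classification. The patch sizes, embedding width, head count, fusion by concatenation, and the seven-class output (matching HAM10000) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): dual-scale patch embedding with
# cross-attention between scales, fused and passed to a small transformer
# encoder. Patch sizes, embedding dim, and head counts are assumptions.
import torch
import torch.nn as nn


class DualScaleCrossAttention(nn.Module):
    def __init__(self, dim=128, heads=4, patch_small=8, patch_large=16,
                 in_ch=3, num_classes=7):
        super().__init__()
        # Two ViT-style convolutional patch embeddings at different scales.
        self.embed_s = nn.Conv2d(in_ch, dim, kernel_size=patch_small, stride=patch_small)
        self.embed_l = nn.Conv2d(in_ch, dim, kernel_size=patch_large, stride=patch_large)
        # Cross-attention in both directions: small-patch tokens query
        # large-patch tokens and vice versa, so each scale attends to
        # lesion context seen at the other scale.
        self.cross_s2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_l2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Lightweight shared transformer encoder over the fused token sequence.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                     # x: (B, 3, H, W)
        tok_s = self.embed_s(x).flatten(2).transpose(1, 2)    # (B, Ns, dim)
        tok_l = self.embed_l(x).flatten(2).transpose(1, 2)    # (B, Nl, dim)
        # Cross-attention: queries from one scale, keys/values from the other.
        s_ctx, _ = self.cross_s2l(tok_s, tok_l, tok_l)
        l_ctx, _ = self.cross_l2s(tok_l, tok_s, tok_s)
        # Fusion by concatenating the two enhanced token sets (an assumption).
        fused = torch.cat([tok_s + s_ctx, tok_l + l_ctx], dim=1)
        out = self.encoder(fused).mean(dim=1)                 # pool over tokens
        return self.head(out)


if __name__ == "__main__":
    model = DualScaleCrossAttention()
    logits = model(torch.randn(2, 3, 224, 224))  # e.g. 7 HAM10000 classes
    print(logits.shape)                           # torch.Size([2, 7])
```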

Bibliographic Details
Main Authors: Dhirendra Prasad Yadav, Bhisham Sharma, Shivank Chauhan, Julian L Webber, Abolfazl Mehbodniya
Format: Article
Language: English
Published: Public Library of Science (PLoS) 2024-01-01
Series: PLoS ONE
Online Access:https://doi.org/10.1371/journal.pone.0312598
Collection: DOAJ
Institution: OA Journals
ISSN: 1932-6203
Citation: PLoS ONE 19(12): e0312598 (2024)