P-CNN: Percept-CNN for semantic segmentation
Image segmentation remains a fundamental challenge in computer vision. Convolutional Neural Networks (CNNs) have achieved significant success in this field, yet the conventional approach has limitations: accurate, pixel-wise image annotation is time-consuming and demands considerable human effort. The proposed method, Percept-CNN (P-CNN), addresses these problems by exploiting percepts, the pixels responsible for the highest activations at each level. Percepts are extracted from each layer during forward propagation and passed on to the subsequent layers, so the model focuses only on the useful visual information. The proposed Percept Convolution can potentially eliminate the complex and time-consuming task of image annotation without affecting segmentation accuracy. Because the model attends only to salient visual information, it extracts fewer redundant features that do not contribute to the final goal, making it more robust, accurate and efficient. The proposed model performed semantic segmentation without pixel-wise annotations at an accuracy of 67% when tested on the Oxford-IIIT Pet dataset.
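The abstract describes percepts only at a high level (the highest-activation positions in each layer's feature map, propagated forward). The sketch below is not the authors' implementation, which is not part of this record; it is a minimal PyTorch illustration of one way such a percept step could look, assuming a simple per-channel top-k activation mask. The class name `PerceptConv2d` and the `percept_ratio` parameter are hypothetical.

```python
# Illustrative sketch only: the masking rule, class name and parameter are assumptions,
# not the published P-CNN method. The idea mirrored here: after each convolution, keep
# only the "percepts" -- the spatial positions with the highest activations -- and zero
# out the rest before passing the feature map to the next layer.
import torch
import torch.nn as nn


class PerceptConv2d(nn.Module):
    """Convolution followed by a top-k activation mask (hypothetical percept step)."""

    def __init__(self, in_ch, out_ch, percept_ratio=0.25):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.percept_ratio = percept_ratio  # fraction of spatial positions kept per channel

    def forward(self, x):
        feat = torch.relu(self.conv(x))                 # B x C x H x W
        b, c, h, w = feat.shape
        flat = feat.view(b, c, h * w)
        k = max(1, int(self.percept_ratio * h * w))
        # k-th largest activation per channel; positions below it are suppressed
        thresh = flat.topk(k, dim=2).values[..., -1:]   # B x C x 1
        mask = (flat >= thresh).float().view(b, c, h, w)
        return feat * mask                              # only percepts propagate forward


# Minimal usage: stack two percept convolutions and inspect the sparse output.
if __name__ == "__main__":
    net = nn.Sequential(PerceptConv2d(3, 16), PerceptConv2d(16, 32))
    out = net(torch.randn(1, 3, 64, 64))
    print(out.shape, (out != 0).float().mean().item())  # sparsity roughly tracks percept_ratio
```

Masking rather than cropping keeps tensor shapes fixed, so a layer like this could drop into an existing CNN backbone without changing downstream shapes.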
Main Authors: | Deepak Hegde, G. N. Balaji |
---|---|
Format: | Article |
Language: | English |
Published: | Taylor & Francis Group, 2024-12-01 |
Series: | Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization |
Subjects: | Image segmentation; CNN; P-CNN; percepts |
Online Access: | https://www.tandfonline.com/doi/10.1080/21681163.2024.2387458 |
author | Deepak Hegde; G. N. Balaji (both at School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India) |
---|---|
collection | DOAJ |
format | Article |
id | doaj-art-aff6f82c945741eca238d1d9913aa0b7 |
institution | Kabale University |
issn | 2168-1163; 2168-1171 |
language | English |
publishDate | 2024-12-01 |
publisher | Taylor & Francis Group |
series | Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization |
title | P-CNN: Percept-CNN for semantic segmentation |
topic | Image segmentation; CNN; P-CNN; percepts |
url | https://www.tandfonline.com/doi/10.1080/21681163.2024.2387458 |