Efficient Domain Knowledge Injection for Bridging the Gap Between Generalized Large Vision Models and Specialized Fabric Defect Tasks


Bibliographic Details
Main Authors: Zhewei Chen, Wai Keung Wong, Zuofeng Zhong, Jinpiao Liao, Ying Qu
Format: Article
Language: English
Published: Taylor & Francis Group, 2024-12-01
Series: Journal of Natural Fibers
Subjects:
Online Access: https://www.tandfonline.com/doi/10.1080/15440478.2024.2401525
Description
Summary:The scarcity of high-quality annotated data poses a significant challenge to the application of deep learning in fabric defect tasks, limiting the generalization and segmentation performance of existing models and impeding their capability to address the complexity of various fabric types and defects. To overcome these obstacles, this study introduces an innovative method to infuse specialized knowledge of fabric defects into the Segment Anything Model (SAM), a large-scale visual model. By introducing and training a unique set of fabric defect-related parameters, this approach seamlessly integrates domain-specific knowledge into SAM without the need for extensive modifications to the preexisting model parameters. The revamped SAM model leverages generalized image understanding learned from large-scale natural image datasets while incorporating fabric defect-specific knowledge, ensuring its proficiency in fabric defect segmentation tasks. The experimental results reveal a significant improvement in the model’s segmentation performance, attributable to this novel amalgamation of generic and fabric-specific knowledge. When benchmarking against popular existing segmentation models across three datasets, our proposed model demonstrates a substantial leap in performance. Its impressive results in cross-dataset comparisons and few-shot learning experiments further demonstrate its potential for practical applications in textile quality control.
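The summary describes injecting domain knowledge by training a small set of new, fabric-defect-specific parameters while leaving the pretrained SAM weights untouched. The paper's exact parameterization is not given here, so the following is only a generic toy sketch of that adapter-style idea, using a hypothetical low-rank update on a single frozen linear layer (all names and shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "generalist" weights, standing in for a pretrained SAM layer.
d_in, d_out, rank = 8, 8, 2
W = rng.standard_normal((d_in, d_out))   # pretrained, never updated

# New task-specific parameters: a small low-rank pair (A, B) is the only
# thing trained for the fabric-defect task. A starts at zero so the
# adapted layer initially reproduces the pretrained layer exactly.
A = np.zeros((d_in, rank))               # trainable
B = rng.standard_normal((rank, d_out))   # trainable

def adapted_layer(x):
    """Pretrained output plus a learned task-specific correction."""
    return x @ W + (x @ A) @ B

x = rng.standard_normal((4, d_in))
# Before any training, the injected parameters are inert:
assert np.allclose(adapted_layer(x), x @ W)
```

During fine-tuning, gradients would flow only into `A` and `B` (a few hundred values here) rather than the full `W`, which is what lets the model keep its generalized image understanding while absorbing defect-specific knowledge.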
ISSN: 1544-0478, 1544-046X