Domain Adaptation for Underwater Image Enhancement via Content and Style Separation

Bibliographic Details
Main Authors: Yu-Wei Chen, Soo-Chang Pei
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9866748/
Description
Summary: Underwater images suffer from color cast, low contrast, and haze, which degrade high-level vision applications. Recent learning-based methods demonstrate impressive performance on underwater image enhancement; however, most of these works use synthetic paired data for supervised learning and ignore the domain gap to real-world data. Although some works leverage transfer learning and domain adaptation to alleviate this problem, they aim to minimize the latent discrepancy between synthetic and real-world data, which makes the latent space hard to interpret and manipulate. To solve this problem, we propose a domain adaptation framework for underwater image enhancement via content and style separation: we separate the encoded features into a content latent and a style latent, distinguish style latents from different domains, and perform domain adaptation and image enhancement in the latent space. Our model provides a user-interaction interface that continuously adjusts the enhancement level through latent manipulation. Experiments on various public real-world underwater benchmarks demonstrate that the proposed framework performs domain adaptation for underwater image enhancement and outperforms various state-of-the-art underwater image enhancement algorithms both quantitatively and qualitatively. The model and source code will be available at https://github.com/fordevoted/UIESS.
ISSN: 2169-3536
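
The summary describes separating an encoder's output into a content latent and a style latent, then adjusting the enhancement level continuously by manipulating the style latent. Below is a minimal PyTorch sketch of that general idea; all module names, dimensions, and the style-interpolation scheme are illustrative assumptions, not the authors' implementation (see the linked repository for the official code).

```python
# Sketch of content/style separation with continuous enhancement control.
# Hypothetical architecture for illustration only.
import torch
import torch.nn as nn

class ContentStyleEncoder(nn.Module):
    """Encodes an image into a spatial content latent and a global style latent."""
    def __init__(self, in_ch=3, content_ch=64, style_dim=8):
        super().__init__()
        self.content = nn.Sequential(  # spatial content features
            nn.Conv2d(in_ch, content_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(content_ch, content_ch, 3, padding=1),
        )
        self.style = nn.Sequential(  # global style vector
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, style_dim),
        )

    def forward(self, x):
        return self.content(x), self.style(x)

class Decoder(nn.Module):
    """Reconstructs an image from a content latent conditioned on a style latent."""
    def __init__(self, content_ch=64, style_dim=8, out_ch=3):
        super().__init__()
        self.film = nn.Linear(style_dim, 2 * content_ch)  # style -> (scale, shift)
        self.to_img = nn.Conv2d(content_ch, out_ch, 3, padding=1)

    def forward(self, c, s):
        scale, shift = self.film(s).chunk(2, dim=1)
        c = c * scale[..., None, None] + shift[..., None, None]
        return torch.sigmoid(self.to_img(c))

# Continuous enhancement by latent manipulation: interpolate between the
# input's own (underwater) style and a clean-domain style vector.
enc, dec = ContentStyleEncoder(), Decoder()
x = torch.rand(1, 3, 128, 128)            # stand-in for an underwater image
content, style_uw = enc(x)
style_clean = torch.zeros_like(style_uw)  # stand-in for a learned clean-domain style
alpha = 0.7                               # user-chosen enhancement level in [0, 1]
style_mix = (1 - alpha) * style_uw + alpha * style_clean
enhanced = dec(content, style_mix)
```

Sweeping `alpha` from 0 to 1 would trace a continuous path in style space from the input's degraded appearance toward the clean-domain appearance, which is one plausible reading of the "continuous change by latent manipulation" interface the summary mentions.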