Semantic‐aware visual consistency network for fused image harmonisation
Main Authors: , , ,
Format: Article
Language: English
Published: Wiley, 2023-06-01
Series: IET Signal Processing
Online Access: https://doi.org/10.1049/sil2.12219
Summary: With a focus on integrated sensing, communication, and computation (ISCC) systems, multiple sensor devices collect information about different objects and upload it to data-processing servers for fusion. Appearance gaps in composite images caused by distinct capture conditions can degrade visual quality and affect the accuracy of downstream image processing and analysis. The authors propose a fused-image harmonisation method that aims to eliminate appearance gaps among different objects. First, they modify a lightweight image-harmonisation backbone and combine it with a pretrained segmentation model, in which the extracted semantic features are fed to both the encoder and the decoder. Then they implement a semantic-related background-to-foreground style transfer by leveraging spatial separation adaptive instance normalisation (SAIN). To better preserve the input semantic information, they design a simple and effective semantic-aware adaptive denormalisation (SADE) module. Experimental results demonstrate that the proposed method achieves competitive performance on the iHarmony4 dataset and benefits the harmonisation of fused images with incompatible appearance gaps.
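The background-to-foreground style transfer described in the summary builds on adaptive instance normalisation (AdaIN): foreground features are re-normalised so their per-channel statistics match the background's. The paper's SAIN and SADE variants add spatial separation and semantic conditioning that the abstract does not detail, so the sketch below shows only the generic AdaIN statistic-matching core; the function name, shapes, and test values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    # Generic AdaIN (not the paper's SAIN/SADE): shift each channel of the
    # content features to the per-channel mean/std of the style features.
    # content, style: float arrays of shape (C, H, W); channel stats are
    # taken over the spatial dimensions.
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

# Background-to-foreground transfer: treat background features as the
# "style" and restyle the pasted foreground region to match it.
rng = np.random.default_rng(0)
fg = rng.normal(5.0, 2.0, size=(3, 8, 8))   # hypothetical foreground features
bg = rng.normal(0.0, 1.0, size=(3, 8, 8))   # hypothetical background features
out = adain(fg, bg)
```

After the transfer, `out` carries the foreground's spatial pattern but the background's per-channel statistics, which is the mechanism the harmonisation step exploits.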
ISSN: 1751-9675, 1751-9683