Dynamic Warping Network for Semantic Video Segmentation

A major challenge for semantic video segmentation is how to exploit spatiotemporal information and produce temporally consistent results for a video sequence. Many previous works utilize precomputed optical flow to warp feature maps across adjacent frames. However, imprecise optical flow and a warping operation without any learnable parameters may not achieve accurate feature warping and bring only a slight improvement.

Bibliographic Details
Main Authors: Jiangyun Li, Yikai Zhao, Xingjian He, Xinxin Zhu, Jing Liu
Format: Article
Language:English
Published: Wiley 2021-01-01
Series:Complexity
Online Access:http://dx.doi.org/10.1155/2021/6680509
author Jiangyun Li
Yikai Zhao
Xingjian He
Xinxin Zhu
Jing Liu
collection DOAJ
description A major challenge for semantic video segmentation is how to exploit spatiotemporal information and produce temporally consistent results for a video sequence. Many previous works utilize precomputed optical flow to warp feature maps across adjacent frames. However, imprecise optical flow and a warping operation without any learnable parameters may not achieve accurate feature warping and bring only a slight improvement. In this paper, we propose a novel framework named Dynamic Warping Network (DWNet) to adaptively warp interframe features and improve the accuracy of warping-based models. First, we design a flow refinement module (FRM) to optimize the precomputed optical flow. Then, we propose a flow-guided convolution (FG-Conv) to achieve adaptive feature warping based on the refined optical flow. Furthermore, we introduce a temporal consistency loss, comprising a feature consistency loss and a prediction consistency loss, to explicitly supervise the warped features instead of relying on simple feature propagation and fusion, which guarantees the temporal consistency of video segmentation. Note that our DWNet adopts extra constraints to improve temporal consistency in the training phase, while no additional computation or postprocessing is required during inference. Extensive experiments show that DWNet achieves consistent improvements over various strong baselines and state-of-the-art accuracy on the Cityscapes and CamVid benchmark datasets.
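The warping-plus-consistency idea described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, a fixed nearest-neighbor warp stands in for the learnable FG-Conv, and the MSE form of the feature consistency loss is an assumed simplification.

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a previous-frame feature map (C, H, W) toward the current frame
    using optical flow (2, H, W): flow[0] is horizontal and flow[1] vertical
    displacement in pixels. Nearest-neighbor sampling keeps the sketch short;
    DWNet's FG-Conv replaces such a fixed, parameter-free warp with a
    learnable flow-guided convolution."""
    C, H, W = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Sample each target pixel from its flow-displaced source location,
    # clamping coordinates to the image border.
    src_x = np.clip(np.round(xs - flow[0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys - flow[1]).astype(int), 0, H - 1)
    return feat[:, src_y, src_x]

def feature_consistency_loss(warped, target):
    """Mean squared error between warped previous-frame features and
    current-frame features: one plausible form of a feature consistency
    loss (the exact formulation is an assumption)."""
    return float(np.mean((warped - target) ** 2))
```

With zero flow the warp is the identity and the loss against the same features is zero; a uniform one-pixel flow shifts every feature column by one, which is the behavior such a loss penalizes when the current-frame features do not follow the motion.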
format Article
id doaj-art-80fa088562bd490896efaddb2eaac69b
institution Kabale University
issn 1076-2787
1099-0526
language English
publishDate 2021-01-01
publisher Wiley
record_format Article
series Complexity
affiliations Jiangyun Li: School of Automation & Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Yikai Zhao: School of Automation & Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Xingjian He: National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100083, China
Xinxin Zhu: National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100083, China
Jing Liu: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100083, China
title Dynamic Warping Network for Semantic Video Segmentation
url http://dx.doi.org/10.1155/2021/6680509