Multimodal Deep Feature Fusion (MMDFF) for RGB-D Tracking
Visual tracking remains challenging due to occlusion, appearance changes, and complex motion. In this paper we propose a novel RGB-D tracker based on multimodal deep feature fusion (MMDFF). The MMDFF model consists of four deep Convolutional Neural Networks (CNNs): a motion-specific CNN, an RGB-specific CNN, a depth-specific CNN, and an RGB-Depth correlated CNN. The depth image is encoded into three channels and fed into the depth-specific CNN to extract deep depth features. An optical-flow image is computed for every frame and fed into the motion-specific CNN to learn deep motion features. Deep RGB, depth, and motion information is fused effectively at multiple layers of the MMDFF model. Finally, the fused multimodal deep features are passed to the C-COT tracker to obtain the tracking result. Experiments on two recent large-scale RGB-D datasets demonstrate that the proposed method achieves better performance than other state-of-the-art RGB-D trackers.
Main Authors: Ming-xin Jiang, Chao Deng, Ming-min Zhang, Jing-song Shan, Haiyan Zhang
Format: Article
Language: English
Published: Wiley, 2018-01-01
Series: Complexity
ISSN: 1076-2787, 1099-0526
Online Access: http://dx.doi.org/10.1155/2018/5676095
Author Affiliations:
Ming-xin Jiang: Jiangsu Laboratory of Lake Environment Remote Sensing Technologies, Huaiyin Institute of Technology, Huaian, 223003, China
Chao Deng: School of Physics & Electronic Information Engineering, Henan Polytechnic University, Jiaozuo, 454000, China
Ming-min Zhang: School of Computer Science & Technology, Zhejiang University, 310058, China
Jing-song Shan: Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
Haiyan Zhang: Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
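The abstract describes a concrete input pipeline: the depth map is encoded into three channels before entering the depth-specific CNN, and an optical-flow image is computed for every frame before entering the motion-specific CNN. Below is a minimal Python sketch of that preparation step. The record does not state the authors' exact depth encoding or flow algorithm, so the JET colorization and Farneback flow used here are stand-in assumptions, not the paper's method.

```python
# Sketch of the input preparation the abstract describes: encoding a
# single-channel depth map into three channels, and rendering dense
# optical flow between consecutive frames as a 3-channel image.
# JET colorization and Farneback flow are assumptions; the record does
# not specify the paper's actual choices.
import cv2
import numpy as np

def encode_depth_3ch(depth: np.ndarray) -> np.ndarray:
    """Normalize a raw depth map and colorize it into a 3-channel image."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-6)   # scale to [0, 1]
    d8 = (d * 255).astype(np.uint8)
    return cv2.applyColorMap(d8, cv2.COLORMAP_JET)      # HxWx3 uint8

def flow_image(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    """Dense Farneback flow rendered as an HSV-coded 3-channel image."""
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_g = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_g, curr_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*mag.shape, 3), dtype=np.uint8)
    hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)  # hue = direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```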
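The fusion model itself is described in the abstract only at the level of its four streams and the facts that fusion happens at multiple layers and the result feeds a C-COT tracker. A minimal PyTorch sketch of that topology follows; all layer sizes, the design of the RGB-Depth correlated branch, and the concatenate-plus-1x1-convolution fusion operator are placeholder assumptions, not the paper's architecture.

```python
# Minimal PyTorch sketch of a four-stream fusion model in the spirit of
# the abstract: RGB-specific, depth-specific, and motion-specific CNNs,
# plus an RGB-Depth correlated branch, fused at multiple layers. Every
# layer size and the fusion operator here are placeholder assumptions.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(inplace=True), nn.MaxPool2d(2))

class Stream(nn.Module):
    """A small two-stage CNN exposing mid- and high-level feature maps."""
    def __init__(self):
        super().__init__()
        self.stage1 = conv_block(3, 32)    # mid-level features
        self.stage2 = conv_block(32, 64)   # high-level features
    def forward(self, x):
        mid = self.stage1(x)
        return mid, self.stage2(mid)

class MMDFFSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb, self.depth, self.motion = Stream(), Stream(), Stream()
        # "RGB-Depth correlated" branch: here simply a conv over the
        # concatenated RGB and depth mid-level maps (an assumption).
        self.rgbd_corr = conv_block(64, 64)
        self.fuse_mid = nn.Conv2d(32 * 3, 64, 1)    # fusion at mid level
        self.fuse_high = nn.Conv2d(64 * 4, 128, 1)  # fusion at high level
    def forward(self, rgb, depth3, flow_img):
        r_mid, r_hi = self.rgb(rgb)
        d_mid, d_hi = self.depth(depth3)
        m_mid, m_hi = self.motion(flow_img)
        corr = self.rgbd_corr(torch.cat([r_mid, d_mid], dim=1))
        mid = self.fuse_mid(torch.cat([r_mid, d_mid, m_mid], dim=1))
        high = self.fuse_high(torch.cat([r_hi, d_hi, m_hi, corr], dim=1))
        # In the paper these fused multimodal feature maps would serve
        # as the feature representation handed to the C-COT tracker.
        return mid, high
```

In this sketch the multi-layer fusion is a simple channel concatenation followed by a 1x1 convolution at each depth; whatever fusion the authors actually use, the resulting feature maps play the role of the hand-crafted or single-modality deep features that C-COT would otherwise consume.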