Improving deep convolutional neural networks with mixed maxout units

Bibliographic Details
Main Authors: Hui-zhen ZHAO, Fu-xian LIU, Long-yue LI, Chang LUO
Format: Article
Language: Chinese (zho)
Published: Editorial Department of Journal on Communications 2017-07-01
Series: Tongxin xuebao
Subjects:
Online Access: http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2017145/
Description
Summary: When applied in deep convolutional neural networks, maxout units fail to deliver any non-max features, so the pooling operation over a subspace composed of several linear feature mappings is insufficient. Mixed maxout (mixout) units were proposed to address this constraint. First, the exponential probabilities of the feature mappings obtained from the different linear transformations were computed. Then, the probability-weighted average of the feature mappings in the subspace was computed. Finally, the output was randomly sampled from the max feature and the mean value according to a Bernoulli distribution, which makes better use of the model-averaging ability of dropout. Simple models and network-in-network models were built to evaluate the performance of mixout units. The results show that models based on mixout units achieve better performance.
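The steps in the abstract (exponential probabilities over the linear mappings, a probability-weighted mean, and a Bernoulli choice between the max and that mean) can be sketched as follows. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the Bernoulli parameter `lam` and the per-unit sampling granularity are assumptions, and the exponential probability is taken to be a softmax over the k mappings.

```python
import numpy as np

def mixout(z, lam=0.5, rng=None):
    """Mixed maxout (mixout) over k linear feature mappings.

    z   : array of shape (k, ...), the k linear feature mappings of a subspace
    lam : assumed Bernoulli parameter for picking the max over the weighted mean
    """
    rng = np.random.default_rng() if rng is None else rng
    # exponential (softmax) probabilities over the k mappings, numerically stabilized
    e = np.exp(z - z.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)
    mean = (p * z).sum(axis=0)   # probability-weighted average of the subspace
    mx = z.max(axis=0)           # ordinary maxout output
    # Bernoulli sample per unit: output the max feature or the mean value
    pick_max = rng.random(mx.shape) < lam
    return np.where(pick_max, mx, mean)

# usage: 4 linear mappings of an 8-unit layer
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
out = mixout(z, lam=0.5, rng=rng)
```

With `lam=1.0` the unit reduces to plain maxout; with `lam=0.0` it always outputs the softmax-weighted mean, so the output is always bounded by the min and max of the k mappings.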
ISSN: 1000-436X