Sparse Deep Nonnegative Matrix Factorization
Nonnegative Matrix Factorization (NMF) is a powerful technique for dimension reduction and pattern recognition through single-layer data representation learning. However, deep learning networks, with their carefully designed hierarchical structure, can combine hidden features to form more representative features for pattern recognition. In this paper, we propose sparse deep NMF models to analyze complex data for more accurate classification and better feature interpretation. These models are designed to learn localized features, or to generate more discriminative representations for samples in distinct classes, by imposing an L1-norm penalty on the columns of certain factors. By extending a one-layer model into a multilayer model with sparsity, we provide a hierarchical way to analyze big data and, thanks to nonnegativity, to extract interpretable hidden features. We adopt Nesterov's accelerated gradient algorithm to speed up the computation and analyze the computational complexity of our frameworks to demonstrate their efficiency. To better handle linearly inseparable data, we also incorporate popular nonlinear functions into these frameworks and explore their performance. We applied our models to two benchmark image datasets; the results show that our models achieve competitive or better classification performance and produce more intuitive interpretations than typical NMF and competing multilayer models.
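For a concrete picture of the approach sketched in the abstract, the snippet below shows one sparse NMF layer, X ≈ WH with W, H ≥ 0 and an L1 penalty encouraging sparse columns, optimized with Nesterov-accelerated projected gradient steps; a deep model repeats the factorization on H (e.g., H ≈ W2H2). This is a minimal NumPy illustration of the general technique rather than the authors' implementation: the function name, the fixed step size, and the choice of penalizing H are assumptions made for the example.

```python
import numpy as np

def sparse_nmf_layer(X, r, lam=0.1, lr=1e-3, n_iter=200, seed=0):
    """Hypothetical one-layer sparse NMF sketch: X (m x n) ~= W (m x r) @ H (r x n),
    minimizing 0.5*||X - W H||_F^2 + lam*||H||_1 subject to W, H >= 0,
    using Nesterov-accelerated projected gradient updates (illustrative fixed step size)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    Wy, Hy = W.copy(), H.copy()          # Nesterov "lookahead" points
    t = 1.0
    for _ in range(n_iter):
        R = Wy @ Hy - X                  # residual at the lookahead points
        gW = R @ Hy.T                    # gradient of the Frobenius term w.r.t. W
        gH = Wy.T @ R + lam              # gradient w.r.t. H, plus the L1 subgradient
        W_new = np.maximum(Wy - lr * gW, 0.0)   # projected gradient step (W >= 0)
        H_new = np.maximum(Hy - lr * gH, 0.0)   # projected gradient step (H >= 0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_new         # Nesterov momentum weight
        Wy = W_new + beta * (W_new - W)
        Hy = H_new + beta * (H_new - H)
        W, H, t = W_new, H_new, t_new
    return W, H

# Usage sketch: a two-layer "deep" factorization reuses the same routine on H.
X = np.abs(np.random.default_rng(1).random((100, 60)))
W1, H1 = sparse_nmf_layer(X, r=20)
W2, H2 = sparse_nmf_layer(H1, r=10)      # X ~= W1 @ W2 @ H2
```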
Main Authors: | Zhenxing Guo, Shihua Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | Tsinghua University Press, 2020-03-01 |
Series: | Big Data Mining and Analytics |
Subjects: | Sparse nonnegative matrix factorization (NMF); Deep learning; Nesterov's accelerated gradient algorithm |
Online Access: | https://www.sciopen.com/article/10.26599/BDMA.2019.9020020 |
_version_ | 1832573610380754944 |
---|---|
author | Zhenxing Guo Shihua Zhang |
author_facet | Zhenxing Guo Shihua Zhang |
author_sort | Zhenxing Guo |
collection | DOAJ |
description | Nonnegative Matrix Factorization (NMF) is a powerful technique for dimension reduction and pattern recognition through single-layer data representation learning. However, deep learning networks, with their carefully designed hierarchical structure, can combine hidden features to form more representative features for pattern recognition. In this paper, we propose sparse deep NMF models to analyze complex data for more accurate classification and better feature interpretation. These models are designed to learn localized features, or to generate more discriminative representations for samples in distinct classes, by imposing an L1-norm penalty on the columns of certain factors. By extending a one-layer model into a multilayer model with sparsity, we provide a hierarchical way to analyze big data and, thanks to nonnegativity, to extract interpretable hidden features. We adopt Nesterov's accelerated gradient algorithm to speed up the computation and analyze the computational complexity of our frameworks to demonstrate their efficiency. To better handle linearly inseparable data, we also incorporate popular nonlinear functions into these frameworks and explore their performance. We applied our models to two benchmark image datasets; the results show that our models achieve competitive or better classification performance and produce more intuitive interpretations than typical NMF and competing multilayer models. |
format | Article |
id | doaj-art-335bdfb79472435cae5d991ef664d710 |
institution | Kabale University |
issn | 2096-0654 |
language | English |
publishDate | 2020-03-01 |
publisher | Tsinghua University Press |
record_format | Article |
series | Big Data Mining and Analytics |
spelling | doaj-art-335bdfb79472435cae5d991ef664d710 (2025-02-02T03:45:08Z). Zhenxing Guo, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (CAS), Beijing 100190, China; Shihua Zhang, NCMIS, CEMS, RCSDS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China, and also with the School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China. "Sparse Deep Nonnegative Matrix Factorization", Big Data Mining and Analytics, vol. 3, no. 1, pp. 13-28, 2020-03-01, Tsinghua University Press, ISSN 2096-0654, doi: 10.26599/BDMA.2019.9020020. Online access: https://www.sciopen.com/article/10.26599/BDMA.2019.9020020. Keywords: sparse nonnegative matrix factorization (NMF); deep learning; Nesterov's accelerated gradient algorithm. |
spellingShingle | Zhenxing Guo Shihua Zhang Sparse Deep Nonnegative Matrix Factorization Big Data Mining and Analytics sparse nonnegative matrix factorization (nmf) deep learning nesterov’s accelerated gradient algorithm |
title | Sparse Deep Nonnegative Matrix Factorization |
title_full | Sparse Deep Nonnegative Matrix Factorization |
title_fullStr | Sparse Deep Nonnegative Matrix Factorization |
title_full_unstemmed | Sparse Deep Nonnegative Matrix Factorization |
title_short | Sparse Deep Nonnegative Matrix Factorization |
title_sort | sparse deep nonnegative matrix factorization |
topic | sparse nonnegative matrix factorization (nmf) deep learning nesterov’s accelerated gradient algorithm |
url | https://www.sciopen.com/article/10.26599/BDMA.2019.9020020 |
work_keys_str_mv | AT zhenxingguo sparsedeepnonnegativematrixfactorization AT shihuazhang sparsedeepnonnegativematrixfactorization |