Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent
Sparse representation of signals over an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. Since some applications, for example multispectral data analysis, require both the signals and the dictionary to be nonnegative, conventional dictionary learning methods with only a simple nonnegativity constraint imposed may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such a case. This is accomplished by posing the sparse representation of nonnegative signals as a problem of nonnegative matrix factorization (NMF) with a sparsity constraint. By employing the coordinate descent strategy for optimization and extending it to the multivariable case so that coordinates can be processed in parallel, we develop the parallel coordinate descent dictionary learning (PCDDL) algorithm, which alternates between two optimization problems: learning the dictionary and estimating the coefficients that represent the signals. Numerical experiments demonstrate that the proposed algorithm outperforms the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other algorithms used for comparison, while its computational cost is remarkably lower than that of the compared algorithms.
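To make the idea described in the abstract more concrete, the sketch below shows one common way to set up sparse NMF dictionary learning with coordinate descent in NumPy: each atom's coefficient row is updated in closed form for all signals simultaneously, and the dictionary columns are updated by the same coordinate-wise scheme. This is only an illustrative reading of the abstract, not the authors' actual PCDDL algorithm; the function name `pcdd_nmf_sketch`, the penalty weight `lam`, the fixed iteration count, and the unit-norm projection of the atoms are all assumptions made for the example.

```python
import numpy as np

def pcdd_nmf_sketch(Y, k, lam=0.1, n_iter=100, eps=1e-9, seed=0):
    """Sparse NMF by column-wise coordinate descent (illustrative only).

    Y   : (m, n) nonnegative data matrix, one signal per column
    k   : number of dictionary atoms (overcomplete when k > m)
    lam : weight of the L1 sparsity penalty on the coefficients
    Returns (W, H) with Y ~= W @ H, W >= 0, H >= 0.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.random((m, k))
    W /= np.linalg.norm(W, axis=0, keepdims=True)      # unit-norm atoms
    H = rng.random((k, n))

    for _ in range(n_iter):
        # Coefficient update: one atom (coordinate) at a time, but the
        # whole row of H, i.e. all n signals, is updated in parallel.
        G = W.T @ W                                     # (k, k) Gram matrix
        WtY = W.T @ Y                                   # (k, n)
        for j in range(k):
            # Exact nonnegative, L1-penalized minimizer of row j of H
            # with the other rows held fixed.
            r = WtY[j] - G[j] @ H + G[j, j] * H[j]
            H[j] = np.maximum(0.0, (r - lam) / (G[j, j] + eps))

        # Dictionary update: the same coordinate-wise scheme on the atoms.
        HHt = H @ H.T                                   # (k, k)
        YHt = Y @ H.T                                   # (m, k)
        for j in range(k):
            w = W[:, j] + (YHt[:, j] - W @ HHt[:, j]) / (HHt[j, j] + eps)
            w = np.maximum(0.0, w)
            W[:, j] = w / (np.linalg.norm(w) + eps)     # keep atoms unit-norm

    return W, H

# Example: learn a 50-atom nonnegative dictionary for 20-dimensional signals.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Y = rng.random((20, 500))
    W, H = pcdd_nmf_sketch(Y, k=50, lam=0.05, n_iter=50)
    print(np.linalg.norm(Y - W @ H) / np.linalg.norm(Y))
```

Setting `k` larger than the signal dimension yields an overcomplete dictionary, and `lam` trades reconstruction error against the sparsity of the learned coefficients; the exact PCDDL update rules and stopping criteria are those given in the article.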
| Main Authors: | Zunyi Tang, Shuxue Ding, Zhenni Li, Linlin Jiang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2013-01-01 |
| Series: | Abstract and Applied Analysis |
| Online Access: | http://dx.doi.org/10.1155/2013/259863 |
| author | Zunyi Tang, Shuxue Ding, Zhenni Li, Linlin Jiang |
|---|---|
| collection | DOAJ |
| description | Sparse representation of signals over an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. Since some applications, for example multispectral data analysis, require both the signals and the dictionary to be nonnegative, conventional dictionary learning methods with only a simple nonnegativity constraint imposed may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such a case. This is accomplished by posing the sparse representation of nonnegative signals as a problem of nonnegative matrix factorization (NMF) with a sparsity constraint. By employing the coordinate descent strategy for optimization and extending it to the multivariable case so that coordinates can be processed in parallel, we develop the parallel coordinate descent dictionary learning (PCDDL) algorithm, which alternates between two optimization problems: learning the dictionary and estimating the coefficients that represent the signals. Numerical experiments demonstrate that the proposed algorithm outperforms the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other algorithms used for comparison, while its computational cost is remarkably lower than that of the compared algorithms. |
| format | Article |
| id | doaj-art-1dec5ddff6b94f6b8e4c847a63610b64 |
| institution | OA Journals |
| issn | 1085-3375, 1687-0409 |
| language | English |
| publishDate | 2013-01-01 |
| publisher | Wiley |
| record_format | Article |
| series | Abstract and Applied Analysis |
| affiliations | Zunyi Tang: Graduate School of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu City, Fukushima 965-8580, Japan; Shuxue Ding: School of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu City, Fukushima 965-8580, Japan; Zhenni Li: Graduate School of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu City, Fukushima 965-8580, Japan; Linlin Jiang: Department for Student Affairs, University of Aizu, Aizu-Wakamatsu City, Fukushima 965-8580, Japan |
| title | Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent |
| url | http://dx.doi.org/10.1155/2013/259863 |