Continuous robust sound event classification using time-frequency features and deep learning.

The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations…


Bibliographic Details
Main Authors: Ian McLoughlin, Haomin Zhang, Zhipeng Xie, Yan Song, Wei Xiao, Huy Phan
Format: Article
Language:English
Published: Public Library of Science (PLoS) 2017-01-01
Series:PLoS ONE
Online Access:https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0182309&type=printable
collection DOAJ
description The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human-computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for the classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds, based upon a common existing method for evaluating isolated sound classification. It then adapts several high-performing isolated sound classifiers to operate on continuous sound data by incorporating an energy-based event detection front end, and benchmarks them. Results are reported for each tested system on the new task, providing the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
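The energy-based event detection front end mentioned in the abstract can be illustrated with a minimal sketch: compute short-time energy per frame, mark frames above a threshold relative to the loudest frame, and merge runs of active frames into event segments. The function name, frame sizes, and threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def energy_segments(signal, frame_len=400, hop=160, threshold_db=-30.0):
    """Illustrative energy-based segmenter (not the paper's method).

    Marks frames whose short-time energy exceeds a threshold relative
    to the loudest frame, then merges runs of active frames.
    Returns a list of (start_frame, end_frame) pairs, end exclusive.
    """
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    energy = np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    # Frame energy in dB relative to the loudest frame.
    db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    active = db > threshold_db
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

# A quiet noise floor with one loud tone burst yields a single segment.
rng = np.random.default_rng(0)
x = 0.001 * rng.standard_normal(16000)
x[6000:9000] += np.sin(2 * np.pi * 440 * np.arange(3000) / 16000)
print(energy_segments(x))
```

At 16 kHz sampling, the assumed 400-sample frames with a 160-sample hop correspond to 25 ms windows with a 10 ms shift; in practice the detected segments would then be passed to a classifier, as the paper does with its benchmarked systems.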
id doaj-art-87142f3ebc184ad8a5e03141eb9077d6
issn 1932-6203
spelling PLoS ONE 12(9): e0182309, 2017-01-01. doi:10.1371/journal.pone.0182309