Environment Sound Classification using Multiple Feature Channels and Attention based Deep Convolutional Neural Network
Peer reviewed, Journal article
Accepted version
Permanent link
https://hdl.handle.net/11250/3056508
Publication date
2020
Original version
Sharma, J., Granmo, O.-C. & Goodwin, M. (2020). Environment Sound Classification using Multiple Feature Channels and Attention based Deep Convolutional Neural Network. Interspeech, 2020, 1186-1190. https://doi.org/10.21437/Interspeech.2020-1303
Abstract
In this paper, we propose a model for the Environment Sound Classification (ESC) task that takes multiple feature channels as input to a Deep Convolutional Neural Network (CNN) with an attention mechanism. The novelty of the paper lies in using multiple feature channels consisting of Mel-Frequency Cepstral Coefficients (MFCC), Gammatone Frequency Cepstral Coefficients (GFCC), the Constant Q-Transform (CQT) and Chromagram. We employ a deeper CNN (DCNN) than previous models, consisting of spatially separable convolutions that operate on the time and feature domains separately. In addition, we use attention modules that perform channel and spatial attention together, and we apply the mixup data augmentation technique to further boost performance. Our model achieves state-of-the-art performance on three benchmark environment sound classification datasets: UrbanSound8K (97.52%), ESC-10 (94.75%) and ESC-50 (87.45%).
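The mixup augmentation mentioned in the abstract blends pairs of training examples and their one-hot labels with a Beta-distributed coefficient, producing convex combinations that regularize the classifier. A minimal NumPy sketch follows; the function name, the default `alpha`, and the batch layout are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix each example in a batch with a randomly chosen partner.

    x: (batch, ...) feature array, e.g. stacked MFCC/GFCC/CQT/Chromagram channels.
    y: (batch, n_classes) one-hot label array.
    alpha: Beta-distribution parameter (illustrative default; the paper
           does not state its value in this abstract).
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # single mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix
```

Because each mixed label is a convex combination of two one-hot vectors, every row of `y_mix` still sums to one, so the usual cross-entropy loss applies unchanged.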
Description
Author's accepted manuscript