Lightweight deep convolutional neural network for background sound classification in speech signals
Dayal, Aveen; Yeduri, Sreenivasa Reddy; Koduru, Balu Harshavardan; Jaiswal, Rahul Kumar; Soumya, J.; Srinivas, M. B.; Pandey, Om Jee; Cenkeramaddi, Linga Reddy
Journal article, Peer reviewed
Published version
Date
2022
Original version
Dayal, A., Yeduri, S. R., Koduru, B. H., Jaiswal, R. K., Soumya, J., Srinivas, M. B., Pandey, O. J. & Cenkeramaddi, L. R. (2022). Lightweight deep convolutional neural network for background sound classification in speech signals. Journal of the Acoustical Society of America, 151(4), 2773-2786. https://doi.org/10.1121/10.0010257
Abstract
Recognizing background information in human speech signals is extremely useful in a wide range of practical applications, and many articles on background sound classification have been published. The task has not, however, been addressed for background sounds embedded in real-world human speech signals. This work therefore proposes a lightweight deep convolutional neural network (CNN) operating on spectrograms for efficient background sound classification in practical human speech signals. The proposed model classifies 11 different background sounds (airplane, airport, babble, car, drone, exhibition, helicopter, restaurant, station, street, and train) embedded in human speech signals. The proposed deep CNN consists of four convolution layers, four max-pooling layers, and one fully connected layer. The model is tested on human speech signals with varying signal-to-noise ratios (SNRs). The proposed deep CNN with spectrogram inputs achieves an overall background sound classification accuracy of 95.2% on human speech signals spanning a wide range of SNRs. The proposed model also outperforms the benchmark models in both accuracy and inference time when evaluated on edge computing devices.
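The abstract describes a spectrogram front end feeding a four-stage conv/pool CNN with one fully connected layer over 11 classes. As a minimal sketch of that pipeline's shape arithmetic, the snippet below computes a magnitude spectrogram of a synthetic speech-plus-background mixture and tracks how four 2x2 max-pool stages shrink it before the final classifier. The FFT size, hop length, sampling rate, and "same"-padded convolutions are all assumptions for illustration; the paper's actual hyperparameters are not given in the abstract.

```python
import numpy as np

def magnitude_spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed STFT (assumed front end)."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps the non-negative frequencies: n_fft // 2 + 1 bins.
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_frames)

# One second of a synthetic "speech + background" mixture at 16 kHz
# (stand-ins only; the paper uses real speech with embedded backgrounds).
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)       # a single speech-like harmonic
background = 0.3 * np.random.randn(fs)     # broadband background noise
spec = magnitude_spectrogram(speech + background)

# Shape bookkeeping for the described CNN: each of the four stages is a
# convolution (assumed "same" padding) followed by a 2x2 max-pool that
# halves both spatial dimensions; a fully connected layer then maps the
# flattened features to the 11 background-sound classes.
h, w = spec.shape
for _ in range(4):
    h, w = h // 2, w // 2
num_classes = 11

print(spec.shape)   # spectrogram fed to the CNN
print((h, w))       # feature-map size after four conv/pool stages
print(num_classes)  # output dimension of the fully connected layer
```

With these assumed settings the 16,000-sample input yields a 129 x 124 spectrogram, which the four pooling stages reduce to an 8 x 7 feature map; the small resulting feature volume is what keeps such a model lightweight enough for edge devices.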