Prediction of abnormalities in heart beat sounds using convolutional neural networks

https://doi.org/10.53730/ijhs.v6nS4.11312

Authors

  • P. Thendral, Assistant Professor (Senior), Department of Artificial Intelligence and Data Science, Mepco Schlenk Engineering College, Sivakasi, India
  • S. Karkuzhali, Assistant Professor, Department of Computer Science and Engineering, Mepco Schlenk Engineering College, Sivakasi, India
  • V. A. Siyon, Department of Artificial Intelligence and Data Science, Mepco Schlenk Engineering College, Sivakasi, India
  • C. Jeyanth Kallis Sweeton, Department of Artificial Intelligence and Data Science, Mepco Schlenk Engineering College, Sivakasi, India

Keywords:

convolutional neural network, deep learning, medical acoustics analysis, spectrogram, artificial intelligence healthcare

Abstract

Physicians worldwide routinely use a stethoscope, listening to the heartbeat sound and its rhythm to diagnose various heart conditions. Many cardiac abnormalities are reflected in the sound of the heartbeat. In this work we have created a classification system based on a Convolutional Neural Network (CNN) that analyzes heartbeat sounds to predict such abnormalities. Each heartbeat recording is converted to a spectrogram image, and the CNN is trained on those images. To reduce computational time, pooling is applied: it reduces the number of parameters by summarizing each local block of pixels with a single representative value. The parameters of the convolution and pooling layers are varied to improve the accuracy of heartbeat-sound classification. Experiments are carried out by varying the number of convolutional layers and changing the pooling method across several CNN configurations, and the results are analyzed to find the combination best suited to sound analysis.
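The preprocessing pipeline the abstract describes — waveform to spectrogram image, then pooling to shrink the parameter count — can be sketched as below. This is a minimal illustration, not the authors' implementation: the sample rate, window length, and 2x2 pool size are assumed values, and `sound_to_spectrogram` / `pool2d` are hypothetical helper names.

```python
import numpy as np
from scipy.signal import spectrogram

def sound_to_spectrogram(signal, fs=2000, nperseg=128):
    """Convert a 1-D sound signal to a time-frequency image
    (frequency bins x time frames)."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    return sxx

def pool2d(image, size=2, method="max"):
    """Downsample by keeping one summary value per size x size block,
    which is how pooling cuts the parameters passed to later layers."""
    h, w = image.shape
    h, w = h - h % size, w - w % size  # trim so blocks tile evenly
    blocks = image[:h, :w].reshape(h // size, size, w // size, size)
    if method == "max":
        return blocks.max(axis=(1, 3))   # max pooling
    return blocks.mean(axis=(1, 3))      # average pooling

# Synthetic stand-in for a heartbeat recording: ~1 s of a 60 Hz tone.
t = np.linspace(0, 1, 2000, endpoint=False)
wave = np.sin(2 * np.pi * 60 * t)

spec = sound_to_spectrogram(wave)
pooled = pool2d(spec, size=2, method="max")
print(spec.shape, pooled.shape)  # pooling leaves ~1/4 of the values
```

Swapping `method="max"` for `"mean"` reproduces the kind of pooling-method comparison the experiments vary; in a full system the pooled spectrograms would be fed to stacked convolutional layers.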



Published

30-07-2022

How to Cite

Thendral, P., Karkuzhali, S., Siyon, V. A., & Sweeton, C. J. K. (2022). Prediction of abnormalities in heart beat sounds using convolutional neural networks. International Journal of Health Sciences, 6(S4), 9844–9855. https://doi.org/10.53730/ijhs.v6nS4.11312

Section

Peer Review Articles