Prediction of abnormalities in heart beat sounds using convolutional neural networks
Keywords:
convolutional neural network, deep learning, medical acoustics analysis, spectrogram, artificial intelligence healthcare

Abstract
Worldwide, physicians commonly use a physical stethoscope, listening to the heartbeat sound and its rhythm to diagnose various heart conditions, since many cardiac abnormalities are reflected in the sound of the heartbeat. In this work we build a classification system based on a convolutional neural network (CNN) that analyses heartbeat sounds to predict abnormalities. Each heartbeat recording is converted to a spectrogram image, and the CNN is trained on those images. To reduce computational time, pooling is applied: it reduces the number of parameters by summarising each local region of pixels with a single value. The parameters of the convolution and pooling layers are then varied to improve the accuracy of heartbeat-sound classification. Experiments vary the number of convolutional layers and the pooling method across several CNN configurations, and the results are analysed to find the combination best suited to sound analysis.
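The preprocessing steps the abstract describes can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal NumPy example, with assumed frame length, hop size, and pooling window, showing how a heartbeat waveform becomes a spectrogram and how max versus average pooling shrinks that image before it is fed to a CNN:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # One row per frequency bin, one column per time frame.
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def pool2x2(image, mode="max"):
    """Downsample by summarising each non-overlapping 2x2 block of pixels."""
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    blocks = image[:h, :w].reshape(h // 2, 2, w // 2, 2)
    reduce_fn = np.max if mode == "max" else np.mean
    return reduce_fn(blocks, axis=(1, 3))

# Synthetic 1-second "heartbeat" at a 2 kHz sampling rate:
# two short low-frequency thumps, standing in for a real recording.
fs = 2000
t = np.arange(fs) / fs
beat = np.sin(2 * np.pi * 40 * t) * (np.exp(-((t - 0.2) ** 2) / 1e-3)
                                     + np.exp(-((t - 0.6) ** 2) / 1e-3))

spec = spectrogram(beat)
print(spec.shape)            # frequency bins x time frames
print(pool2x2(spec).shape)   # each dimension roughly halved
```

Swapping `mode` between `"max"` and `"mean"` mirrors the pooling-method comparison in the experiments: max pooling keeps the strongest response in each region, while average pooling smooths it, and either choice quarters the number of values the next layer must process.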
Copyright (c) 2022 International journal of health sciences

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.








