Deep transfer learning based on sequenced edge grid image technique for sign language recognition
Keywords:
sequenced edge grid image, sign language recognition, convolutional neural network, transfer learning
Abstract
Sign language is a visual-gestural language used by hearing-impaired people, who shape gestures to convey meaning. The main problem with sign language communication is that ordinary people do not understand it, which makes sign language recognition a challenging machine learning problem. In this paper, the researchers focus on vision-based methods and optimize the data preprocessing applied to existing sign language resources. The researchers propose an innovative video-processing technique called Sequenced Edge Grid Images (SEGI) for sign language recognition, which interprets hand gestures, body movements, and facial expressions. The researchers collected sign language data from the internet, including Thai sign language used in everyday life. The proposed technique was implemented with a convolutional neural network (CNN). The experiments showed that SEGI with a CNN increases the test accuracy rate by approximately 11% compared with static hand gesture images. Finally, the researchers identified a CNN structure suited to the dataset and test data by transferring a pre-trained CNN. Fine-tuning with the SEGI technique reached 99.8% accuracy, the highest among all the methods. The results show that combining the data-preprocessing technique for dataset generation with deep transfer learning is an effective way to improve the accuracy of sign language recognition.
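To make the SEGI idea concrete, the following is a minimal sketch of the preprocessing step as the abstract describes it: sample frames from a sign-language video, reduce each frame to an edge map, and tile the edge maps into a single grid image that a CNN can consume. The Canny operator, the frame count, the grid geometry, and the cell size are illustrative assumptions, not the authors' published settings.

```python
# A hedged sketch of SEGI-style preprocessing (assumed details marked below).
import cv2
import numpy as np

def sequenced_edge_grid_image(video_path, rows=3, cols=3, cell=112):
    """Build one grid image from rows * cols evenly spaced video frames."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    n = rows * cols
    # Pick n frame indices spread evenly across the clip (assumed sampling).
    indices = np.linspace(0, max(total - 1, 0), n).astype(int)
    grid = np.zeros((rows * cell, cols * cell), dtype=np.uint8)
    for k, idx in enumerate(indices):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Edge map of the signing posture; Canny thresholds are assumptions.
        edges = cv2.Canny(gray, 100, 200)
        edges = cv2.resize(edges, (cell, cell))
        r, c = divmod(k, cols)
        grid[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = edges
    cap.release()
    return grid  # 2-D uint8 image; replicate channels before feeding a CNN

# Example usage with a hypothetical clip:
# cv2.imwrite("sample_segi.png", sequenced_edge_grid_image("sign.mp4"))
```

Tiling the sequence into one image lets an ordinary 2-D CNN see temporal order as spatial layout, which is one plausible reading of why SEGI outperforms single static hand-gesture frames.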
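Likewise, a hedged sketch of the deep transfer learning step: start from an ImageNet pre-trained CNN and retrain only a new classifier head on SEGI inputs. The choice of ResNet-18, the number of sign classes, and the optimizer settings are assumptions for illustration; the abstract states only that a pre-trained CNN was transferred and fine-tuned.

```python
# A minimal fine-tuning sketch (PyTorch/torchvision; details are assumptions).
import torch
import torch.nn as nn
from torchvision import models

NUM_SIGN_CLASSES = 25  # hypothetical number of Thai sign classes

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor; transfer learning reuses its
# generic visual features for the edge-grid inputs.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new sign-language head.
model.fc = nn.Linear(model.fc.in_features, NUM_SIGN_CLASSES)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 3-channel SEGI tensors.
x = torch.randn(8, 3, 224, 224)            # batch of grid images
y = torch.randint(0, NUM_SIGN_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```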