Indian Sign Language Recognition Using Deep Learning Techniques

  • Karthika Renuka D Department of Information Technology, PSG College of Technology, Coimbatore-641004, Tamil Nadu, India. https://orcid.org/0000-0002-6519-4673
  • Ashok Kumar L Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore-641004, Tamil Nadu, India.
Keywords: Speech Impairment, Voice-to-Sign Language, Multi-stream CNN-LSTM, HMM

Abstract

By automatically translating Indian Sign Language into English speech, a portable multimedia Indian Sign Language translation application can help deaf and speech-impaired people communicate with hearing people. It can act as a translator for those who do not understand sign language, eliminating the need for a human mediator and allowing communication to take place in the speaker's native language. Without such support, deaf and mute people are often denied regular educational opportunities, and those without formal education have a difficult time communicating with members of their own community. We provide an integrated Android application to help them fit into society and connect with others. The application includes a straightforward keyboard translator that can convert any term from Indian Sign Language to English. The proposed system is an interactive application program for mobile phones. The phone's camera photographs Indian Sign Language gestures, the phone itself performs the vision-processing tasks, and the built-in audio hardware outputs the speech, eliminating the need for extra devices and cost. Parallel processing reduces the perceived latency between a hand gesture and its translation, enabling very fast translation of finger and hand motions. The system recognizes one-handed sign representations of the digits 0 through 9. The findings show that the results are highly reproducible, consistent, and accurate.
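The abstract outlines a capture, classify, and speak pipeline for the digit signs 0-9 but gives no implementation details. Below is a minimal sketch of such a pipeline in Python, assuming a small Keras CNN over 64x64 grayscale frames, OpenCV for camera capture, and the pyttsx3 text-to-speech engine; the architecture, input size, and library choices are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of the capture -> classify -> speak pipeline described in
# the abstract. The CNN design, 64x64 grayscale input, and pyttsx3
# text-to-speech backend are assumptions; the paper does not specify them.
import cv2                     # camera capture and preprocessing
import numpy as np
import pyttsx3                 # offline text-to-speech
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10               # one-handed signs for the digits 0-9

def build_model() -> tf.keras.Model:
    """Small CNN classifier for static digit-sign images (assumed design)."""
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert a camera frame to the (1, 64, 64, 1) tensor the CNN expects."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    return gray[np.newaxis, ..., np.newaxis]

def main() -> None:
    model = build_model()      # in practice, load trained weights instead
    engine = pyttsx3.init()
    cap = cv2.VideoCapture(0)  # device camera
    ok, frame = cap.read()
    if ok:
        probs = model.predict(preprocess(frame), verbose=0)
        digit = int(np.argmax(probs))
        engine.say(f"The sign is {digit}")   # speak the recognized digit
        engine.runAndWait()
    cap.release()

if __name__ == "__main__":
    main()

In a deployed application, capture, classification, and speech synthesis would run in parallel threads, consistent with the abstract's note that parallel processing hides the translation latency behind the camera loop.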



Published
2022-05-12
How to Cite
D, K. R., & L, A. K. (2022). Indian Sign Language Recognition Using Deep Learning Techniques. International Journal of Computer Communication and Informatics, 4(1), 36-42. https://doi.org/10.34256/ijcci2214


