Artificial Intelligence with Optimization Algorithm Based Robust Sign Gesture Recognition for Visually Impaired People

Authors

  • Ameer N. Onaizah, Beijing Institute of Technology, School of Automation, Zhongguancun, Beijing, 100811, China

DOI:

https://doi.org/10.63503/j.ijcma.2024.18

Keywords:

Sign language recognition, Computer vision, Metaheuristics, Hyperparameter tuning, Deep belief network

Abstract

Sign language is an effective means of communication with visually impaired individuals, as it can be used anywhere. Gestures therefore play a significant role in communication for deaf and mute people. They are a form of non-verbal data exchange that has garnered significant attention in the development of Human-Computer Interaction (HCI) models, as they allow users to express themselves intuitively and naturally in different contexts. Sign gesture detection is a core requirement of many HCI applications, e.g., gaming, virtual reality, and monitoring systems. Computer vision (CV) and artificial intelligence (AI) are currently active areas of research and development, driven by advances in assistive technology. In this manuscript, we design and develop an Enhanced Sign Gesture Recognition Model for Disabled People Using Advanced Optimization Models (ESGRM-DPAOM). The proposed ESGRM-DPAOM system aims to improve sign gesture recognition solutions for visually impaired individuals. To accomplish this, the ESGRM-DPAOM model first applies image preprocessing using Wiener filtering (WF) to remove noise from the input image data. For feature extraction, the SqueezeNet model is employed, with its parameters tuned by pigeon-inspired optimization (PIO). The ESGRM-DPAOM model then performs classification with an Elman neural network (ENN). Finally, the golden jackal optimizer (GJO) algorithm optimally adjusts the hyperparameters of the ENN model, resulting in higher classification performance. Extensive experimentation was carried out to validate the performance of the ESGRM-DPAOM approach. The simulation results indicate that the ESGRM-DPAOM system outperforms other existing methods.
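
The abstract describes a four-stage pipeline: Wiener-filter denoising, SqueezeNet feature extraction tuned by PIO, ENN classification, and GJO hyperparameter tuning. The following Python sketch illustrates only the deterministic stages of such a pipeline (denoising, feature extraction, and an Elman-style classifier); the PIO and GJO metaheuristic tuning loops are omitted, and all class names, the 36-class output size, and the use of torchvision's SqueezeNet backbone are assumptions for illustration, not the authors' implementation.

# Minimal sketch of the pipeline stages described in the abstract:
# Wiener filtering -> SqueezeNet features -> Elman-network classifier.
# The PIO and GJO metaheuristic tuning loops are NOT shown; names and
# hyperparameters here are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import wiener
from torchvision import models


def preprocess(gray_image: np.ndarray) -> np.ndarray:
    """Wiener filtering (WF) to suppress noise in the input frame."""
    return wiener(gray_image.astype(np.float64), mysize=5)


class SqueezeNetFeatures(nn.Module):
    """Frozen SqueezeNet backbone used as a 512-d feature extractor."""

    def __init__(self):
        super().__init__()
        # weights=None keeps the sketch offline; pretrained weights would
        # normally be loaded (torchvision >= 0.13 'weights' API assumed).
        backbone = models.squeezenet1_1(weights=None)
        self.features = backbone.features
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) -> (batch, 512)
        fmap = self.features(x)
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1)


class ElmanClassifier(nn.Module):
    """Elman neural network head (torch.nn.RNN is an Elman RNN)."""

    def __init__(self, in_dim=512, hidden=128, n_classes=36):
        # n_classes=36 assumes the digits + letters of the ASL dataset [23].
        super().__init__()
        self.rnn = nn.RNN(in_dim, hidden, nonlinearity="tanh", batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Treat each feature vector as a length-1 sequence.
        out, _ = self.rnn(feats.unsqueeze(1))
        return self.head(out[:, -1, :])


if __name__ == "__main__":
    # Toy end-to-end forward pass on a random grayscale "gesture" frame.
    frame = np.random.rand(224, 224)
    denoised = preprocess(frame)
    rgb = np.stack([denoised] * 3)                       # grayscale -> 3 channels
    x = torch.tensor(rgb, dtype=torch.float32).unsqueeze(0)
    feats = SqueezeNetFeatures().eval()(x)
    logits = ElmanClassifier().eval()(feats)
    print(logits.shape)                                  # torch.Size([1, 36])

In a full system, PIO would search over the SqueezeNet parameter settings and GJO over the ENN hyperparameters (e.g., hidden size, learning rate), each guided by validation accuracy as the fitness score.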

References

[1] Sharma, S. and Singh, S., 2021. Vision-based hand gesture recognition using deep learning for the interpretation of sign language. Expert Systems with Applications, 182, p.115657.

[2] Mujahid, A., Awan, M.J., Yasin, A., Mohammed, M.A., Damaševičius, R., Maskeliūnas, R. and Abdulkareem, K.H., 2021. Real-time hand gesture recognition based on deep learning YOLOv3 model. Applied Sciences, 11(9), p.4164.

[3] Zheng, Z., Wang, Q., Yang, D., Wang, Q., Huang, W. and Xu, Y., 2022. L-sign: Large-vocabulary sign gestures recognition system. IEEE Transactions on Human-Machine Systems, 52(2), pp.290-301.

[4] Juneja, S., Juneja, A., Dhiman, G., Jain, S., Dhankhar, A. and Kautish, S., 2021. Computer vision‐enabled character recognition of hand gestures for patients with hearing and speaking disability. Mobile Information Systems, 2021(1), p.4912486.

[5] Mohamed, N., Mustafa, M.B. and Jomhari, N., 2021. A review of the hand gesture recognition system: Current progress and future directions. IEEE access, 9, pp.157422-157436.

[6] Gangrade, J. and Bharti, J., 2023. Vision-based hand gesture recognition for Indian sign language using convolution neural network. IETE Journal of Research, 69(2), pp.723-732.

[7] Sadeddine, K., Chelali, F.Z., Djeradi, R., Djeradi, A. and Benabderrahmane, S., 2021. Recognition of user-dependent and independent static hand gestures: Application to sign language. Journal of Visual Communication and Image Representation, 79, p.103193.

[8] Tasmere, D., Ahmed, B. and Das, S.R., 2021. Real time hand gesture recognition in depth image using CNN. International Journal of Computer Applications, 174(16), pp.28-32.

[9] Padmanandam, K., Rajesh, M.V., Upadhyaya, A.N., Chandrashekar, B. and Sah, S., 2022. Artificial intelligence biosensing system on hand gesture recognition for the hearing impaired. International Journal of Operations Research and Information Systems (IJORIS), 13(2), pp.1-13.

[10] Allehaibi, K.H., 2025. Artificial Intelligence based Automated Sign Gesture Recognition Solutions for Visually Challenged People. Journal of Intelligent Systems and Internet of Things, (2), pp.127-27.

[11] Chang, V., Eniola, R.O., Golightly, L. and Xu, Q.A., 2023. An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment. SN Computer Science, 4(5), p.441.

[12] Alashhab, S., Gallego, A.J. and Lozano, M.Á., 2022. Efficient gesture recognition for the assistance of visually impaired people using multi-head neural networks. Engineering Applications of Artificial Intelligence, 114, p.105188.

[13] Lindner, T., Wyrwał, D. and Milecki, A., 2023. An autonomous humanoid robot designed to assist a human with a gesture recognition system. Electronics, 12(12), p.2652.

[14] Gupta, R., Oza, D. and Chaudhari, S., 2022. Real-Time Hand Tracking and Gesture Recognizing Communication System for Physically Disabled People. In Inventive Communication and Computational Technologies: Proceedings of ICICCT 2021 (pp. 731-746). Springer Singapore.

[15] Amangeldy, N., Milosz, M., Kudubayeva, S., Kassymova, A., Kalakova, G. and Zhetkenbay, L., 2023. A Real-Time Dynamic Gesture Variability Recognition Method Based on Convolutional Neural Networks. Applied Sciences, 13(19), p.10799.

[16] de Oliveira, G.A., Oliveira, O.D.F., de Abreu, S., de Bettio, R.W. and Freire, A.P., 2022. Opportunities and accessibility challenges for open-source general-purpose home automation mobile applications for visually disabled users. Multimedia Tools and Applications, 81(8), pp.10695-10722.

[17] Sruthi, C.J. and Lijiya, A., 2023. Double-handed dynamic gesture recognition using contour-based hand tracking and maximum mean probability ensembling (MMPE) for Indian Sign language. The Visual Computer, 39(12), pp.6183-6203.

[18] Göreke, V., 2023. A novel method based on Wiener filter for denoising Poisson noise from medical X-Ray images. Biomedical Signal Processing and Control, 79, p.104031.

[19] Safie, S.I. and Ramli, R., 2023. Footprint biometric authentication using SqueezeNet. Indonesian Journal of Electrical Engineering and Computer Science, 31(2), pp.893-901.

[20] Yu, Y., Wang, Y., Xu, D., Dou, Z. and Yang, M., 2023. Research on charging and discharging strategy of electric vehicles in park micro-grid based on pigeon-inspired optimization algorithm. International Journal of Innovative Computing, Information and Control, 19(3), pp.721-735.

[21] Yang, M. and Liu, Y., 2023. Research on the potential for China to achieve carbon neutrality: A hybrid prediction model integrated with elman neural network and sparrow search algorithm. Journal of Environmental Management, 329, p.117081.

[22] Zhang, J., Zhang, G., Kong, M. and Zhang, T., 2023. Adaptive infinite impulse response system identification using an enhanced golden jackal optimization. The Journal of Supercomputing, 79(10), pp.10823-10848.

[23] https://www.kaggle.com/datasets/ayuraj/asl-dataset

[24] Wadhawan, A. and Kumar, P., 2020. Deep learning-based sign language recognition system for static signs. Neural computing and applications, 32(12), pp.7957-7968.

[25] Mannan, A., Abbasi, A., Javed, A.R., Ahsan, A., Gadekallu, T.R. and Xin, Q., 2022. Hypertuned deep convolutional neural network for sign language recognition. Computational Intelligence and Neuroscience, 2022.

Published

2024-07-31

How to Cite

Ameer N. Onaizah. (2024). Artificial Intelligence with Optimization Algorithm Based Robust Sign Gesture Recognition for Visually Impaired People. International Journal on Computational Modelling Applications, 1(1), 45–62. https://doi.org/10.63503/j.ijcma.2024.18

Section

Research Articles