International Islamic University Chittagong, Bangladesh
* Corresponding author

Abstract

Our research presents a system designed to empower individuals who are deaf or vocally impaired by enabling seamless communication through sign language recognition. The system integrates sensor technology, data processing, and machine learning to translate hand and finger movements into understandable gestures. One accelerometer and five flex sensors are strategically placed on the hand and fingers to capture precise movements, which are then transmitted to a receiver unit. The data is processed by a MATLAB-based application that employs several machine learning models, including Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), and Ensemble methods. The system is trained on a dataset generated from these sensor readings, and each model is evaluated for its gesture recognition accuracy. Among the tested models, the Ensemble method achieved the highest classification accuracy of 94.6%, making it the most effective for real-time sign language recognition. This system not only bridges the communication gap for deaf-mute individuals but also represents a significant step toward more inclusive technologies for society.

Introduction

There are many deaf and vocally impaired (VI) people in the world. Sign language (SL) is their primary means of expressing feelings, ideas, and thoughts, using a combination of hand movements, postures, and facial expressions to facilitate communication. Sign language comprises three elements: manual features (hand gestures), non-manual features (facial expressions and body posture), and finger spelling (spelling out words from the local language) [1]–[3]. There are two main approaches to sign language recognition, distinguished by their inputs: vision-based and sensor-based methods [4]. Vision-based techniques use camera-captured images, eliminating the need for additional sensors or specialized gloves, whereas sensor-based methods rely on gloves or sensors to recognize signs. Sign language gestures fall into two categories: image-based and motion-based. Key recognition elements include finger-spelling, non-manual features (e.g., facial expressions), and word-level signs [5]. In sign language, communication primarily involves the upper body (waist up), especially hand and finger movements. Globally, 70–90 million people have speech impairments, while in Bangladesh alone, around 2.4 million use sign language, a number that continues to rise [6]. The World Federation of the Deaf estimates that more than 70 million people worldwide use more than 300 sign languages [7]. It is also estimated that there are approximately 466 million deaf and mute people in the world, 34 million of whom are children [8]. The World Health Organization estimates that there are over 70 million deaf and mute people in the world, and that 32 million of the 360 million deaf people globally are young children. Moreover, by 2050, it is predicted that one in four people will have some form of hearing loss [9]. For deaf-mute people, some researchers have built models based on direct object detection or object movement, such as DC motor speed and direction control [10] and home appliance control [11]–[13]. Among vision-based models, the most common approaches are hand posture recognition [1], [14], lip posture, body posture [15], head movement [16], and image processing [17]. Considerable research has been conducted on designing smart systems that convert sign language into speech. Approximately 20 years ago, researchers began studying automatic sign language recognition (SLR), with a focus on American [18], Australian [19], and Korean [20] sign languages. Since then, numerous Arabic [21], British [22], Chinese [23], French [24], and German [25] systems have been created. However, very few attempts have been made to convert Bangla Sign Language (BSL) to speech.

Previous research related to sign language is based on either sensors or algorithms. The authors in [26] developed a smart hand gesture recognition system using the wearable Magic Ring, which translates gestures into Japanese sign language; using the KNN algorithm, the system achieves 85.2% accuracy with both hands, compared to 77.4% with just the right hand [27]. Starner and Pentland [28] developed a hand motion detection-based machine learning algorithm in which a webcam captures input video. The video is pre-processed to enhance the RGB color space and convert it to YCbCr; however, the system sometimes fails to detect the motion of the hands, lips, or other parts of the body. To identify Indian Sign Language, an sEMG-based system uses a CRF algorithm, achieving 74.33% accuracy for hand gestures with five sensors [29]. A wearable glove-based device was proposed by the authors in [30], enabling real-time recognition of English alphabet hand gestures in sign language. Another work develops an Android app that translates American Sign Language into text in real time: the app captures images via the smartphone camera, segments the skin using YCbCr, extracts features with HOG, and classifies signs using an SVM, achieving 89.54% accuracy [31]. Hatibaruah et al. [32] developed a sign language interpreter that translates gestures into text on a display; using histogram backprojection for image segmentation and CNNs trained on an Indian Sign Language database of 26 alphabets and 10 digits, the system achieved a testing accuracy of 89.89%. Abiyev et al. [33] developed an American Sign Language translator to assist mute individuals, using SSD for hand detection and a CNN to translate signs into text from a fingerspelling dataset. The authors of [34] proposed a real-time sign language translator using R-CNN, achieving 92.25% accuracy in converting diverse sign language gestures into voice outputs. Another study presents a bidirectional sign language translation system using an NLP-based deep learning approach to convert gestures to audio and spoken words to animated 3D signs [35]. Soji and Kamalakannan [36] enhanced a CNN-based model for Indian Sign Language recognition, focusing on message-conveying gestures, and achieved 90.1% accuracy. To enhance communication, a hand gesture recognition system for sign language integrates FFNN and HMM algorithms with voice processing capabilities [37]. Lu et al. [38] developed a YoBu glove to identify gesture movements, using an ELM kernel-based algorithm with 18 IMU sensors; the system identified gestures from 54 extracted features and achieved 91.2% accuracy. Sriram and Nithiyanandham [39] introduced a glove with 5 accelerometers and Bluetooth that decodes ASL gestures through axis orientation and a mobile app that converts motions to text and speech. Another study uses a data glove with tilt sensors, an accelerometer, and Bluetooth to translate Malaysian Sign Language gestures [40]. The authors of [41] proposed the YOLOv5s algorithm to detect hand gesture movements, achieving 92.30% accuracy in recognizing 29 letters of Turkish Sign Language. The author of [42] proposed the realization of Turkish Sign Language expressions using a humanoid robotic hand. In [43], the author used flashcards with Turkish words and pictures to elicit sign language expressions; results showed that 13-year-olds understood both the words and pictures, while 8-year-olds only recognized the pictures and not the written words or their meanings.

Most previous research has focused on image processing for sign language recognition, but sensor-based models, despite their challenges, offer greater accuracy by avoiding issues such as background noise and gesture errors. We designed a system for speech-impaired users, using a machine learning model trained on data from flex sensors and an accelerometer. The receiver unit contains two microcontrollers: the first receives data from the transmitter and communicates it to the second via the I2C protocol. The data is collected and plotted in MATLAB for specific signs, and this process is repeated to gather sufficient training data.

Research Method

System Block Diagram

Fig. 1 presents the system block diagram, and Fig. 2 shows the simplified block diagram of the data acquisition system. The system utilizes five flex sensors, a 3-axis accelerometer, two Arduino Nanos, an Arduino Uno, a push button, a buzzer, two Bluetooth modules, and a MATLAB app. It is divided into two main parts: the transmitter and the receiver units. The transmitter unit includes the flex sensors, the 3-axis accelerometer, an Arduino Nano, a push button, a buzzer, and a Bluetooth master module. It captures data from the sensors, which are processed by Microcontroller-1 and transmitted via Bluetooth. The receiver unit, equipped with a Bluetooth slave module, receives the data and forwards it to Microcontroller-2, which then passes it to Microcontroller-3 using the I2C protocol. The data is subsequently sent to the MATLAB app for visualization, where it is plotted as curves. If the MATLAB app is correctly configured, the data is automatically saved to an Excel file. This data is then used to train a machine learning model, making the system fully functional.
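To make the MATLAB side of this pipeline concrete, the following is a minimal sketch of acquiring one gesture frame over the Bluetooth serial link and appending it to a spreadsheet. The COM port, baud rate, frame layout (one comma-separated flex, ax, ay, az reading per line, 257 lines per gesture), file name, and the use of writecell are illustrative assumptions rather than details of our exact implementation.

```matlab
% Minimal sketch of the MATLAB-side acquisition of one gesture frame.
% Port name, baud rate, frame layout, and file name are assumptions.
s = serialport("COM3", 9600);          % hypothetical COM port and baud rate
configureTerminator(s, "LF");

nSamples = 257;                        % samples per gesture frame (from the paper)
frame = zeros(nSamples, 4);            % assumed column order: flex, ax, ay, az
for k = 1:nSamples
    sampleLine = readline(s);          % one "flex,ax,ay,az" line per sample
    frame(k, :) = str2double(split(sampleLine, ","))';
end

plot(frame);                           % quick visual check of the gesture curves
legend("flex", "a_x", "a_y", "a_z");

% Append the flattened frame, tagged with its gesture label, to the
% spreadsheet used later for training.
label = "Thirsty";
writecell([{char(label)}, num2cell(frame(:)')], "gesture_dataset.xlsx", ...
    "WriteMode", "append");
```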

Fig. 1. Block diagram of our system.

Fig. 2. Simplified block diagram for the data acquisition system.

Circuit Diagram of the System

In Fig. 3, the transmitter circuit includes a push button, buzzer, Bluetooth master module, Arduino Nano, five flex sensors, an accelerometer, and a battery. The flex sensors and accelerometer are mounted on the arm; the accelerometer captures hand movements, while the flex sensors detect finger positions. Together, they generate data sent to Microcontroller-1, which transmits it via its TX pin to the Bluetooth module. The Bluetooth master module then sends this data to the receiver unit's slave module. The transmitter is powered by a battery, though it was initially connected directly to a laptop. Pressing the push button activates the buzzer, signaling the start of data collection. A second beep indicates the end of data input, after which the data is transmitted to the receiver. As shown in Fig. 4, the receiver section consists of two microcontrollers: an Arduino Nano and an Arduino Uno. The Arduino Nano is connected to the Bluetooth slave module via its RX pin. When the transmitter's Bluetooth master sends data, the receiver's Bluetooth module captures it and passes it to Microcontroller-2, which forwards it to Microcontroller-3 via the I2C protocol. The data is then transmitted in digital format to the MATLAB app for further processing.

Fig. 3. Circuit diagram for transmitter unit.

Fig. 4. Circuit diagram for receiver unit.

System Algorithm

As shown in Fig. 5, the transmitter processes 257 data points from the sensors: 256 from the accelerometer and one from the flex sensors. For analysis, 25 data points are sampled for various accelerometer positions, while the flex sensor value remains nearly constant. After collecting the data, the Bluetooth master module transmits it to the receiver. As illustrated in Fig. 6, processing starts when data arrive at the receiver unit from the transmitter; the data are then forwarded over the I2C protocol and sent on to the MATLAB app.

Fig. 5. Flow chart of data acquisition and transmitter unit.

Fig. 6. Flow chart of receiver unit.

Implementation, Results and Discussion

This section describes the development of the prototype, the testing of the system, and the analysis of the results.

Prototype

Fig. 7 illustrates the transmitter section of our system. The image on the right showcases the wearable device, while the image on the left highlights the transmitter circuitry. Vocally impaired individuals use a wearable glove equipped with a 3-axis accelerometer to detect gesture movements. This sensing device measures acceleration across three perpendicular axes (X, Y, Z), enabling precise detection of motion, orientation, and vibration in three-dimensional space. The core of the device is a MEMS (Micro-Electro-Mechanical System) sensor that converts physical movement into electrical signals, which are processed by a microcontroller. Flex sensors are also included to measure the amount of finger bending. The prototype includes a power management system and communication interfaces such as I2C and SPI. The accelerometer data is sampled across various positions, with the flex sensor values remaining relatively stable. Once the data is collected, the Bluetooth master module transmits it from the transmitter section for further analysis. Fig. 8 depicts the receiver section of our system, from which the data is forwarded in digital format to the MATLAB app for subsequent processing.

Fig. 7. Transmitter section.

Fig. 8. Receiver section.

Training of Classification Model

Each sign language gesture in the dataset contains 25 occurrences. Since there are 20 different sign language gestures, the dataset comprises a total of 500 occurrences. Each data instance includes a flex sensor vector along with three vectors that provide acceleration data in the x, y, and z directions, and each vector contains 257 samples. The figures graphically depict information about the sign language gestures for “Hi,” “How are you,” “Good,” “Watch Me,” “You,” “Come,” “Nice,” “Thirsty,” “Sit,” “Stand Up,” “No,” “Water,” “Call Me,” “Goodbye,” “Smile,” “Victory,” “Sorry,” “Sick,” “Change,” and “Love.” Figs. 9–13 show five of these cases in that order.
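As a minimal illustration of this data structure, the sketch below loads the 500 recorded instances from the acquisition spreadsheet and separates the flex and acceleration vectors. The file name and column layout (label first, then the four 257-sample vectors) carry over from the earlier acquisition sketch and are assumptions, not a description of our exact files.

```matlab
% Load the 500 instances (20 gestures x 25 repetitions). Assumed layout:
% column 1 = gesture label, then flex, ax, ay, az vectors of 257 samples each.
raw    = readcell("gesture_dataset.xlsx");
labels = categorical(raw(:, 1));        % 500x1 gesture names
data   = cell2mat(raw(:, 2:end));       % 500 x (4*257) numeric matrix

n    = 257;                             % samples per vector
flex = data(:, 1:n);
ax   = data(:, n+1:2*n);
ay   = data(:, 2*n+1:3*n);
az   = data(:, 3*n+1:4*n);
```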

Fig. 9. Case 1, 'Hi': (left) raw sensor data and generated pulses after the thresholding operation; (right) execution of the hand and finger command in the hardware.

Fig. 10. Case 2, 'How are you': (left) generated pulses; (right) hand and finger command.

Fig. 11. Case 3, 'Good': (left) generated pulses; (right) hand and finger command.

Fig. 12. Case 4, 'Watch Me': (left) generated pulses; (right) hand and finger command.

Fig. 13. Case 5, 'You': (left) generated pulses; (right) hand and finger command.

Data Acquisition Process in MATLAB

This section details the data acquisition process in MATLAB. We employed a dataset comprising 20 distinct sign language gestures, resulting in 20 unique data curves, one for each corresponding signal. For demonstration purposes, a representative curve for “Thirsty” is shown in Fig. 14. Each sign language gesture was sampled 25 times, leading to a total dataset of 20 × 25 = 500 samples.
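A rough sketch of how these 20 × 25 = 500 recordings could be gathered in a loop is shown below, reusing the single-frame acquisition from the earlier sketch. The acquireGestureFrame helper and the prompt logic are hypothetical, and the gesture names simply follow the list given above.

```matlab
% Collect 25 repetitions of each of the 20 gestures and append them to the
% training spreadsheet. acquireGestureFrame is a hypothetical wrapper around
% the readline loop shown earlier (it returns a 257x4 matrix).
gestures = ["Hi" "How are you" "Good" "Watch Me" "You" "Come" "Nice" ...
            "Thirsty" "Sit" "Stand Up" "No" "Water" "Call Me" "Goodbye" ...
            "Smile" "Victory" "Sorry" "Sick" "Change" "Love"];
nReps = 25;
for g = gestures
    for r = 1:nReps
        fprintf("Perform '%s' (%d of %d), then press the push button.\n", g, r, nReps);
        frame = acquireGestureFrame(s);   % hypothetical helper; s is the serialport object
        writecell([{char(g)}, num2cell(frame(:)')], "gesture_dataset.xlsx", ...
            "WriteMode", "append");
    end
end
```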

Fig. 14. MATLAB data reception for 'Thirsty.'

Feature Plot

Fig. 15 presents a plot comparing two features: DCx, representing the average x-value, and DCy, representing the average y-value. This plot illustrates the relationship between the average x and y coordinates. The dataset consists of 20 sign language gestures, with the samples displayed as clusters. In most cases, the data points are tightly grouped, indicating that the output is accurate. However, a few outliers have deviated from their respective clusters, representing errors. Despite these outliers, the plot suggests that the overall accuracy of our model is robust.
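The sketch below shows how such a plot could be produced in MATLAB from the matrices loaded earlier. Treating DCx and DCy as the per-instance means of the x and y acceleration channels is our reading of the description above, so the exact feature definitions should be taken as assumptions.

```matlab
% DC (average) features per instance and the corresponding cluster plot.
DCx = mean(ax, 2);                       % average x-acceleration per instance
DCy = mean(ay, 2);                       % average y-acceleration per instance

figure;
gscatter(DCx, DCy, labels);              % one colour per gesture class
xlabel("DCx (mean a_x)"); ylabel("DCy (mean a_y)");
title("Feature plot: 20 gesture clusters");
```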

Fig. 15. Feature plot.

Predicting Plot

Predictive modeling is a widely used statistical technique that builds a model from historical and current data to forecast future outcomes. In Fig. 16, some data points are marked with cross symbols, indicating errors in our model. However, the majority of commands were executed correctly, demonstrating that most samples were accurately identified. Data points that appear distant from their clusters are typically considered inaccurate, as they represent instances where identification was unsuccessful.
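The following sketch mimics this step under the same assumptions as the earlier ones: a simple feature matrix (per-channel means and standard deviations, an assumed feature set), a hold-out split, a bagged-tree ensemble (the model family that performed best for us), and crosses marking misclassified test points.

```matlab
% Assumed feature set: per-channel means and standard deviations.
features = [mean(flex,2) mean(ax,2) mean(ay,2) mean(az,2) ...
            std(flex,0,2) std(ax,0,2) std(ay,0,2) std(az,0,2)];

cv    = cvpartition(labels, "HoldOut", 0.2);       % 80/20 train-test split
mdl   = fitcensemble(features(training(cv),:), labels(training(cv)), ...
                     "Method", "Bag");             % bagged decision trees
Xtest = features(test(cv),:);
Ytest = labels(test(cv));
Ypred = predict(mdl, Xtest);

wrong = Ypred ~= Ytest;                            % misclassified instances
figure; hold on;
gscatter(Xtest(:,2), Xtest(:,3), Ytest);           % clusters in the DCx-DCy plane
plot(Xtest(wrong,2), Xtest(wrong,3), "kx", "MarkerSize", 10, "LineWidth", 1.5);
xlabel("DCx"); ylabel("DCy");
title("Predictions (crosses mark misclassified samples)");
```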

Fig. 16. Prediction model.

ROC Curve

Figs. 17 and 18 show the ROC curves for two of the 20 sign language gestures analyzed. Fig. 17 illustrates the ROC curve for the “Victory” sign, which achieved 100% accuracy, as all decisions were correct. In contrast, Fig. 18 shows the ROC curve for the “Come” sign, where some decisions were incorrect, resulting in less than 100% accuracy; this is reflected in the slight bend in the curve.
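A one-vs-rest ROC curve such as Fig. 17 can be produced from the hold-out predictions of the previous sketch; the use of perfcurve and a per-class score column is a generic recipe rather than our exact plotting code.

```matlab
% One-vs-rest ROC for the "Victory" class from the hold-out predictions.
[~, scores] = predict(mdl, Xtest);                 % per-class scores
classIdx    = find(mdl.ClassNames == "Victory");
[fpr, tpr, ~, auc] = perfcurve(Ytest == "Victory", scores(:, classIdx), true);

figure;
plot(fpr, tpr, "LineWidth", 1.5);
xlabel("False positive rate"); ylabel("True positive rate");
title(sprintf("ROC for 'Victory' (AUC = %.3f)", auc));
```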

Fig. 17. ROC curve for ‘Victory.’

Fig. 18. ROC curve for ‘Come.’

Confusion Matrix

Fig. 19 presents the confusion matrix, which provides insight into the accuracy of our model by showing the frequency of correct and incorrect classifications. For instance, out of 25 samples for the “Come” sign, the model correctly identified 24 instances but mistakenly classified one as “Smile.” Similarly, the matrix shows how the model performed for other signs such as “Call Me,” “Change,” “Good,” and “Goodbye.” Notably, the “Victory” sign was accurately detected in all 25 instances, while the “Watch Me” sign had the most misclassifications, with the model incorrectly identifying it as “Good” twice, “Love” once, “Nice” once, “Sick” twice, “Smile” twice, “Thirsty” once, and “Water” once. The confusion matrix is crucial for evaluating the accuracy and reliability of a model.
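A matrix like Fig. 19 can be generated directly from the hold-out labels and predictions in the earlier sketch; confusionchart is the standard MATLAB call for this and is used here only as a generic example.

```matlab
% Confusion matrix of true vs. predicted gesture labels on the hold-out set.
figure;
cm = confusionchart(Ytest, Ypred);
cm.Title      = "Gesture recognition confusion matrix";
cm.RowSummary = "row-normalized";                  % per-class recall in the margin
```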

Fig. 19. Confusion matrix.

Discussion

Our model addresses the challenges faced by the deaf-mute community, achieving an accuracy of 94.6%, which surpasses the 75%–90% accuracy range of the existing models listed in Table II. We employed five flex sensors (one per finger) to capture finger bending and an accelerometer for hand motion detection, covering 20 sign language gestures with 25 samples each, for a total of 500 samples. Table I reports the gesture recognition accuracy of the tested models, and Table II compares accuracies from previous research, indicating that our model achieves the highest accuracy among them. A sketch of how these model families could be compared under a common cross-validation protocol is given after Table II.

SL no. Model family Classification model Accuracy (%)
1 SVM Linear SVM 90.6
2 SVM Quadratic SVM 90.2
3 SVM Cubic SVM 88.6
4 SVM Medium Gaussian SVM 88.0
5 KNN Weighted KNN 87.0
6 Linear discriminant Linear discriminant 94.0
7 Ensemble Bagged trees 94.6
8 Ensemble Subspace discriminant 91.8
Table I. Gesture Recognition Accuracy
Sign language Model Interface Output Accuracy (%)
Australian [19] IBL Glove Gestures 80.0
Arabic [21] HMM Image Word 90.6
British [22] FVs Image Word 87.67
Chinese [23] HMM Glove, Image Word 86.3
French [24] HMM Glove, Image Gestures 81.6
German [25] HMM Image Word 81.0
Japanese [26] KNN Ring, Motion Word 85.20
Indian [28] CRF Glove Word 74.33
English [30] Glove Word
American [31] SVM Image Word 89.54
Indian [32] CNN Image Word 89.89
American [33] CNN Image Word
[34] R-CNN Image Voice 92.25
Indian [36] CNN Image Word 90.1
[37] HMM Image Voice 80.0
Lu et al. [38] ELM Glove, Motion Numbers 91.2
American [39] OpenCV Glove, Motion Alphabet
Malaysian [40] HMM Glove, tilt Gestures 78.3
Turkish [41] YOLOv5s Hand gesture Word 92.3
Turkish [42] Hand gesture Gestures
Turkish [43] Hand gesture Gestures
Proposed EBT Glove, Motion Sign 94.6
Table II. Comparison with Previous Research
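The sketch below illustrates one way the four model families in Table I could be compared under a common 5-fold cross-validation protocol. The hyperparameters mirror common MATLAB Classification Learner presets and are assumptions; they are not the exact settings behind the reported accuracies.

```matlab
% Cross-validated comparison of the four model families in Table I.
% Assumes the `features` matrix and `labels` vector from the earlier sketches.
models = {
    "Linear SVM",          @() fitcecoc(features, labels, "Learners", templateSVM("KernelFunction", "linear"))
    "Weighted KNN",        @() fitcknn(features, labels, "NumNeighbors", 10, "DistanceWeight", "squaredinverse")
    "Linear discriminant", @() fitcdiscr(features, labels)
    "Bagged trees",        @() fitcensemble(features, labels, "Method", "Bag")
    };
for i = 1:size(models, 1)
    cvmdl = crossval(models{i, 2}(), "KFold", 5);              % 5-fold cross-validation
    fprintf("%-20s accuracy: %.1f%%\n", models{i, 1}, 100 * (1 - kfoldLoss(cvmdl)));
end
```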

Conclusion

In this research, we developed a sign language recognition system that incorporates a flex sensor on each finger and an accelerometer on the hand. These sensors capture hand gestures, translating them into sign language by measuring finger angles and hand slopes. The system has some limitations, including sensitivity to temperature changes, which can affect the accuracy of the output. Furthermore, the Bluetooth module used presents challenges such as slower data transfer rates, limited range, and interference, which affect the overall system performance. The current data acquisition circuit covers one hand; it can be scaled to a dual-hand setup for more advanced functionality in the future.

References

  1. Dahmani D, Larabi S. User-independent system for sign language finger spelling recognition. J Vis Commun Image Represent. 2014 Jul 1;25(5):1240–50.
  2. Lee BG, Lee SM. Smart wearable hand device for sign language interpretation system with sensors fusion. IEEE Sens J. 2017 Dec 4;18(3):1224–32.
  3. Cheok MJ, Omar Z, Jaward MH. A review of hand gesture and sign language recognition techniques. Int J Mach Learn Cybern. 2019 Jan 31;10:131–53.
  4. Youme SK, Chowdhury TA, Ahamed H, Abid MS, Chowdhury L, Mohammed N. Generalization of bangla sign language recognition using angular loss functions. IEEE Access. 2021 Dec 10;9: 165351–65.
  5. Shurid SA, Amin KH, Mirbahar MS, Karmaker D, Mahtab MT, Khan FT, et al. Bangla sign language recognition and sentence building using deep learning. 2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), pp. 1–9, IEEE, 2020 Dec 16.
  6. Sarker S, Hoque MM. An intelligent system for conversion of bangla sign language into speech. 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 513–8, IEEE, 2018 Oct 27.
  7. Adeyanju IA, Bello OO, Adegboye MA. Machine learning methods for sign language recognition: a critical review and analysis. Intell Syst Applicat. 2021 Nov 1;12:200056.
  8. Haider I, Mehdi MA, Amin A, Nisar K. A hand gesture recognition based communication system for mute people. 2020 IEEE 23rd International Multitopic Conference (INMIC), pp. 1–6, IEEE, 2020 Nov 5.
  9. Mohamad NB. A UX evaluation model of hearing-impaired children’s mobile learning applications.
  10. Kader MA, Hasan MJ, Emon MA, Karim A, Mahmud S, Tahsin T. Hand gesture based speed and direction control of DC motor using machine learning algorithm. 2022 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 284–9, IEEE, 2022 Feb 26.
  11. Kader MA, Orna SS, Tasnim Z, Hassain MM. Wireless need sharing and home appliance control for quadriplegic patients using head motion detection via 3-axis accelerometer. Indones J Electr Eng Inform. 2024 Sep 11;12(3):558–74.
  12. Kader MA, Akter Z, Fatema K, Akter M, Hassain MM. Head motion controlled mouse with home appliance control for quadriplegic patient. 2024 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 1–6, IEEE, 2024 Oct 26.
  13. Muranaka Y, Al-Sada M, Nakajima T. A home appliance control system with hand gesture based on pose estimation. 2020 IEEE 9th Global Conference on Consumer Electronics (GCCE), pp. 752–5, IEEE, 2020 Oct 13.
  14. Hassain MM. IoT based smart walking stick for enhanced mobility of the visually impaired. 2024 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 1–6, IEEE, 2024 Oct 26.
  15. Tunga A, Nuthalapati SV, Wachs J. Pose-based sign language recognition using GCN and BERT. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 31–40, 2021.
  16. Hassain MM, Mazumder MF, Arefin MR, Kader MA. Design and implementation of smart head-motion controlled wheelchair. 2023 IEEE Engineering Informatics, pp. 1–8, IEEE, 2023 Nov 22.
  17. Mohandes M, Deriche M, Liu J. Image-based and sensor-based approaches to Arabic sign language recognition. IEEE Trans Hum Mach Syst. 2014 May 12;44(4):551–7.
  18. Starner T, Pentland A. Real-time american sign language recognition from video using hidden markov models. Proceedings of International Symposium on Computer Vision-ISCV , pp. 265–70, IEEE, 1995 Nov 21.
  19. Kadous MW. Machine recognition of Auslan signs using Pow- erGloves: towards large-lexicon recognition of sign language. Proceedings of the Workshop on the Integration of Gesture in Language and Speech, vol. 165, pp. 165–74, Wilmington: DE, 1996 Oct.
  20. Kim JS, Jang W, Bien Z. A dynamic gesture recognition system for the Korean sign language (KSL). IEEE Transact Syst, Man, Cybernet, Part B (Cybernetics). 1996 Apr;26(2):354–9.
  21. Al-Rousan M, Assaleh K, Tala’a A. Video-based signer-independent Arabic sign language recognition using hidden Markov models. Appl Soft Comput. 2009 Jun 1;9(3):990–9.
  22. Bowden R, Windridge D, Kadir T, Zisserman A, Brady M. A linguistic feature vector for the visual interpretation of sign language. Computer Vision-ECCV 2004: 8th European Conference on Computer Vision, Proceedings, Part I 8 2004, pp. 390–401, Prague, Czech Republic, Berlin Heidelberg: Springer, 2004 May 11–14.
  23. Gao W, Fang G, Zhao D, Chen Y. A Chinese sign language recognition system based on SOFM/SRN/HMM. Pattern Recognit. 2004 Dec 1;37(12):2389–402.
  24. Aran O, Burger T, Caplier A, Akarun L. A belief-based sequential fusion approach for fusing manual signs and non-manual signals. Pattern Recognit. 2009 May 1;42(5):812–22.
  25. Bauer B, Kraiss KF. Video-based sign recognition using self-organizing subunits. 2002 International Conference on Pattern Recognition, vol. 2, pp. 434–7, IEEE, 2002 Aug 11.
  26. Kuroki K, Zhou Y, Cheng Z, Lu Z, Zhou Y, Jing L. A remote conversation support system for deaf-mute persons based on bimanual gestures recognition using finger-worn devices. 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom workshops), pp. 574–8, IEEE, 2015 Mar 23.
  27. Kudrinko K, Flavin E, Zhu X, Li Q. Wearable sensor-based sign language recognition: a comprehensive review. IEEE Rev Biomed Eng. 2020 Aug 26;14:82–97.
  28. Choudhury A, Talukdar AK, Sarma KK. A conditional random field based Indian sign language recognition system under complex background. 2014 Fourth International Conference on Communication Systems and Network Technologies, pp. 900–4, IEEE, 2014 Apr 7.
  29. Gupta R. On the selection of number of sensors for a wearable sign language recognition system. 2019 Twelfth International Conference on Contemporary Computing (IC3), pp. 1–6, IEEE, 2019 Aug 8.
  30. Jani AB, Kotak NA, Roy AK. Sensor based hand gesture recognition system for English alphabets used in sign language of deaf-mute people. 2018 IEEE SENSORS, pp. 1–4, IEEE, 2018 Oct 28.
  31. Lahoti S, Kayal S, Kumbhare S, Suradkar I, Pawar V. Android based American sign language recognition system with skin segmentation and SVM. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pp. 1–6, IEEE, 2018 Jul 10.
  32. Hatibaruah D, Talukdar AK, Sarma KK. A static hand gesture based sign language recognition system using convolutional neural networks. 2020 IEEE 17th India Council International Conference (INDICON), pp. 1–6, IEEE, 2020 Dec 10.
  33. Abiyev R, Idoko JB, Arslan M. Reconstruction of convolutional neural network for sign language recognition. 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), pp. 1–5, IEEE, 2020 Jun 12.
  34. Sharon E, Paulraj GJ, Jebadurai IJ, Merlin C. Sign language translation to natural voice output: a machine learning perspective. 2024 International Conference on Cognitive Robotics and Intelligent Systems (ICC-ROBINS), pp. 332–9, IEEE, 2024 Apr 17.
  35. Valarmathi R, Surya PJ, Balaji P, Ashik K. Animated sign language for people with speaking and hearing disability using deep learning. 2024 International Conference on Communication, Computing and Internet of Things (IC3IoT), pp. 1–5, IEEE, 2024 Apr 17.
  36. Soji ES, Kamalakannan T. Efficient Indian sign language recognition and classification using enhanced machine learning approach. Int J Critic Infrastruct. 2024;20(2):125–38.
  37. Pandey A, Chauhan A, Gupta A. Voice based sign language detection for dumb people communication using machine learning. J Pharm Negat Results. 2023 Apr 1;14(2).
  38. Lu D, Yu Y, Liu H. Gesture recognition using data glove: an extreme learning machine method. 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1349–54, IEEE, 2016 Dec 3.
  39. Sriram N, Nithiyanandham M. A hand gesture recognition based communication system for silent speakers. 2013 International Conference on Human Computer Interactions (ICHCI), pp. 1–5, IEEE, 2013 Aug 23.
  40. Shukor AZ, Miskon MF, Jamaluddin MH, Bin Ali F, Asyraf MF, Bin Bahar MB. A new data glove approach for Malaysian sign language detection. Procedia Comput Sci. 2015 Jan 1;76:60–7.
  41. Bankur F, Kaya M. Deep learning based recognition of Turkish sign language letters with unique data set. Turkish J Sci Technol. 2022;17(2):251–60.
  42. Gül M. Realization of Turkish sign language expressions with the developed humanoid robot. Veri Bilimi. 2021 Aug 8;4(2):80–4.
  43. Mahmoudi F. Acquisition of sign language and literacy skills. Int J Educat Spect. 2019 Jul 8;1(2):70–87.