Amharic Sign Language Data Sets
Description
Hearing-impaired people use sign language to communicate with each other as well as with other communities, but they often cannot communicate with hearing people, since most people without a hearing disability do not understand sign language. A system that recognizes sign language and converts it to text is therefore needed. This data set was collected from different sign language teachers for the recognition of Amharic Sign Language as Amharic characters. After gathering data with different backgrounds and hand positions, the data was prepared for machine learning. The first preprocessing step is frame extraction from the videos, followed by resizing and feature extraction. Using the LabelImg tool, we annotated the extracted frames to create XML, CSV, and TFRecord files. Finally, we fed the data into two different models: Faster R-CNN and the Single Shot MultiBox Detector (SSD). From the results of the models, we found that for recognizing Amharic Sign Language as characters the SSD model performed better in accuracy than Faster R-CNN, although Faster R-CNN also achieved good accuracy. Anyone can use the data to recognize Amharic Sign Language as Amharic characters from images, video, and in real time. We used 10 classes for recognizing the characters; the research will continue to include the remaining six orders, as well as words and sentences used in sign language, toward a full-fledged sign language recognition system.
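The following is a minimal sketch of the frame-extraction and resizing step described above. The directory names, the every-fifth-frame sampling rate, and the 300x300 target size are illustrative assumptions, not part of the released data set.

# Extract frames from the recorded sign-language videos and resize them.
# VIDEO_DIR, FRAME_DIR, TARGET_SIZE, and EVERY_N are assumed values.
import os
import cv2

VIDEO_DIR = "videos"      # assumed location of the recorded clips
FRAME_DIR = "frames"      # assumed output directory for extracted frames
TARGET_SIZE = (300, 300)  # assumed resize resolution
EVERY_N = 5               # keep every 5th frame (assumed sampling rate)

os.makedirs(FRAME_DIR, exist_ok=True)

for video_name in os.listdir(VIDEO_DIR):
    cap = cv2.VideoCapture(os.path.join(VIDEO_DIR, video_name))
    index = 0
    saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % EVERY_N == 0:
            resized = cv2.resize(frame, TARGET_SIZE)
            out_name = f"{os.path.splitext(video_name)[0]}_{saved:04d}.jpg"
            cv2.imwrite(os.path.join(FRAME_DIR, out_name), resized)
            saved += 1
        index += 1
    cap.release()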
Files
Steps to reproduce
First, extract frames from the recorded videos, then resize the frames and extract features. Next, annotate the extracted frames with the LabelImg tool to create XML files, and from these generate the CSV and TFRecord files. Finally, feed the data into the two models, Faster R-CNN and the Single Shot MultiBox Detector (SSD), and compare their results. In our experiments, for recognizing Amharic Sign Language as characters, SSD performed better in accuracy than Faster R-CNN, although Faster R-CNN also achieved good accuracy.
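The sketch below shows one way to convert the LabelImg Pascal VOC XML annotations into a single CSV, from which TFRecord files can then be generated. The paths and the output file name are assumptions for illustration only.

# Collect the LabelImg XML annotations into one CSV of bounding boxes.
# "frames/*.xml" and "annotations.csv" are assumed paths.
import csv
import glob
import xml.etree.ElementTree as ET

rows = []
for xml_path in glob.glob("frames/*.xml"):
    root = ET.parse(xml_path).getroot()
    filename = root.find("filename").text
    size = root.find("size")
    width = int(size.find("width").text)
    height = int(size.find("height").text)
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        rows.append([
            filename, width, height,
            obj.find("name").text,  # class label (Amharic character)
            int(box.find("xmin").text), int(box.find("ymin").text),
            int(box.find("xmax").text), int(box.find("ymax").text),
        ])

with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "width", "height", "class",
                     "xmin", "ymin", "xmax", "ymax"])
    writer.writerows(rows)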