PkSLMNM: Pakistan Sign Language Manual and Non-Manual Gestures Dataset

Published: 26 September 2022 | Version 2 | DOI: 10.17632/m3m9924p3v.2
Sameena Javaid


Cite This Article: S. Javaid and S. Rizvi, "A novel action transformer network for hybrid multimodal sign language recognition," Computers, Materials & Continua, vol. 74, no. 1, pp. 523–537, 2023.

Description: Sign language is a non-verbal form of communication used by people with impaired hearing and speech. Signers also use facial actions to convey sign language prosody, similar to intonation in spoken languages. Sign Language Recognition (SLR) typically relies on hand signs alone; however, facial expressions and body language also play an important role in communication and have not been analyzed to their full potential. This dataset comprises manual (hand signs) and non-manual (facial expressions and body movements) gestures of Pakistan Sign Language (PSL). It contains videos of 7 basic affective expressions performed by 100 healthy individuals, provided in the easily accessible .MP4 format, suitable for training and testing robust models for real-time video applications. The data can also support facial feature detection, classification of subjects by gender and age, and analysis of an individual's interest and emotional state.
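Since the clips are plain .MP4 files, a typical preprocessing step for video models is to decode each clip and sample a fixed number of frames. The sketch below shows one way to do this; the expression class names and the frame-sampling parameters are illustrative assumptions, not part of the dataset's documentation.

```python
import numpy as np

# Hypothetical label set for the 7 basic affective expressions; the
# actual class names used in the dataset may differ.
EXPRESSIONS = ["happy", "sad", "angry", "surprised",
               "fearful", "disgusted", "neutral"]

def sample_frames(frames, num_samples=16):
    """Uniformly sample a fixed number of frames from a decoded clip.

    `frames` is a sequence of H x W x 3 images, e.g. as read from one of
    the dataset's .MP4 files with OpenCV's cv2.VideoCapture.
    """
    if len(frames) == 0:
        raise ValueError("empty clip")
    # Evenly spaced indices across the whole clip, endpoints included.
    idx = np.linspace(0, len(frames) - 1, num_samples).astype(int)
    return np.stack([frames[i] for i in idx])  # (num_samples, H, W, 3)
```

With OpenCV installed, the frame list for a clip can be built by opening the file with `cv2.VideoCapture(path)` and calling `read()` until it returns no frame, then passing the result to `sample_frames` to get a fixed-length tensor for training.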



Institution: Bahria University - Karachi Campus


Categories: Computer Vision, Machine Learning, Video Processing, Sign Language, Deep Learning