Indian Sign Language Dataset

Published: 14 November 2024 | Version 1 | DOI: 10.17632/yx7kdssfjp.1
Contributor:
RAMNATH M

Description

The ISL (Indian Sign Language) dataset used for training and evaluating sign language recognition models is composed of video samples that capture hand gestures representing specific words or phrases. The dataset aims to reflect the complexity and diversity of ISL, covering a wide range of commonly used signs to support robust training.

Dataset Composition
The ISL dataset includes:
- Video Samples: Short video clips in which signers perform specific signs or sequences of signs. Samples are captured from different perspectives and under varied lighting to improve the model's ability to generalize.
- Key Landmarks: Each video frame may be annotated or processed to extract key hand landmarks (e.g., positions of fingers and joints) using tools such as MediaPipe, enabling feature extraction for deep learning.
- Labels: Each video is labelled with the corresponding ISL word or phrase, forming the target variable for supervised learning.

Features and Variability
- Gesture Diversity: The dataset covers a range of signs, including common nouns, verbs, and everyday expressions.
- Multiple Signers: To improve robustness, the dataset includes recordings from multiple individuals with different hand shapes, signing speeds, and movement styles.
- Temporal Information: Each video is processed to preserve the temporal flow of gestures, which is essential for LSTM networks to capture sequential dependencies.

Preprocessing and Augmentation
To prepare the dataset for training (illustrative code sketches of these steps follow this description):
- Frame Extraction: Video clips are split into frames to create a sequence input for the LSTM model.
- Landmark Detection: Tools such as MediaPipe detect and extract landmarks for each frame, converting video data into structured numerical features.
- Normalization and Augmentation: The data may be normalized for scale consistency and augmented, for example by flipping or rotating frames, to increase variability and improve the model's resilience to noise.

Dataset Challenges
- Complex Hand Movements: Sign languages, including ISL, involve intricate, simultaneous hand motions that require the model to detect fine-grained details.
- Background Variability: Ensuring consistent backgrounds, or handling varied backgrounds during training, is crucial for model accuracy.
- Lighting Conditions: The dataset includes different lighting settings so the model can adapt to real-world scenarios.

The ISL dataset forms the backbone for training the LSTM-driven deep learning model, ensuring that it learns from comprehensive and diverse examples and thereby achieves higher recognition accuracy and robust performance in real-world applications.
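The dataset itself ships no code; the following is a minimal Python sketch of the frame-extraction and landmark-detection steps described above, assuming OpenCV and MediaPipe Hands. The clip length (30 frames), single-hand setting, zero-padding, and file name are illustrative assumptions, not properties of the dataset.

```python
# Illustrative sketch (not part of the dataset): split a clip into frames
# and convert each frame into hand-landmark coordinates with MediaPipe Hands.
import cv2
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands

def video_to_landmarks(video_path, max_frames=30):
    """Return an array of shape (max_frames, 21 * 3): x, y, z per landmark.
    Frames where no hand is detected are filled with zeros."""
    cap = cv2.VideoCapture(video_path)
    sequence = []
    with mp_hands.Hands(static_image_mode=False, max_num_hands=1) as hands:
        while len(sequence) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads frames as BGR.
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                lm = result.multi_hand_landmarks[0].landmark
                sequence.append([c for p in lm for c in (p.x, p.y, p.z)])
            else:
                sequence.append([0.0] * 63)
    cap.release()
    # Zero-pad short clips so every sample has the same sequence length.
    while len(sequence) < max_frames:
        sequence.append([0.0] * 63)
    return np.array(sequence, dtype=np.float32)

# Hypothetical usage: landmarks = video_to_landmarks("hello.mp4")
```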
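A short sketch of the normalization and flip-style augmentation mentioned above, assuming MediaPipe's normalized [0, 1] landmark coordinates from the previous sketch; wrist-centering and horizontal mirroring are example transforms, not the dataset's prescribed pipeline.

```python
# Illustrative normalization / augmentation sketch (assumptions: input is a
# (num_frames, 63) array of normalized MediaPipe hand landmarks).
import numpy as np

def normalize_sequence(seq):
    """Center each frame's landmarks on the wrist (landmark 0) so that the
    hand's absolute position in the image does not affect the features."""
    frames = seq.reshape(len(seq), 21, 3)
    wrist = frames[:, :1, :]                 # (num_frames, 1, 3)
    return (frames - wrist).reshape(len(seq), -1)

def flip_horizontal(seq):
    """Mirror the x coordinate (x -> 1 - x) as a simple augmentation.
    Note: mirroring swaps left/right handedness, so it may not be valid for
    every sign; it is shown only as an example transform."""
    frames = seq.reshape(len(seq), 21, 3).copy()
    frames[:, :, 0] = 1.0 - frames[:, :, 0]
    return frames.reshape(len(seq), -1)
```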
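Finally, a minimal Keras sketch of the kind of LSTM sequence classifier the description refers to, assuming 30-frame sequences of 63 landmark features and a hypothetical 50 sign classes; the actual architecture trained on this dataset is not specified here.

```python
# Illustrative sketch (architecture assumed, not specified by the dataset):
# a stacked-LSTM classifier over per-frame landmark features.
import tensorflow as tf

NUM_FRAMES = 30      # frames per clip (assumption, matching the sketch above)
NUM_FEATURES = 63    # 21 landmarks x (x, y, z)
NUM_CLASSES = 50     # hypothetical number of sign labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    tf.keras.layers.LSTM(128, return_sequences=True),  # keep per-frame outputs
    tf.keras.layers.LSTM(64),                           # summarize the sequence
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Stacking two LSTM layers lets the first layer pass per-frame outputs to the second, which then condenses the whole gesture into a single vector before classification.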

Files

Categories

Computer in Society, First Language Use in Language Learning, Language

Licence