ALS Disease patient classification

Published: 21 February 2025 | Version 2 | DOI: 10.17632/fbhc38zzm9.2
Contributor:
Juan Pablo Hernandez

Description

The ALS voice dataset was collected at the Research and Clinical Center of Neurology and Neurosurgery (USC) and is designed to facilitate the study of speech impairments in patients with Amyotrophic Lateral Sclerosis (ALS). The dataset comprises 148 sustained vowel phonations, including both pathological (ALS-affected) and healthy control (HC) voice samples. These phonations are critical for understanding speech degradation in ALS, as the disease affects the motor neurons responsible for speech production. The collection provides a valuable resource for machine learning models, acoustic analysis, and medical diagnostics related to ALS.

Dataset Composition

The dataset is roughly balanced, with approximately half pathological voices (ALS patients) and half healthy voices (HCs). Each contributor was asked to produce sustained phonations of the vowels /a/ and /i/ at a comfortable pitch and loudness, holding the voice as steadily and for as long as possible. These vowels were chosen because they provide significant phonetic markers for evaluating voice stability, intensity, and frequency variations, all of which can be affected by neuromuscular degeneration in ALS.

Challenges and Considerations

While the dataset provides valuable insights, it has some limitations:
• Device Variation: Recordings were collected using different smartphones and headsets, so audio quality and background noise may vary between samples.
• Limited Data Size: 148 phonations are a useful starting point, but larger datasets are needed for more robust AI training and statistical analysis.
• Limited Vowel Coverage: The dataset contains only the vowels /a/ and /i/, which are informative but do not capture the full range of speech impairments seen in ALS.
• Speaker Variability: Individual differences in voice quality, pitch, and speaking habits introduce variability, requiring careful normalization during analysis (a minimal normalization sketch is shown after this description).

Future Directions

Expanding and improving this dataset could lead to significant advances in ALS diagnosis and speech therapy. Future efforts could include:
• Increasing the number of participants to enhance statistical power
• Expanding the phonetic range to include consonants and connected speech samples
• Using standardized recording devices to minimize technical variability
• Collecting longitudinal data to track speech changes over time in ALS patients

This dataset represents an important step forward in ALS research, providing researchers with voice recordings for studying neurological speech disorders. With applications in AI, clinical diagnostics, and remote monitoring, it has the potential to improve early detection, patient care, and disease-progression tracking for ALS.
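The sketch below illustrates one way to address the device-variation and speaker-variability points above: loading a phonation recording and scaling it to a common RMS level so that loudness differences between recording devices do not dominate later analysis. It is a minimal example, not part of the dataset or its official tooling; the file name is a hypothetical placeholder and the use of the soundfile package is an assumption.

```python
# Minimal sketch (not part of the dataset): load a phonation recording and
# apply RMS normalization to reduce device-dependent level differences.
import numpy as np
import soundfile as sf  # assumes the (py)soundfile package is installed

def load_and_normalize(path, target_rms=0.1):
    """Load a recording and scale it to a common RMS level."""
    signal, sample_rate = sf.read(path)   # samples as floats in [-1, 1]
    if signal.ndim > 1:                   # fold stereo to mono if needed
        signal = signal.mean(axis=1)
    rms = np.sqrt(np.mean(signal ** 2))
    if rms > 0:
        signal = signal * (target_rms / rms)  # equalize loudness across devices
    return signal, sample_rate

# Hypothetical usage with a placeholder file name:
# audio, sr = load_and_normalize("phonation_a_01.wav")
# print(sr)  # expected to be 44100 Hz per the recording methodology below
```

RMS normalization only equalizes overall level; it does not remove device-specific frequency responses or background noise, which may still require filtering or per-device calibration.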

Files

Steps to reproduce

Significance of the Dataset

This dataset is critical for research on neurological disorders affecting speech production, particularly ALS. ALS is a progressive neurodegenerative disease that affects motor neurons in the brain and spinal cord, leading to muscle weakness and eventual loss of voluntary movement, including speech and breathing. Many ALS patients experience dysarthria, a speech disorder characterized by slurred, weak, or effortful speech due to impaired control of the vocal tract muscles. By analyzing sustained vowel phonations, researchers can identify:
• Acoustic biomarkers of ALS progression
• Changes in speech articulation and phonation stability
• Early signs of bulbar dysfunction, which affects speech and swallowing

Applications of the Dataset

This dataset has several applications in clinical and computational research, including:
1. Speech Analysis for ALS Diagnosis and Monitoring
• Tracking the progression of speech degradation in ALS patients over time
• Developing objective metrics for ALS diagnosis based on voice characteristics
2. Machine Learning and AI Applications (a minimal classifier sketch follows this section)
• Training deep learning models to classify ALS vs. healthy speech
• Developing AI-based diagnostic tools that assist clinicians in early ALS detection
3. Telemedicine and Remote Monitoring
• Enabling ALS patients to monitor their condition from home via smartphone-based voice recordings
• Providing remote assessments for individuals in areas with limited access to neurological specialists
4. Phonetic and Acoustic Research
• Studying how ALS affects voice production, including fundamental frequency shifts, voice tremor, and phonation instability
• Comparing ALS-related speech changes with those seen in other neuromuscular disorders such as Parkinson's disease or multiple sclerosis

Recording Methodology

The phonations were recorded using various smartphones with standard headsets, keeping the data collection process accessible and user-friendly. All recordings use a 44.1 kHz sampling rate and are saved in 16-bit uncompressed PCM format, ensuring high audio quality without loss of signal fidelity. These recordings allow detailed analysis of speech features such as vocal tremor, frequency modulation, jitter, and shimmer, which are crucial in detecting ALS-related speech impairment. The average duration of recorded phonations varies slightly between the ALS and HC groups:
• HC group: 3.7 ± 1.5 seconds
• ALS group: 4.1 ± 2.0 seconds
This variation can provide insight into voice endurance, phonation time, and fatigue effects in ALS patients compared to healthy controls.
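As a companion to the machine-learning application mentioned above, the sketch below shows how ALS vs. HC phonations might be classified once acoustic features have been extracted. It is illustrative only: the feature set (jitter, shimmer, mean F0, duration), the random-forest model, and the placeholder random data are assumptions, not part of the dataset or any published baseline.

```python
# Minimal sketch, assuming acoustic features have already been extracted from
# each of the 148 phonations into a feature matrix. Placeholder random data
# stands in for real features; labels are 1 = ALS, 0 = healthy control.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(148, 4))       # e.g., jitter, shimmer, mean F0, duration
y = rng.integers(0, 2, size=148)    # placeholder ALS / HC labels

# Standardize features, then classify; stratified CV suits the small sample size.
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

Because each speaker contributes phonations of both /a/ and /i/, a real evaluation should split folds by speaker (for example with scikit-learn's GroupKFold) so that recordings from the same individual never appear in both training and test sets.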

Institutions

University of South Carolina

Categories

Stem Cell, Neurodegenerative Disorder, Neurochemistry, Voice Disorder, Researcher, Neurotransmitter and Neuroreceptors, Neuro-Regeneration, Neurosurgery, Biomedical Research

Licence