This Interspeech 2026 Challenge is a shared task aimed at advancing the automatic assessment of Modern Standard Arabic (MSA) pronunciation by using computational methods to detect and diagnose pronunciation errors. The focus on MSA provides a standardized, well-defined context for evaluating Arabic pronunciation.
Participants will develop systems capable of detecting mispronunciations (e.g., substitution, deletion, or insertion of phonemes).
Design a model to detect and provide detailed feedback on mispronunciations in MSA speech. Users read vowelized sentences; the model predicts the spoken phoneme sequence and flags deviations. Evaluation is on the MSA-Test dataset with human-annotated errors.
Figure: Overview of the Mispronunciation Detection Workflow
The system shows a Reference Sentence along with its Reference Phoneme Sequence.
Example:
< y a t a H a d d a v u n n aa s u l l u g h a t a l E a r a b i y y a t a
The user speaks; the system captures and stores the audio waveform.
The model predicts the phoneme sequence; deviations from the reference indicate mispronunciations.
Example of Mispronunciation:
Reference: < y a t a H a d d a v u n n aa s u l l u g h a t a l E a r a b i y y a t a
Spoken:    < y a t a H a d d a s u n n aa s u l l u g h a t u l E a r a b i y y a t a
Here, v→s (substitution) represents a common pronunciation error; note that the spoken sequence also substitutes the vowel u for a (l l u g h a t a → l l u g h a t u).
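Deviations like the one above can be surfaced by aligning the reference and predicted phoneme sequences. A minimal sketch (not the official scoring code) using Python's standard-library SequenceMatcher on space-separated phoneme strings:

```python
from difflib import SequenceMatcher

def flag_errors(reference: str, predicted: str):
    """Align two space-separated phoneme sequences and report deviations."""
    ref = reference.split()
    hyp = predicted.split()
    errors = []
    for op, i1, i2, j1, j2 in SequenceMatcher(a=ref, b=hyp, autojunk=False).get_opcodes():
        if op == "replace":
            errors.append(("substitution", ref[i1:i2], hyp[j1:j2]))
        elif op == "delete":
            errors.append(("deletion", ref[i1:i2], []))
        elif op == "insert":
            errors.append(("insertion", [], hyp[j1:j2]))
    return errors

# Shortened fragments of the sequences above:
print(flag_errors("y a t a H a d d a v u", "y a t a H a d d a s u"))
# [('substitution', ['v'], ['s'])]
```

Real submissions would replace this greedy matcher with the alignment used by the official evaluation, but the idea of classifying edits into substitutions, deletions, and insertions is the same.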
The phoneme set used in this work is based on a specialized phonetizer developed for vowelized MSA. It includes a comprehensive range of phonemes designed to capture key phonetic and prosodic features of standard Arabic speech, such as stress, pausing, intonation, emphaticness, and notably, gemination. Gemination—the doubling of consonant sounds—is explicitly represented by duplicating the consonant symbol (e.g., /b/ becomes /bb/).
This phoneme set provides a detailed yet practical representation of the speech sounds relevant for accurate mispronunciation detection in MSA.
For further details, including the full phoneme inventory, see Phoneme Inventory.
Hosted on Hugging Face:
Columns:
- audio: waveform
- sentence: original text
- index: sentence ID
- tashkeel_sentence: fully diacritized text
- phoneme: phoneme sequence (produced by the phonetizer)

An auxiliary high-quality TTS corpus for augmentation will be released on 15 December 2025.
98 sentences × 18 speakers ≈ 2 h, with deliberate errors and human annotations.
from datasets import load_dataset
ds = load_dataset("Interspeech26/MSA_Test_v2")
Submit a UTF-8 CSV named teamID_submission.csv with two columns:
ID,Labels
0000_0001,y a t a H a d d a ...
0000_0002,m a a n a n s a ...
...
Note: no extra spaces, single CSV, no archives.
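The submission file can be produced with Python's standard csv module; a minimal sketch, where the IDs and phoneme strings are placeholders for your system's actual predictions:

```python
import csv

# Placeholder predictions: utterance ID -> space-separated phoneme string
predictions = {
    "0000_0001": "y a t a H a d d a v u",
    "0000_0002": "m a a",
}

with open("teamID_submission.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Labels"])
    for utt_id, phonemes in predictions.items():
        # No extra spaces around fields, per the submission rules
        writer.writerow([utt_id, phonemes])
```

Using csv.writer (rather than manual string formatting) guarantees a well-formed UTF-8 CSV with no stray whitespace around the comma.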
The leaderboard is based on phoneme-level F1-score, using the hierarchical evaluation (detection + diagnosis) described in the MDD Overview. Each phoneme is scored as a true acceptance (TA), false rejection (FR), false acceptance (FA), or true rejection (TR), with true rejections further split into correct diagnoses (CD) and diagnostic errors (DE).
From these counts we compute:
Rates: false rejection rate (FRR), false acceptance rate (FAR), and diagnostic error rate (DER).
Plus standard Precision, Recall, and F1 for detection.
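Under the standard MDD hierarchy (correct phonemes are counted as true acceptances or false rejections; mispronounced phonemes as false acceptances or true rejections, with rejections split into correct diagnoses and diagnostic errors), the rates and detection scores can be sketched as follows. This is an illustration of the conventional MDD formulas, not the official scoring script; the example counts are made up:

```python
def mdd_metrics(ta, fr, fa, cd, de):
    """Compute conventional MDD rates and detection scores.

    ta: true acceptances  (correct phoneme, accepted)
    fr: false rejections  (correct phoneme, flagged as error)
    fa: false acceptances (mispronounced phoneme, missed)
    cd: correct diagnoses (error detected, error type correct)
    de: diagnostic errors (error detected, error type wrong)
    """
    tr = cd + de                 # true rejections
    frr = fr / (ta + fr)         # false rejection rate
    far = fa / (fa + tr)         # false acceptance rate
    der = de / (cd + de)         # diagnostic error rate
    precision = tr / (tr + fr)   # detection precision
    recall = tr / (tr + fa)      # detection recall
    f1 = 2 * precision * recall / (precision + recall)
    return {"FRR": frr, "FAR": far, "DER": der,
            "Precision": precision, "Recall": recall, "F1": f1}

# Hypothetical counts for illustration only
print(mdd_metrics(ta=900, fr=30, fa=20, cd=70, de=10))
```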
Teams and individual participants must register to gain access to the test set. Please complete the registration form using the link below:
Registration opens on December 1, 2025.
Further details on the open-set leaderboard submission will be posted on the shared task website (December 15, 2025). Stay tuned!
For inquiries and support, reach out to the task coordinators.