SEDTalker

Emotion-Aware 3D Facial Animation Using Frame-Level Speech Emotion Diarization

¹University of Alberta, Multimedia Research Center (MRC), Canada   ²University of Florence, Media Integration and Communication Center (MICC), Italy
International Conference on Pattern Recognition (ICPR 2026)

Speech Emotion Diarization

File: SED/wav/test.wav

Duration: 21.43 seconds

[Diarization timeline: 18 segments labeled A (Anger) or H (Happy), each shaded by intensity level: High, Medium, or Low]

Anger (A): 14.38s (67.1%), 11 segments, avg intensity 70.8%
Happy (H): 6.98s (32.6%), 7 segments, avg intensity 71.8%

High intensity: 12.1s (56.5%), 8 segments
Medium intensity: 3.92s (18.3%), 4 segments
Low intensity: 5.34s (24.9%), 6 segments
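The aggregate statistics above (per-emotion duration share, segment count, and average intensity) can be reproduced from a list of diarized segments. The sketch below uses a hypothetical `Segment` type and a `summarize` helper of our own naming, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    emotion: str      # e.g. "Anger" or "Happy"
    start: float      # start time in seconds
    end: float        # end time in seconds
    intensity: float  # segment-averaged intensity in [0, 1]

def summarize(segments, total_duration):
    """Aggregate duration share, segment count, and mean intensity per emotion."""
    stats = {}
    for s in segments:
        e = stats.setdefault(s.emotion,
                             {"duration": 0.0, "count": 0, "intensity_sum": 0.0})
        e["duration"] += s.end - s.start
        e["count"] += 1
        e["intensity_sum"] += s.intensity
    return {
        emo: {
            "duration_s": round(v["duration"], 2),
            "share_pct": round(100 * v["duration"] / total_duration, 1),
            "segments": v["count"],
            "avg_intensity_pct": round(100 * v["intensity_sum"] / v["count"], 1),
        }
        for emo, v in stats.items()
    }
```

Feeding the 18 segments from the timeline above into such a routine would yield the Anger/Happy rows shown; the intensity-level rows follow from the same pattern keyed on intensity band instead of emotion.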

Abstract

We introduce SEDTalker, an emotion-aware framework for speech-driven 3D facial animation that leverages frame-level speech emotion diarization to achieve fine-grained expressive control. Unlike prior approaches that rely on utterance-level or manually specified emotion labels, our method predicts temporally dense emotion categories and intensities directly from speech, enabling continuous modulation of facial expressions over time. The diarized emotion signals are encoded as learned embeddings and used to condition a speech-driven 3D animation model based on a hybrid Transformer-Mamba architecture. This design allows effective disentanglement of linguistic content and emotional style while preserving identity and temporal coherence. We evaluate our approach on a large-scale multi-corpus dataset for speech emotion diarization and on the EmoVOCA dataset for emotional 3D facial animation. Quantitative results demonstrate strong frame-level emotion recognition performance and low geometric and temporal reconstruction errors, while qualitative results show smooth emotion transitions and consistent expression control. These findings highlight the effectiveness of frame-level emotion diarization for expressive and controllable 3D talking head generation.
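As a rough illustration of the per-frame conditioning described in the abstract, the sketch below adds a learned emotion embedding plus an intensity-scaled vector to each frame of audio features before they enter the animation backbone. All names, shapes, and the additive fusion choice are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def condition_audio(audio_feats, emotion_ids, intensity,
                    emotion_table, intensity_vec):
    """Additive per-frame conditioning of audio features.

    audio_feats:   (T, D) frame-level audio features
    emotion_ids:   (T,)   integer emotion label per frame
    intensity:     (T,)   emotion intensity per frame in [0, 1]
    emotion_table: (num_emotions, D) embedding table (learned in the real model)
    intensity_vec: (D,)   projection vector scaled by intensity (learned)
    """
    cond = emotion_table[emotion_ids] + intensity[:, None] * intensity_vec[None, :]
    return audio_feats + cond

rng = np.random.default_rng(0)
T, D, num_emotions = 50, 256, 6
out = condition_audio(rng.normal(size=(T, D)),
                      rng.integers(0, num_emotions, size=T),
                      rng.random(T),
                      rng.normal(size=(num_emotions, D)),
                      rng.normal(size=(D,)))
```

Additive fusion is only one plausible design; concatenation or cross-attention over the emotion stream would fit the same frame-level interface.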

Emotion-Specific Examples

BibTeX

@inproceedings{sedtalker2026jafari,
  title     = {SEDTalker: Emotion-Aware 3D Facial Animation Using Frame-Level Speech Emotion Diarization},
  author    = {Farzaneh Jafari and Stefano Berretti and Anup Basu},
  booktitle = {International Conference on Pattern Recognition (ICPR)},
  year      = {2026}
}