CheoFaMo: A dataset of segmented image sequences for in-depth mood analysis in Vietnamese traditional Chèo

Published: 10 April 2026 | Version 2 | DOI: 10.17632/2h6hxwkbwn.2
Contributors:
Minh Kien Nguyen, Vu Thanh Long, Nguyen Tien Phuc, Hoang Quoc Viet, Le Thanh Ha, Dinh Quang Trung

Description

The CheoFaMo dataset is a segmented image-sequence resource designed for computational mood recognition in Vietnamese traditional Chèo, comprising 5,844 labeled segments extracted from seven plays. The segments capture challenging conditions such as theatrical makeup, dynamic lighting, and stage motion, making the dataset a suitable benchmark for deep learning models in facial expression and mood analysis. Its integration with the CheoGoogle Ontology further enables structured applications in emotion recognition, psychological study, and educational tools for digitizing the art of Chèo.

Steps to reproduce

The dataset was constructed through a three-phase pipeline: raw video recordings were processed into onset–apex–offset (O-A-O) facial expression sequences; facial tracking isolated these frames while preserving character continuity; and expert annotators labeled each segment with one of six mood categories (Neutral was excluded).
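The O-A-O segmentation step can be sketched as follows. This is a toy illustration only: it assumes a per-frame expression-intensity curve is available and picks the apex as the peak frame, then walks outward to where intensity drops below a baseline threshold. The intensity scoring and the threshold value are assumptions, not the authors' actual method.

```python
def oao_segment(intensity, threshold=0.2):
    """Return (onset, apex, offset) frame indices for one expression event.

    intensity: per-frame expression-intensity values in [0, 1] (assumed input).
    The apex is the peak frame; onset/offset are the outermost frames whose
    intensity still meets the baseline threshold.
    """
    apex = max(range(len(intensity)), key=lambda i: intensity[i])
    onset = apex
    while onset > 0 and intensity[onset - 1] >= threshold:
        onset -= 1
    offset = apex
    while offset < len(intensity) - 1 and intensity[offset + 1] >= threshold:
        offset += 1
    return onset, apex, offset

curve = [0.0, 0.1, 0.3, 0.6, 0.9, 0.7, 0.4, 0.15, 0.05]
print(oao_segment(curve))  # (2, 4, 6)
```

In practice the isolated O-A-O frame spans would then be cropped via the facial-tracking step and handed to annotators, as described above.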

Institutions

  • University of Engineering and Technology, Vietnam National University, Hanoi

Categories

Computer Vision, Expression Analysis

Licence