Emotion recognition from posed and spontaneous dynamic expressions: Human observers versus machine analysis


The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Here, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between human observers and a machine classifier in a cross-corpora investigation. To this end, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of different databases and presented to human observers and a machine classifier. Recognition performance by the machine was superior for posed expressions containing prototypical facial patterns, and comparable to that of humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed than for spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification and may perform just as well as humans when tested in a cross-corpora context.

Shushi Namba
Associate Professor

My research interests include dynamic facial expression, computational modeling, and programmable matter.