Investigating the Evaluative Dimensions of a Large Set of Communicative Facial Expressions: A Comparison of Lab-Based and Crowd-Sourced Data Collection


Facial expressions form one of the most important non-verbal communication channels. Although humans are capable of producing a wide range of facial expressions, research in psychology has focused almost exclusively on the so-called basic emotional expressions (anger, disgust, fear, happiness, sadness, and surprise). Research into the full range of communicative expressions, however, may be prohibitive due to the large number of stimuli required for testing. Here, we conducted both a lab-based and an online, crowd-sourced study in which participants rated videos of communicative facial expressions along 13 evaluative dimensions (arousal, attractiveness, audience, distinctiveness, dominance, dynamics, empathy, familiarity, friendliness, intelligence, masculinity, naturalness, outgoingness, persuasiveness, politeness, predictability, sincerity, and valence). Twenty-seven different facial expressions displayed by six actors were selected as stimuli from the KU Facial Expression Database (Shin et al., 2012). For the lab-based experiment, 20 participants rated all 162 (randomized) video stimuli. The crowd-sourced experiment was run on Amazon Mechanical Turk with 423 participants, selected so as to gather a total of 20 ratings per stimulus. Within-group reliability was high for both groups (r_Lab = 0.753, r_MTurk = 0.677, averaged across 13 dimensions), with valence, arousal, politeness, and dynamics being highly reliable measures (r > 0.8), whereas masculinity, predictability, and naturalness were comparatively less reliable (0.3
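The within-group reliability figures quoted in the abstract can be illustrated with a small computation. The abstract does not state the exact reliability measure used, so the sketch below assumes one common choice: the mean pairwise Pearson correlation between raters' rating vectors for a single dimension. All names and the simulated data are hypothetical.

```python
import numpy as np

def within_group_reliability(ratings: np.ndarray) -> float:
    """Mean pairwise Pearson correlation across raters.

    ratings: array of shape (n_raters, n_stimuli) holding one group's
    ratings of every stimulus on a single evaluative dimension.
    (Assumed measure -- the paper's exact method is not given here.)
    """
    n_raters = ratings.shape[0]
    corrs = []
    for i in range(n_raters):
        for j in range(i + 1, n_raters):
            corrs.append(np.corrcoef(ratings[i], ratings[j])[0, 1])
    return float(np.mean(corrs))

# Hypothetical data mirroring the study's design: 20 raters, 162 stimuli.
# Each rater sees a shared "true" signal plus individual noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=162)
ratings = signal + 0.5 * rng.normal(size=(20, 162))
print(round(within_group_reliability(ratings), 3))
```

With noise of half the signal's standard deviation, the expected pairwise correlation is roughly 1 / (1 + 0.25) = 0.8, i.e. in the range the abstract reports for the most reliable dimensions.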

Author Information
Dilara Derya, Korea University, South Korea
Ahyoung Shin, Korea University, South Korea
Haenah Lee, Korea University, South Korea
Christian Wallraven, Korea University, South Korea

Paper Information
Conference: ACP2015
Stream: Qualitative/Quantitative Research in any other area of Psychology

This paper is part of the ACP2015 Conference Proceedings.
