Technical Program

Paper Detail

Paper: PS-1A.2
Session: Poster Session 1A
Location: Symphony/Overture
Session Time: Thursday, September 6, 16:30 - 18:30
Presentation Time: Thursday, September 6, 16:30 - 18:30
Presentation: Poster
Paper Title: From prediction to modification - modeling first impression of faces
Authors: Amanda Song, Chad Atalla, Li Linjie, Bartholomew Tam, Garrison Cottrell, University of California, San Diego, United States
Abstract: Humans make complex inferences about faces, ranging from objective properties (gender, ethnicity, expression, age, identity, etc.) to subjective judgments (facial attractiveness, trustworthiness, sociability, friendliness, etc.). While the objective aspects of face perception have been extensively studied, fewer computational models have been developed for the social impressions of faces. Bridging this gap, we develop a method to predict human impressions of faces along 40 social dimensions, using deep representations from state-of-the-art neural networks. We find that model performance improves as human consensus on a face trait increases. This illustrates the learnability of subjective social perception of faces, especially when human consensus is high. To verify the generalization ability, we apply the model to a large dataset, CelebA, and empirically verify the quality of the model's predictions. To further probe what makes a face salient in certain traits, we develop ModifAE: a novel standalone autoencoding neural network that can learn to make continuous modifications on multiple traits. We train ModifAE to modify continuous first-impression face traits using our predicted dataset, and empirically show that this modification network produces convincing modifications, demonstrating the accuracy of the predictive model. Both the prediction and modification networks have wide real-world applications.
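The abstract's first component, predicting trait scores from deep face representations, can be sketched as a regression from CNN features to 40 per-trait ratings. The sketch below is illustrative only: it stubs out the feature extractor with random vectors and uses closed-form ridge regression; the dimensions, variable names, and regression choice are assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Hedged sketch: map deep face features to 40 social-trait scores.
# A pretrained CNN would normally supply `features`; here it is
# stubbed with random data so the sketch is self-contained.
rng = np.random.default_rng(0)

N_FACES, FEAT_DIM, N_TRAITS = 200, 512, 40  # assumed sizes

features = rng.normal(size=(N_FACES, FEAT_DIM))  # stand-in CNN features
ratings = rng.normal(size=(N_FACES, N_TRAITS))   # stand-in human ratings

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = fit_ridge(features, ratings)
pred = features @ W  # predicted trait scores for each face
print(pred.shape)    # (200, 40): one score per face per trait
```

The paper's observation that performance grows with human consensus would show up here as lower regression error on traits whose rating columns have less rater disagreement.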