Technical Program

Paper Detail

Paper: PS-2B.4
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: The Notorious Difficulty of Comparing Human and Machine Perception
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1295-0
Authors: Judy Borowski, Christina M. Funke, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge, University of Tuebingen, Germany
Abstract: With machines reaching human-level performance in complex recognition tasks, a growing body of work compares information processing in humans and machines. Such studies have the potential to deepen our understanding of the inner mechanisms of human perception and to improve machine learning. Drawing robust conclusions from these comparisons, however, turns out to be difficult. Here, we present three case studies that highlight common shortcomings which can easily lead to fragile conclusions: sub-optimal training procedures or architectures that lead to premature claims about gaps between human and machine performance; unequal testing procedures that lead to different decision behaviours; and human-centred interpretation of results. Addressing these shortcomings alters the conclusions of previous studies. We show that neural networks can, in fact, solve same-different tasks; that they do experience a "recognition gap" on image sequences ranging from minimally recognisable to maximally unrecognisable; and that, despite their ability to solve closed-contour tasks, they use different strategies than humans. To counter these three pitfalls, we provide guidelines on how to compare humans and machines in visual inference tasks.
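The same-different task the abstract refers to is easiest to picture with a concrete stimulus generator. The sketch below is a hypothetical, heavily simplified analogue of SVRT-style same-different stimuli, not the authors' actual pipeline: each image contains two random binary patches that are either identical ("same", label 1) or drawn independently ("different", label 0).

```python
import random

def make_shape(rng, size=8):
    # Random binary patch standing in for an abstract "shape"
    return [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]

def make_sample(rng, same, canvas=32, size=8):
    """One same/different stimulus: two patches on a blank canvas.

    If `same`, the right patch is a copy of the left; otherwise it is
    drawn independently. Returns (image, label) with label 1 = "same".
    """
    img = [[0] * canvas for _ in range(canvas)]
    left = make_shape(rng, size)
    right = [row[:] for row in left] if same else make_shape(rng, size)
    # Random vertical offsets; left/right halves so the patches never overlap
    y1 = rng.randrange(canvas - size)
    y2 = rng.randrange(canvas - size)
    for r in range(size):
        img[y1 + r][0:size] = left[r]
        img[y2 + r][canvas - size:canvas] = right[r]
    return img, int(same)

rng = random.Random(0)
dataset = [make_sample(rng, same=bool(i % 2)) for i in range(6)]
```

A classifier trained on such pairs must decide whether the two patches match regardless of their position, which is the abstract relational judgement earlier work claimed networks could not learn; the paper's point is that conclusions of this kind hinge on the training procedure and architecture, not only on the task.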