Paper: PS-2A.24
Session: Poster Session 2A
Location: Symphony/Overture
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: The role of textural statistics vs. outer contours in deep CNN and neural responses to objects
Authors: Bria Long, Stanford University, United States; Talia Konkle, Harvard University, United States
Abstract: Deep convolutional neural networks (CNNs) are providing new insight into the high-dimensional feature space that supports object representations in the ventral stream. Here, we examined which specific visual features underlie a deep CNN's ability to predict occipitotemporal cortex responses to images of animals and objects of different sizes. To do so, we measured activations from a widely used convolutional neural network (Krizhevsky et al., 2012) to four variants of the same image set: (i) original images, (ii) silhouetted images, (iii) phase-scrambled images, and (iv) texform images (which preserve a combination of texture and coarse form; Long, Yu, & Konkle, 2017). We found that the predictive power of CNN features in the ventral stream was better accounted for by textural rather than outer contour properties. These results point to textural statistics as an important dimension in characterizing the representational layout of object representations in object-selective cortex, and underscore the importance of controlled image sets for examining when and why deep CNN features hold predictive power.