Technical Program

Paper Detail

Paper: PS-2A.44
Session: Poster Session 2A
Location: Symphony/Overture
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Deep neural networks trained with heavier data augmentation learn features closer to representations in hIT
Authors: Alex Hernández-García, University of Osnabrück, Germany; Johannes Mehrer, University of Cambridge, United Kingdom; Nikolaus Kriegeskorte, University of Cambridge / Columbia University, United States; Peter König, University of Osnabrück, Germany; Tim C. Kietzmann, University of Cambridge, United Kingdom
Abstract: Modern artificial neural networks have been shown to learn representations comparable to those in the human visual cortex. However, the degree of representational similarity differs greatly across network architectures, training data sets and other factors. Understanding what makes a deep neural network learn representations closer to those in the human brain, and subsequently developing models that reduce this gap, helps computational neuroscientists investigate the underlying mechanisms that shape neural representations. Furthermore, understanding information processing in the brain paves the way for better artificial intelligence algorithms, as human vision is known to be highly robust. In this work, we investigate the relationship between the augmentation of training data and the representational similarity of convolutional neural networks to high-level visual representations in human inferior temporal cortex (hIT). Our results suggest that networks trained with heavier augmentation learn representations that are more similar to those in the brain.
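The representational comparison described in the abstract is typically carried out with representational similarity analysis (RSA): compute a representational dissimilarity matrix (RDM) over stimuli for each system, then correlate the two RDMs. The sketch below is a hypothetical minimal illustration of that pipeline, not the authors' code; the function names and the simulated data are assumptions for demonstration only.

```python
# Minimal RSA sketch (hypothetical, illustrative only): compare a
# network's layer activations to brain responses via RDM correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condensed RDM: correlation distance (1 - Pearson r) between the
    activation patterns of every pair of stimuli. Rows are stimuli."""
    return pdist(activations, metric="correlation")

def rsa_similarity(net_acts, brain_acts):
    """Spearman rank correlation between the two RDMs, the standard
    RSA measure of representational similarity."""
    rho, _ = spearmanr(rdm(net_acts), rdm(brain_acts))
    return rho

# Simulated data (assumption: 20 stimuli, 128 network units, 64 voxels).
rng = np.random.default_rng(0)
net = rng.standard_normal((20, 128))
# "Brain" responses that partially share the network's representational
# geometry, plus measurement noise.
brain = net[:, :64] + 0.5 * rng.standard_normal((20, 64))

print(rsa_similarity(net, brain))
```

In the study's setting, one would compute `rsa_similarity` between each augmentation condition's network and the hIT data, and compare the resulting scores across conditions.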