Technical Program

Paper Detail

Paper: PS-1B.54
Session: Poster Session 1B
Location: H Fläche 1.OG
Session Time: Saturday, September 14, 16:30 - 19:30
Presentation Time: Saturday, September 14, 16:30 - 19:30
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: Geometry of Shared Representations
License: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1418-0
Authors: Gregory Henselman-Petrusek, Simon Segert, Princeton University, United States; Bryn Keller, Mariano Tepper, Intel Labs, United States; Jon Cohen, Princeton University, United States
Abstract: Advances in the use of neural networks in both cognitive neuroscience and machine learning have generated new challenges: while they have proven powerful at learning complex tasks, what they learn and how they come to perform those tasks often remains a mystery. Here, we examine a novel approach to these challenges, inspired by recent spatial and algebraic analyses of abstraction and generalization in network architectures. We evaluate it, and compare it to other measures, by using it to test theoretical predictions regarding the influence that training has on the development of shared vs. separated representations, and their impact on network performance. We find that the proposed measure outperforms all others in identifying a theoretically predicted, low-dimensional set of linear spatial relationships that, in turn, best predict network performance.
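To make the abstract's notion of shared vs. separated representations concrete, the following is a minimal illustrative sketch of one generic geometric measure: the principal angles between the linear subspaces spanned by two tasks' hidden representations. This is not the paper's proposed measure; the function name and synthetic data here are hypothetical, used only to show the general idea of quantifying subspace overlap.

```python
# Illustrative sketch (hypothetical, not the paper's measure): quantify how
# "shared" two tasks' representations are via principal angles between the
# linear subspaces their representation matrices span.
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Synthetic example: two tasks whose representations share a common
# 3-dimensional component, plus small task-specific noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(100, 3))
reps_task1 = shared + 0.1 * rng.normal(size=(100, 3))
reps_task2 = shared + 0.1 * rng.normal(size=(100, 3))

angles_deg = np.degrees(principal_angles(reps_task1, reps_task2))
# Small angles indicate largely shared (overlapping) subspaces;
# angles near 90 degrees would indicate separated representations.
print(angles_deg)
```

Small principal angles correspond to shared structure, while angles near 90 degrees indicate separated, task-specific representations.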