Technical Program

Paper Detail

Paper: PS-2B.54
Session: Poster Session 2B
Location: H Fläche 1.OG
Session Time: Sunday, September 15, 17:15 - 20:15
Presentation Time: Sunday, September 15, 17:15 - 20:15
Presentation: Poster
Publication: 2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
Paper Title: The relational processing limits of classic and contemporary neural network models of language processing
License: Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI: https://doi.org/10.32470/CCN.2019.1022-0
Authors: Guillermo Puebla, University of Edinburgh, United Kingdom; Andrea Martin, Max Planck Institute for Psycholinguistics, Netherlands; Leonidas Doumas, University of Edinburgh, United Kingdom
Abstract: The ability of neural networks to perform relational reasoning is a matter of long-standing controversy. Recently, some researchers have argued that (1) classic PDP models can learn relational structure and (2) the successes of deep learning suggest that structured representations are unnecessary to explain human language. In this study we tested a classic PDP model and a contemporary deep learning model for text processing. Both models were trained to answer questions about stories based on the thematic roles that several concepts played in the stories. In three critical tests we varied the statistical structure of new stories while keeping their relational structure intact with respect to the training data. Both models performed poorly on our tests. These results cast doubt on the suitability of traditional neural networks for explaining phenomena based on relational reasoning.