Technical Program

Paper Detail

Paper: PS-1A.34
Session: Poster Session 1A
Location: Symphony/Overture
Session Time: Thursday, September 6, 16:30 - 18:30
Presentation Time: Thursday, September 6, 16:30 - 18:30
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Deep Predictive Learning in Vision
Authors: Randall O'Reilly, John Rohrlich, University of Colorado Boulder, United States
Abstract: How does the neocortex learn and develop the foundations of our high-level cognitive abilities? We present a comprehensive framework spanning biological, computational, and cognitive levels, providing a coherent answer supported by data. Learning is based on making predictions about what the senses will report at 100 msec (alpha frequency) intervals, and adapting synaptic weights to improve prediction. The pulvinar nucleus of the thalamus serves as a projection screen upon which predictions are generated, through deep layer 6 corticothalamic inputs from multiple brain areas. The bottom-up, sparse, driving inputs from layer 5 intrinsic bursting neurons provide the target signal, and the temporal difference between it and the prediction reverberates throughout cortex, driving synaptic changes that approximate error backpropagation, using only local activation signals in equations derived from a detailed biophysical model. We test this framework of unsupervised predictive learning with a model of the visual system that incorporates two central principles: top-down input from compact, high-level, abstract representations is required for accurate prediction of low-level sensory inputs; and the collective, low-level prediction error is progressively partitioned to enable extraction of separable factors that drive learning of high-level abstractions. Our model self-organizes invariant representations of 100 objects from simple movies and accounts for a wide range of data.
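The core learning scheme the abstract describes, generating a prediction of the next sensory input and adapting weights from the local difference between the prediction and the actual outcome, can be sketched in miniature. This is a minimal, hypothetical illustration only: a single linear predictor trained on a repeating "movie" of frames with a local delta rule, not the authors' biophysically derived model (which spans many cortical layers and approximates backpropagation). All sizes, rates, and names below are invented for the example.

```python
import random

random.seed(1)

N = 6      # hypothetical frame size (not from the paper)
LR = 0.2   # hypothetical learning rate

# A repeating "movie": a short cyclic sequence of one-hot frames.
frames = [[1.0 if (t + i) % N == 0 else 0.0 for i in range(N)]
          for t in range(N)]

# Linear predictive weights mapping frame(t) -> predicted frame(t+1),
# a stand-in for the top-down corticothalamic projection in the paper.
W = [[0.0] * N for _ in range(N)]

def predict(x):
    """Generate the prediction of the next frame from the current one."""
    return [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]

def step(x_t, x_next):
    """One alpha-cycle analogue: predict, compare with the actual next
    input, then apply a local delta-rule weight update."""
    pred = predict(x_t)
    err = [t - p for t, p in zip(x_next, pred)]  # outcome minus prediction
    for i in range(N):
        for j in range(N):
            W[i][j] += LR * err[i] * x_t[j]      # purely local update
    return sum(e * e for e in err)               # squared prediction error

def epoch():
    """Run one pass over the cyclic movie, returning total error."""
    return sum(step(frames[t], frames[(t + 1) % N]) for t in range(N))

first = epoch()
for _ in range(50):
    last = epoch()
print(first, last)  # prediction error shrinks as the weights adapt
```

The point of the sketch is only the loop structure: prediction, outcome, and a weight change driven by their difference using locally available signals, which is the same shape as the 100 msec predict-and-correct cycle the abstract describes.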