Five questions answered by Yoshua Bengio

1 How can we find out how the brain works?

If there is a compact description of the computational principles that explain how the brain manages to provide us with our intelligence, that is what I would consider the core explanation of how the brain works – a little like the laws of physics for our physical world. Note that this is very different from the structured observation of our world in all its encyclopedic detail, which provides a useful map of our world but not a principled explanation – just replace ‘world’ with ‘brain’. My thesis is that those principles would also allow us to build intelligent machines, and that at the heart of our intelligence is our ability to learn and make sense of the world by observing it and interacting with it. That is why I believe in the importance of a continuous dialogue between brain researchers and AI researchers, especially those in machine learning – particularly deep learning and neural networks. This is likely to benefit AI research as well, as it has in the past.

2 What will your talk at CCN 2017 be about?

I will start by discussing the recent progress in deep learning research, focusing on the inspiration from neuroscience and cognition. I will also talk about novel work, which aims at bridging the gap between back-propagation, the workhorse of deep learning, and neuroscience, as well as about unsupervised reinforcement learning approaches on the cognitive side, which may help to formalize in a machine learning framework how a learner could discover the notions of attributes and objects (as independently controllable aspects of the environment).

3 How can cognitive science, computational neuroscience, and artificial intelligence best work together?

By learning about each other’s developments and by collaborating – as usual!

4 What current developments are you most excited about?

It is becoming clear to me that it is very plausible that the brain developed a learning strategy analogous to back-propagation in order to estimate gradients of a training objective, thus addressing the very central question of credit assignment, which is also at the heart of the success of deep learning, in which many areas (layers in a deep neural network) are jointly trained in a coordinated way towards a common objective. Many questions remain open in the quest to bridge this gap between backprop and neuroscience, and we are approaching the time when the questions will be sufficiently precise to be investigated by experimentalists.
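The credit-assignment idea above – one shared objective whose gradient is propagated back so that every layer knows how to adjust – can be illustrated with a deliberately tiny sketch. This is my own toy example, not a model from Bengio's work: a two-"area" scalar network where the chain rule assigns credit to both weights from a single loss.

```python
import math

# Toy illustration (not Bengio's model): two scalar "areas" trained
# jointly on one common objective via back-propagation.

def forward(w1, w2, x):
    h = math.tanh(w1 * x)   # first area's activity
    y = w2 * h              # second area's activity
    return h, y

def backprop_step(w1, w2, x, target, lr=0.1):
    h, y = forward(w1, w2, x)
    loss = (y - target) ** 2        # the shared training objective
    dy = 2 * (y - target)           # dLoss/dy
    dw2 = dy * h                    # credit assigned to the second area
    dh = dy * w2                    # error signal sent back to the first area
    dw1 = dh * (1 - h ** 2) * x     # credit assigned to the first area
    return w1 - lr * dw1, w2 - lr * dw2, loss

w1, w2 = 0.5, -0.3
losses = []
for _ in range(200):
    w1, w2, loss = backprop_step(w1, w2, x=1.0, target=0.8)
    losses.append(loss)
# Because both weights receive gradients of the same objective, they are
# adjusted in a coordinated way and the loss shrinks toward zero.
```

The point of the sketch is only the flow of the error signal: `dh` is the quantity that a biologically plausible mechanism would somehow have to approximate for the "upstream" area.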
I am also very excited about recent developments in machine learning that connect back to older questions raised by classical AI and cognitive science regarding higher-level cognitive notions such as objects, agents, reasoning, memory and knowledge representation, bringing together the expertise in deep learning and in reinforcement learning. There is here again an opportunity for fruitful multi-disciplinary investigations, which could lead us towards a better understanding of cognition – going beyond the current success of deep learning for perception tasks.

5 What do you hope to learn at CCN 2017?

I am hoping to learn about new developments and views in cognitive computational neuroscience, since I do not follow that literature, and as I wrote above, I believe that the potential for positive interactions is quite high.
