(1) How can we find out how the brain works?
I am primarily interested in understanding human intelligence, and from that point of view, I am worried about the massive move towards rodent models in neuroscience. While the abundance of available techniques makes this move attractive, there is not much evidence that rodents plan strategically, reflect on their own decisions, infer the mental states of others, weigh the value of information against costs, etc. One way forward would be to demonstrate these abilities in rodents. An alternative would be to become less obsessed with techniques.
At a more sociological level, I am old-fashioned and strongly believe in small, hypothesis-driven science. While some problems in neuroscience might be best addressed using big data, big simulations, or big collaborations, my sense is that those currently involve more hype than substance. Neuroscience and cognitive science have come far with a “let a hundred flowers bloom” approach, and there is no evidence that this approach is bankrupt. More specifically, in computational neuroscience, small science often amounts to a search for evolutionarily meaningful organizing principles, perhaps initially in a toy model – this is my favorite approach.
(2) What will your talk at CCN 2017 be about?
It will be a tutorial on mathematical modeling of behavior. I will try to categorize such models, dig a bit deeper into Bayesian models, and spend a good amount of time on best practices for model fitting and model comparison. I am very excited that CCN offers tutorials and takes them seriously; perhaps in future editions we can add more time for exercises?
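To give a flavor of the model-comparison part, here is a minimal sketch; the data and both models are made up for illustration and are not taken from the tutorial itself. Two hypothetical models of binary choice data (a logistic choice model and a stimulus-independent guessing model) are fit by maximum likelihood and compared with AIC and BIC.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical data: stimulus intensities and binary responses from one subject.
stimulus = rng.uniform(-2, 2, size=500)
responses = (rng.random(500) < 1 / (1 + np.exp(-3 * stimulus))).astype(int)

def nll_logistic(params, x, r):
    """Negative log-likelihood of a logistic (sigmoid) choice model."""
    bias, slope = params
    p = 1 / (1 + np.exp(-(bias + slope * x)))
    p = np.clip(p, 1e-9, 1 - 1e-9)            # avoid log(0)
    return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

def nll_guessing(params, x, r):
    """Negative log-likelihood of a stimulus-independent guessing model."""
    (p_yes,) = params
    p = np.clip(p_yes, 1e-9, 1 - 1e-9)
    return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

models = {
    "logistic": (nll_logistic, np.array([0.0, 1.0])),
    "guessing": (nll_guessing, np.array([0.5])),
}

n = len(responses)
for name, (nll, x0) in models.items():
    fit = minimize(nll, x0, args=(stimulus, responses), method="Nelder-Mead")
    k = len(x0)                                # number of free parameters
    aic = 2 * k + 2 * fit.fun                  # Akaike information criterion
    bic = k * np.log(n) + 2 * fit.fun          # Bayesian information criterion
    print(f"{name}: NLL={fit.fun:.1f}, AIC={aic:.1f}, BIC={bic:.1f}")
```

The practice being illustrated is that both models are fit to the same data with the same likelihood machinery, so the comparison penalizes extra parameters rather than rewarding them.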
(3) How can cognitive science, computational neuroscience, and artificial intelligence best work together?
One missing piece in the field is the psychological plausibility of neural network models of cortex. There is a lot of attention to biological plausibility and to linking with neural data. However, attempts to link to behavioral data typically remain extremely crude, of the type “humans can do the task and the network can do the task”, or barely better, “the network approaches human accuracy on this task”. We should use more tools from basic psychophysics to interrogate and challenge neural networks: subject a trained network to a psychophysical experiment in which stimuli are parametrically varied, and compare its psychometric curves quantitatively to human data. Bonus: fit the same behavioral-level models to the network output and compare the resulting goodness-of-fit ranking to that obtained from human data. Emphasizing the psychological plausibility of networks could help unify the fields represented at CCN.
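To make that workflow concrete, here is a minimal sketch; everything in it is hypothetical, including the stimulus levels and the two simulated observers that stand in for a human subject and a trained network. The point is only the shape of the analysis: parametrically vary a stimulus, fit the same psychometric function to network and human choices, and compare the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(1)
levels = np.linspace(-1, 1, 9)                 # parametrically varied stimulus strength
n_trials = 200                                 # trials per level

def psychometric(x, mu, sigma, lapse):
    """Cumulative-Gaussian psychometric function with a lapse rate."""
    return lapse + (1 - 2 * lapse) * norm.cdf(x, loc=mu, scale=sigma)

def simulate_observer(mu, sigma, lapse):
    """Proportion of 'rightward' responses per level for a simulated observer.
    Stands in for either human data or a trained network's decisions."""
    p_true = psychometric(levels, mu, sigma, lapse)
    return rng.binomial(n_trials, p_true) / n_trials

# Hypothetical data: a human observer and a network with a shallower psychometric curve.
p_human = simulate_observer(mu=0.0, sigma=0.3, lapse=0.02)
p_network = simulate_observer(mu=0.1, sigma=0.6, lapse=0.01)

for label, p_obs in [("human", p_human), ("network", p_network)]:
    # Fit the same psychometric model to both datasets (least squares for brevity;
    # a binomial likelihood fit would be the more principled choice).
    params, _ = curve_fit(psychometric, levels, p_obs,
                          p0=[0.0, 0.5, 0.02],
                          bounds=([-1, 0.01, 0], [1, 5, 0.5]))
    mu, sigma, lapse = params
    print(f"{label}: bias={mu:.2f}, threshold={sigma:.2f}, lapse={lapse:.3f}")
```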
(4) What current developments are you most excited about?
Connections between behavioral economics, reinforcement learning, and inference: e.g. information gathering as an economic problem, keeping track of uncertainty in RL, and thinking ahead in sequential decision tasks with large decision trees.
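As one toy illustration of keeping track of uncertainty in RL (my own example, not a specific model from this literature): a Thompson-sampling agent on a Bernoulli bandit maintains a Beta posterior over each arm's reward probability, so that exploration, i.e. information gathering, falls directly out of its uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)
true_reward_probs = np.array([0.3, 0.5, 0.7])   # hypothetical three-armed bandit
n_arms = len(true_reward_probs)

# Beta(alpha, beta) posterior over each arm's reward probability.
alpha = np.ones(n_arms)
beta = np.ones(n_arms)

for t in range(2000):
    # Thompson sampling: draw one sample from each posterior and pick the best.
    sampled = rng.beta(alpha, beta)
    arm = int(np.argmax(sampled))
    reward = rng.random() < true_reward_probs[arm]
    # Bayesian update of the chosen arm's posterior.
    alpha[arm] += reward
    beta[arm] += 1 - reward

posterior_mean = alpha / (alpha + beta)
print("posterior means:", np.round(posterior_mean, 2))
print("pulls per arm:", (alpha + beta - 2).astype(int))
```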
(5) What do you hope to learn at CCN 2017?
I am hoping to find a new conference home, after years of mild frustration that Cosyne has too little interest in behavior and cognition, and that VSS has too little computational modeling and machine learning.