Paper Detail

Paper: PS-2A.7
Session: Poster Session 2A
Location: Symphony/Overture
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Inverse POMDP: Inferring Internal Model and Latent Beliefs
DOI: https://doi.org/10.32470/CCN.2018.1213-0
Authors: Zhengwei Wu, Baylor College of Medicine, United States; Paul Schrater, University of Minnesota, United States; Xaq Pitkow, Rice University, United States
Abstract: Complex behaviors are often driven by an internal model, which may reflect memories, beliefs, motivation, or arousal. Inferring this internal model is a crucial ingredient for understanding how the brain generates behavior and for interpreting the neural activity of agents. Here we describe a method to infer an agent's internal model and dynamic beliefs, and apply it to a simulated agent performing a foraging task. Assuming the animal behaves rationally, we model its behavior as a Partially Observable Markov Decision Process (POMDP). Given the agent's sensory observations and actions, we learn its internal model by maximum-likelihood estimation over a set of task-relevant parameters. The Markov property of the POMDP lets us characterize the transition probabilities between internal states and iteratively estimate the agent's policy with a constrained Expectation-Maximization algorithm. We validate the method on simulated agents performing suboptimally on a foraging task, and successfully recover each agent's true model.
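For reference, the "dynamic beliefs" in the abstract are standard POMDP belief states. In our notation (not drawn from the manuscript), the Bayes-filter update that makes the belief dynamics Markovian is:

% Belief update (Bayes filter); T is the state-transition model,
% O the observation model, b_t the belief over latent states at time t.
b_{t+1}(s') \propto O(o_{t+1} \mid s', a_t) \sum_{s} T(s' \mid s, a_t)\, b_t(s)

Because b_{t+1} depends only on b_t, a_t, and o_{t+1}, the belief is itself a Markov state, which is what permits the transition-probability characterization and iterative policy estimation described in the abstract.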
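For intuition about the maximum-likelihood step, here is a minimal, self-contained Python sketch; it is our illustration, not the authors' code. A simulated agent follows a softmax policy over its Bayes-filtered beliefs in a two-state foraging-style world, and we recover its rationality parameter beta by maximizing the likelihood of its observed actions. A grid search stands in for the paper's constrained EM, and all names and parameter values here are assumptions made for the example.

import numpy as np

# Illustrative sketch only, not the authors' method: maximum-likelihood
# recovery of a policy parameter from an agent's observations and actions.
rng = np.random.default_rng(0)

# Two latent states: 0 = "box empty", 1 = "box has food".
T = np.array([[0.7, 0.3],    # P(s' | s): the box refills and empties stochastically
              [0.3, 0.7]])
O = np.array([[0.8, 0.2],    # P(o | s): a noisy sensory cue about the box
              [0.2, 0.8]])

def belief_update(b, o):
    """Bayes filter: propagate the belief through T, then condition on observation o."""
    b_pred = b @ T                 # predicted belief over next states
    b_post = O[:, o] * b_pred      # weight by the observation likelihood
    return b_post / b_post.sum()

def action_prob(b, beta, cost=0.5):
    """Logistic (softmax) policy: open the box when the believed food
    probability exceeds the action cost; beta sets how deterministic it is."""
    p_open = 1.0 / (1.0 + np.exp(-beta * (b[1] - cost)))
    return np.array([1.0 - p_open, p_open])

def simulate(beta_true, steps=1000):
    """Generate the (observation, action) record of an agent with known beta."""
    s, b = 0, np.array([1.0, 0.0])
    obs, acts = [], []
    for _ in range(steps):
        s = rng.choice(2, p=T[s])
        o = rng.choice(2, p=O[s])
        b = belief_update(b, o)
        a = rng.choice(2, p=action_prob(b, beta_true))
        obs.append(o)
        acts.append(a)
    return obs, acts

def log_likelihood(beta, obs, acts):
    """Log-likelihood of the recorded actions under a candidate beta, replaying
    the belief trajectory (a deterministic function of the observations)."""
    b, ll = np.array([1.0, 0.0]), 0.0
    for o, a in zip(obs, acts):
        b = belief_update(b, o)
        ll += np.log(action_prob(b, beta)[a])
    return ll

obs, acts = simulate(beta_true=4.0)
betas = np.linspace(0.5, 10.0, 40)
beta_hat = betas[np.argmax([log_likelihood(bb, obs, acts) for bb in betas])]
print(f"true beta = 4.0, estimated beta = {beta_hat:.2f}")

The same likelihood machinery extends, in principle, to jointly estimating the transition and observation models the agent assumes, i.e. its internal model, which is where an EM-style iteration such as the one in the abstract becomes necessary.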