This year's installment of the Algonauts Project Challenge — How the Human Brain Makes Sense of a World in Motion — is now closed. In this session we introduce, discuss, and evaluate the outcome of the challenge. We first give a short introduction to the context and gist of the challenge and summarize the outcome. Second, we provide a hands-on tutorial based on the development kit, showing the nuts and bolts of the challenge. Third, the top-performing teams present their solutions in short talks. We close with an open discussion and the announcement of the Algonauts Project Challenge 2022.
[All questions were answered live]
Q-1: Was the choice of 100 dimensions arbitrary here?
Q-2: Why did you choose ResNet18?
Q-3: Can you elaborate a bit more on the preprocessing?
Q-4: Question for Robert: Was there any brain area that cared a bit more about the sampling rate of the input videos?
Q-5: Thank you for the great talk! Simple question: when using pretrained models from public datasets, have you also tried embeddings from the Moments in Time dataset, which seems similar to the dataset provided by Algonauts?
Q-6: Interesting that multi-subject training is best. Can you explain the intuition for this?