Using voxel-wise encoding models to map how linguistic information is represented in human cortex
Mon, November 20, 2017 | CLA 1.302B
3:00 PM – 5:00 PM
Voxel-wise encoding models predict brain responses to stimuli in a two-stage process. First, stimuli are transformed into some feature space. Second, the feature space representation is used to predict responses separately in each voxel. Encoding model performance is then assessed using held-out data. In this talk I will discuss how encoding models using different feature spaces can be compared to determine which linguistic feature space best matches representations of natural speech in the cortex.
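The two-stage procedure described above can be sketched with synthetic data. This is a minimal illustration, not the speaker's actual pipeline: the feature spaces, dimensions, and ridge penalty below are all made up for the example. Stage 1 (stimulus-to-feature transformation) is assumed already done; the sketch shows stage 2 (per-voxel regression) and the held-out comparison of two hypothetical feature spaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (assumed shapes, not real fMRI): n_tr timepoints
# of responses across n_vox voxels, plus stimulus representations in two
# hypothetical linguistic feature spaces -- one that drives the responses
# and one that is unrelated.
n_tr, n_vox, n_feat = 300, 50, 10
good_feats = rng.standard_normal((n_tr, n_feat))   # matches the "true" space
weights = rng.standard_normal((n_feat, n_vox))
responses = good_feats @ weights + rng.standard_normal((n_tr, n_vox))
bad_feats = rng.standard_normal((n_tr, n_feat))    # unrelated space

def fit_and_score(features, responses, n_train=200, alpha=10.0):
    """Ridge-regress each voxel's response onto the feature space, then
    score prediction accuracy (correlation) on held-out timepoints."""
    X_tr, X_te = features[:n_train], features[n_train:]
    Y_tr, Y_te = responses[:n_train], responses[n_train:]
    # Ridge solution for all voxels at once: B = (X'X + aI)^-1 X'Y
    B = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X_tr.shape[1]),
                        X_tr.T @ Y_tr)
    pred = X_te @ B
    # Per-voxel correlation between predicted and actual held-out responses
    pz = (pred - pred.mean(0)) / pred.std(0)
    yz = (Y_te - Y_te.mean(0)) / Y_te.std(0)
    return (pz * yz).mean(0)

# Feature spaces are compared by held-out prediction performance per voxel:
# the space that better matches cortical representations predicts better.
good_r = fit_and_score(good_feats, responses)
bad_r = fit_and_score(bad_feats, responses)
print(good_r.mean(), bad_r.mean())
```

In practice the regularization strength would be chosen per voxel by cross-validation within the training data, but a fixed penalty keeps the sketch short.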
Alex is a new assistant professor in the Computer Science and Neuroscience departments at UT. His work focuses on using machine learning and data mining methods to investigate how the brain responds to natural stimuli, such as narrative speech. Before coming to UT, he did his graduate and postdoctoral work at the neuroscience institute at UC Berkeley under Professor Jack Gallant, where he used fMRI to study how the semantic content of language and visual scenes is represented in human cortex. Prior to that, Alex received his BS and MS from Caltech.