Colloquium: Using voxel-wise encoding models to map how linguistic information is represented in human cortex

Come join us on Monday, November 20, for a talk by Alex Huth (Computer Science, UT Austin)


Mon, November 20, 2017 | CLA 1.302B

3:00 PM – 5:00 PM

Voxel-wise encoding models predict brain responses to stimuli in a two-stage process. First, stimuli are transformed into some feature space. Second, the feature space representation is used to predict responses separately in each voxel. Encoding model performance is then assessed using held-out data. In this talk I will discuss how encoding models using different feature spaces can be compared to determine which linguistic feature space best matches representations of natural speech in the cortex.
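The two-stage procedure described above can be sketched in a few lines of code. This is a minimal illustration, not Huth's actual pipeline: random features stand in for a hypothetical linguistic feature space, voxel responses are simulated, and a single shared ridge penalty is used for all voxels.

```python
# Minimal sketch of a voxel-wise encoding model (illustrative, not the
# speaker's actual pipeline). Stage 1: stimuli -> feature space. Stage 2:
# ridge regression predicts each voxel's response; performance is assessed
# as the prediction/response correlation on held-out data.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features, n_voxels = 200, 50, 10, 5

# Stage 1: stimulus -> feature space (random features stand in for a
# hypothetical linguistic feature space such as word embeddings).
X_train = rng.standard_normal((n_train, n_features))
X_test = rng.standard_normal((n_test, n_features))

# Simulated voxel responses: each voxel is a noisy linear function of features.
W_true = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_voxels))

# Stage 2: fit ridge regression weights for every voxel at once (the closed
# form solves each voxel's regression independently, sharing one alpha here).
alpha = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_features),
                        X_train.T @ Y_train)

# Assess on held-out data: per-voxel correlation of predicted vs. actual.
Y_pred = X_test @ W_hat
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(np.round(r, 2))
```

Comparing feature spaces then amounts to repeating Stage 1 with different transforms and asking which one yields the highest held-out correlations in each voxel.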


Alex is a new assistant professor in the Computer Science & Neuroscience departments at UT. His work focuses on using machine learning and data mining methods to investigate how the brain responds to natural stimuli, such as narrative speech. Before coming to UT, he did his graduate and postdoctoral work at the neuroscience institute at UC Berkeley under Professor Jack Gallant, using fMRI to study how the semantic content of language and visual scenes is represented in human cortex. Prior to that, Alex received his BS and MS from Caltech.


173rd Meeting of the Acoustical Society of America and the 8th Forum Acusticum

UTsoundLab had three presentations at the ASA meeting in beautiful Boston: "Improving speech recognition in noise through speaking style modifications for native and non-native listeners," "Acoustic cues and linguistic experience as factors in regional dialect classification," and "Effects of intelligibility on within- and cross-modal sentence recognition memory." Congratulations Kirsten, Steven, and Sandie! And we even ran into a lab alumna, Lauren Franklin, who is working toward her PhD at Brown University!