LabPhon 2018

Congratulations to Sandie and Steven, whose abstracts, “Sources of enhanced sentence recognition memory for native and non-native listeners” and “The perception and production of /sC/ clusters by Spanish-English sequential bilinguals”, have been accepted for presentation at LabPhon 2018! Lisbon, here we come!

http://labphon16.labphon.org/index.html


Quantifying phonetic variation

Come join us for another exciting talk by Jennifer Cole on Thursday, February 15th, at 3 pm in JES A217A.

Quantifying phonetic variation

Speech is known to be highly variable across speakers and situations, and listeners pay attention to some of this phonetic detail for the rich contextual information it carries. In this talk I ask how much variability is present in speech, and whether some components of speech are more or less susceptible to variation. I present an approach to quantifying phonetic variation, developed in collaboration with Stefanie Shattuck-Hufnagel (MIT), that addresses the question from the dual perspectives of perception and production. We analyze serial imitations of a heard utterance, where the linguistic object to be produced is fixed syntactically, lexically and prosodically, and employ a novel method for quantifying phonetic variation using acoustic landmarks (Stevens 2002) as correlates of phonologically contrastive manner features. Imitated utterances produced by ten native speakers of American English yielded 3500+ consonant and vowel landmarks (LMs), which were labelled and compared both to the lexically specified LMs and to the LMs produced in the stimulus. Our findings demonstrate and quantify systematicity in phonetic variation as measured in terms of LMs. They also reveal that speakers exercise choice in phonetic implementation, deviating both from lexical targets and from the phonetic detail of the heard stimulus. These results hold promise for the use of imitated speech in the study of phonetic variation, and for the use of LMs (and by extension other feature cues) as a phonologically grounded measure of variation in speech production.
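As a rough illustration of the kind of comparison the abstract describes (not the authors' actual procedure), the sketch below scores a produced sequence of landmark labels against a lexically specified sequence and against the heard stimulus using a simple alignment-based match rate. The landmark labels and sequences here are invented purely for illustration.

```python
# Toy illustration (not the authors' method): score how closely a produced
# sequence of acoustic landmark labels matches a target sequence, using a
# standard longest-matching-blocks alignment over the label strings.
from difflib import SequenceMatcher

def landmark_match_rate(produced, target):
    """Proportion of target landmarks recovered in the produced sequence."""
    matcher = SequenceMatcher(a=target, b=produced)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(target) if target else 0.0

# Hypothetical landmark label sequences (labels are placeholders, not Stevens'
# actual inventory): consonant closure/release landmarks and vowel landmarks.
lexical  = ["+s", "-s", "V", "+g", "-g", "V", "Fric"]
produced = ["+s", "V", "+g", "-g", "V", "Fric"]   # one landmark deleted
stimulus = ["+s", "-s", "V", "+g", "V", "Fric"]   # the heard rendition

print("vs. lexical targets:", round(landmark_match_rate(produced, lexical), 2))
print("vs. heard stimulus: ", round(landmark_match_rate(produced, stimulus), 2))
```

Comparing the same produced sequence against both reference sequences mirrors the dual comparison in the abstract: deviation from lexical targets versus deviation from the phonetic detail of the stimulus.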

Colloquium – The Linguistics and Philosophy of Language Acquisition

Join us for a talk by Geoffrey K. Pullum (School of Philosophy, Psychology and Language Sciences, University of Edinburgh)

The Linguistics and Philosophy of Language Acquisition

Mon, February 12, 2018 | CLA 1.302B

3:00 PM – 5:00 PM

For five decades, discussions of the view known as linguistic nativism have woven together topics in generative linguistics and rationalist philosophy. Syntactic research is claimed to reveal a “universal grammar” that would be unlearnable from linguistic experience and hence provides a compelling case for the existence of “innate ideas.” It is quite unusual for empirical work in the “special sciences” to produce results that reorient traditional philosophical discussions. It therefore behooves syntacticians to ensure that their results are both copious and solid. I review some of the load-bearing syntactic results cited in the debate, and argue that they will not support the necessary weight. Foremost among these is an argument developed 40 years ago in this department, here at the University of Texas.

Colloquium: Using voxel-wise encoding models to map how linguistic information is represented in human cortex

Come join us on Monday, November 20, for a talk by Alex Huth (Computer Science, UT Austin)

Using voxel-wise encoding models to map how linguistic information is represented in human cortex

Mon, November 20, 2017 | CLA 1.302B

3:00 PM – 5:00 PM

Voxel-wise encoding models predict brain responses to stimuli in a two-stage process. First, stimuli are transformed into some feature space. Second, the feature space representation is used to predict responses separately in each voxel. Encoding model performance is then assessed using held-out data. In this talk I will discuss how encoding models using different feature spaces can be compared to determine which linguistic feature space best matches representations of natural speech in the cortex.
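For readers unfamiliar with the approach, here is a minimal sketch of the two-stage pipeline described above, using ridge regression as the per-voxel linear model and held-out prediction correlation to compare feature spaces. The feature spaces, data shapes, and function names are illustrative assumptions, not the speaker's actual code.

```python
# Minimal sketch of a voxel-wise encoding model pipeline (illustrative only):
# 1) stimuli are represented in some feature space (here, random stand-ins),
# 2) a linear model maps features to each voxel's response,
# 3) feature spaces are compared by held-out prediction performance.
import numpy as np
from numpy.linalg import solve

def fit_encoding_model(X_train, Y_train, alpha=1.0):
    """Ridge regression fit for all voxels at once (one weight column per voxel).
    X_train: (n_timepoints, n_features) stimulus features
    Y_train: (n_timepoints, n_voxels) BOLD responses
    """
    n_feat = X_train.shape[1]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'Y
    return solve(X_train.T @ X_train + alpha * np.eye(n_feat), X_train.T @ Y_train)

def voxelwise_correlation(X_test, Y_test, W):
    """Per-voxel Pearson correlation between predicted and held-out responses."""
    Y_pred = X_test @ W
    Yp = (Y_pred - Y_pred.mean(0)) / (Y_pred.std(0) + 1e-8)
    Yt = (Y_test - Y_test.mean(0)) / (Y_test.std(0) + 1e-8)
    return (Yp * Yt).mean(0)  # shape: (n_voxels,)

# Compare two hypothetical feature spaces against the same (fake) responses.
rng = np.random.default_rng(0)
Y = rng.standard_normal((300, 50))                 # 300 timepoints, 50 voxels
spaces = {"spectral": rng.standard_normal((300, 20)),
          "semantic": rng.standard_normal((300, 40))}
for name, X in spaces.items():
    W = fit_encoding_model(X[:200], Y[:200])       # fit on training timepoints
    r = voxelwise_correlation(X[200:], Y[200:], W) # evaluate on held-out data
    print(name, "median held-out r:", np.round(np.median(r), 3))
```

The feature space whose model predicts held-out responses best (voxel by voxel, or summarized over voxels) is taken as the better match to cortical representations, which is the comparison logic the talk describes.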

_________________________________

Alex is a new assistant professor in the Computer Science and Neuroscience departments at UT. His work focuses on using machine learning and data mining methods to investigate how the brain responds to natural stimuli, such as narrative speech. Before coming to UT, he did his postdoctoral and graduate work at the neuroscience institute at UC Berkeley, where, under Professor Jack Gallant, he used fMRI to study how the semantic content of language and visual scenes is represented in human cortex. Prior to that, Alex received his BS and MS from Caltech.