I am interested in spoken language processing in the brain: specifically, how the brain integrates high-level information (for example, semantic expectations or lexical status) with low-level acoustics (such as voice-onset time or coarticulatory cues), and how these cortical and cognitive mechanisms change over the course of word learning and second language acquisition.


To answer these questions, I use a variety of methodologies, including eye-tracking, electroencephalography (EEG), and electrocorticography (ECoG). I also use machine-learning algorithms to decode speech information directly from neurophysiological data.

My hope is that by better understanding speech perception in the "typical" brain, we can better inform practices in the classroom (for language learners) and the clinic (for those with language difficulties or hearing loss).

Last update: 2021-03-03