I am interested in spoken language processing in the brain, specifically in how the brain integrates high-level information (for example, sentence contexts or lexical status) with low-level acoustics (such as voice onset time or coarticulatory information), and how these mechanisms change throughout word learning and second language acquisition.

To answer these questions, I use a variety of methods, including eye-tracking, electroencephalography (EEG), and electrocorticography (ECoG). I also use machine-learning algorithms to decode speech information directly from neurophysiological data.
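
For readers curious what neural speech decoding can look like in practice, here is a minimal, purely illustrative sketch: classifying a binary phonetic contrast (e.g., voiced vs. voiceless onsets) from simulated EEG epochs with scikit-learn. The data shapes, labels, and effect are hypothetical placeholders, not my actual pipeline.

```python
# Illustrative sketch: decode a binary speech feature from synthetic "EEG" epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_epochs, n_channels, n_times = 200, 64, 100   # hypothetical recording shape
labels = rng.integers(0, 2, size=n_epochs)     # 0 = voiceless, 1 = voiced

# Synthetic epochs: noise plus a small label-dependent shift on a few channels
epochs = rng.standard_normal((n_epochs, n_channels, n_times))
epochs[labels == 1, :8, 40:60] += 0.5

# Flatten each epoch into a feature vector and decode with cross-validation
X = epochs.reshape(n_epochs, -1)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```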

My hope is that by better understanding speech processing in the "typical" brain, we can inform practices in the classroom (for language learners) and in the clinic (for those with language difficulties or hearing loss).

Last update: 14 April 2021