I am interested in how the brain integrates high-level information (for example, semantic expectation or lexical status) with low-level acoustic cues (such as voice-onset time or coarticulatory information) during online speech perception, and how this process changes over the course of word learning (in both L1 and L2).
To answer these questions, I use a variety of methodologies, including eye-tracking, electroencephalography (EEG), and electrocorticography (ECoG). I also use machine-learning algorithms to decode speech information directly from neurophysiological data.
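As a rough illustration of what such decoding involves, the sketch below trains a nearest-class-mean classifier to recover a binary speech feature from simulated neural responses. The data, feature dimensions, and classifier are illustrative assumptions only, not a description of any actual analysis pipeline.

```python
import numpy as np

# Hypothetical sketch: decoding a binary speech feature (e.g. voiced vs.
# voiceless) from simulated neural recordings. All quantities here are
# made up for illustration.

rng = np.random.default_rng(0)
n_trials, n_features = 200, 32          # trials x (electrode, time) features

# Simulate two response classes with slightly shifted means plus noise.
labels = rng.integers(0, 2, n_trials)
signal = np.where(labels[:, None] == 1, 0.8, -0.8)
X = signal + rng.normal(size=(n_trials, n_features))

# Split into train/test and fit a nearest-class-mean decoder:
# classify each test trial by the closer of the two training-set means.
train, test = slice(0, 150), slice(150, None)
means = np.stack([X[train][labels[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Real analyses of EEG or ECoG data involve far more preprocessing and typically stronger models, but the core idea is the same: learn a mapping from neural features to speech categories and evaluate it on held-out trials.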

My hope is that a better understanding of speech perception in the typically developing, normal-hearing brain can inform clinical diagnostics and treatments for people with language difficulties and hearing loss.

Last update • 2020-10-19