Language is a fundamental
human capacity. Aspects of language
are crucial to our social interactions, communication, and quality of life, and are linked to other higher-level cognitive processes.
Understanding spoken language poses a formidable computational challenge for the brain's auditory and language systems. While most listeners carry out this process with ease, the numerous ways in which language processing breaks down underscore its complexity.
My work examines multiple issues in speech perception and language learning. To address these issues, I use a variety of methodologies, including eye-tracking and both scalp and intracranial electroencephalography (EEG and iEEG).
I also apply machine learning techniques to decode speech information directly from neurophysiological data.
My hope is that by better characterizing
the neural and cognitive mechanisms
that subserve spoken language
processing, we can inform
treatments in both the classroom and the clinic.