

Final Defense: Lixiang Xu
Friday, June 17, 2022, 09:00am

Lixiang Xu, UT-Austin

"Understanding speech processing & distraction in brains using computational modeling and fMRI"

Abstract: With the rapid development of brain imaging technology and computational methods, research on human brain function has attracted growing attention. Previous work has characterized acoustic and semantic representations of speech in the human brain. We focused on understanding distraction during natural speech using computational modeling and fMRI, and then explored a method for improving computational model performance on limited datasets.

To investigate distraction during natural speech, we designed online behavioral experiments that simulated a natural scenario in which people attended to a speaker while being distracted by external sounds. We collected behavioral data from hundreds of participants and used statistical analyses and linear regression models to show that speech distractors were more distracting than non-speech distractors, and that distractors from anterior or posterior locations were more distracting than those from the left or right. To study how these distractors and the information in the attended speech were represented in the brain, we then collected fMRI data from four participants under a similar scenario and fit linear regression models that predicted brain responses from acoustic and linguistic features of the speech and the distractors. Variance partitioning showed that speech sounds were best encoded in primary auditory cortex, the lateral superior temporal gyrus, and inferior frontal speech areas, while distractor sounds were mostly encoded in posterior auditory cortex; the spatial locations of distractors were encoded contralaterally across the left and right hemispheres. Logistic regression models were then used to decode distractor onset time, duration, and spatial location, all of which could be decoded with high accuracy.
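
To make the variance-partitioning step concrete, here is a minimal sketch (not the authors' pipeline) that fits ridge-regression encoding models on two feature spaces, separately and jointly, and splits the cross-validated R² into unique and shared components. The feature matrices, the synthetic voxel response, and the regularization strength are all illustrative placeholders.

```python
# Minimal variance-partitioning sketch with synthetic data (illustrative
# placeholders, not the actual stimulus features or fMRI responses).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trs, n_speech, n_distractor = 500, 20, 10

X_speech = rng.standard_normal((n_trs, n_speech))          # attended-speech features
X_distractor = rng.standard_normal((n_trs, n_distractor))  # distractor features
# Synthetic voxel response, driven here mostly by the speech features.
y = X_speech @ rng.standard_normal(n_speech) + 0.5 * rng.standard_normal(n_trs)

def cv_r2(X, y):
    """Cross-validated R^2 of a ridge encoding model (alpha is a placeholder)."""
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

r2_speech = cv_r2(X_speech, y)
r2_distractor = cv_r2(X_distractor, y)
r2_joint = cv_r2(np.hstack([X_speech, X_distractor]), y)

# Partition: variance explained only by one feature space, and the overlap.
unique_speech = r2_joint - r2_distractor
unique_distractor = r2_joint - r2_speech
shared = r2_speech + r2_distractor - r2_joint
print(f"unique speech: {unique_speech:.3f}, "
      f"unique distractor: {unique_distractor:.3f}, shared: {shared:.3f}")
```

Computing the same partition per voxel is what allows encoding differences between speech-related and distractor-related areas, like those reported above, to be mapped across cortex.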

Finally, we developed a method to improve computational model performance with limited datasets, called sparse experimental design, in which each subject was exposed to a different subset of the experimental conditions rather than the full set. These sparse data were then used to fit a shared response model, which was in turn used to interpolate each subject's responses to the missing conditions. The concatenation of the recorded and interpolated data was then used to train computational models. Through simulation, we found that sparse experiments outperformed dense experiments when the features were sparsely distributed across conditions, and that the optimal way to assign experimental conditions to subjects was a random bipartite graph with minimum communicability and a small standard deviation of node degrees.
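
The assignment criterion can be sketched as follows (hypothetical sizes and an assumed equal weighting of the two criteria; the thesis's actual selection procedure may differ): subjects and conditions form the two sides of a bipartite graph, an edge means a subject sees that condition, and random candidate graphs are scored by total communicability plus the spread of node degrees.

```python
# Sketch of choosing a subject-to-condition assignment as a random
# bipartite graph with low communicability and low degree spread.
# Sizes, candidate count, and the equal weighting are assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_subjects, n_conditions, conds_per_subject = 8, 40, 10

def random_assignment():
    """Random bipartite graph: each subject gets a random subset of conditions."""
    G = nx.Graph()
    subjects = [f"s{i}" for i in range(n_subjects)]
    conditions = [f"c{j}" for j in range(n_conditions)]
    G.add_nodes_from(subjects)
    G.add_nodes_from(conditions)
    for s in subjects:
        for c in rng.choice(conditions, size=conds_per_subject, replace=False):
            G.add_edge(s, c)
    return G

def score(G):
    """Total communicability plus the standard deviation of node degrees."""
    comm = nx.communicability(G)  # walk-based similarity between all node pairs
    total_comm = sum(sum(row.values()) for row in comm.values())
    degree_std = np.std([d for _, d in G.degree()])
    return total_comm + degree_std  # equal weighting is an assumption

# Keep the lowest-scoring graph among a batch of random candidates.
best = min((random_assignment() for _ in range(200)), key=score)
print("edges in chosen assignment:", best.number_of_edges())
```

Intuitively, low communicability penalizes assignments where the same condition subsets pile up on a few subjects, and a small degree spread keeps coverage of conditions roughly even, both of which help the shared response model interpolate the unseen conditions.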

Overall, we investigated distraction in speech processing and developed a tool for better understanding speech processing in the human brain.

Location: PMA 5.114 and Zoom