The purpose of this research study is to understand how speech and language are processed in the brain. This study will provide information that may help with understanding how speech and language are processed in children and whether there may be differences between children who stutter and children who do not stutter. This project will evaluate these neural processes for speech signals in children who stutter and control subjects through a battery of behavioral speech and language tests, electroencephalography (EEG) tasks, functional magnetic resonance imaging (fMRI), and computational modeling.
The study will evaluate the integrity of neural processes underlying speech sound encoding and the ways in which these processes are modulated by task demands, using neuroimaging and computational modeling. Age-appropriate standardized tests assessing speech, language, and cognitive skills will be administered by a certified speech-language pathologist or trained lab member. The investigators will also measure electroencephalography (EEG) via frequency following responses (FFRs) and temporal response functions (TRFs) while children complete speech-sound tasks of varying difficulty, including syllable listening and identification tasks and a continuous-speech narrative comprehension task. Both tasks will be presented in quiet and in background noise. EEG signals will be collected using Ag-AgCl scalp electrodes, and responses will be recorded at a sampling rate of 25 kHz using Brain Vision Recorder (Brain Products, Gilching, Germany). The investigators will also leverage functional MRI (fMRI) in a 3T scanner to assess multiple neural systems underlying speech sound processing in children who stutter. Employing speech-sound tasks similar to the EEG tasks, in the same participants, will allow the investigators to quantify neural activations and representations in auditory, speech motor (articulatory), and attention networks during simple and complex speech tasks. A series of MRI scans will be recorded to provide data on each participant's brain anatomy. These scans will be analyzed on their own and in combination with the functional scans. All participants will be screened for metal and other objects that are not appropriate for the MRI scanner room. Participants will be given earplugs and/or headphones to wear.
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
Behavioral-, electrophysiological-, and magnetic resonance imaging-based speech sound testing
University of Michigan
Ann Arbor, Michigan, United States
NOT_YET_RECRUITING
University of Pittsburgh
Pittsburgh, Pennsylvania, United States
RECRUITING
Speech Sound Identification
Behavioral responses will be measured for the syllable identification task in quiet and in the presence of background noise. Children will respond as quickly as possible to identify which speech sound they heard. Within- and between-group analyses will compare children who stutter and control subjects. Drift diffusion models (DDMs) will be used to aggregate the behavioral responses of accuracy and reaction time to evaluate bias toward more accurate or faster responses, as well as changes in response behavior over time in each group.
Time frame: 1 Session (up to 2 hours)
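As an illustration of the drift-diffusion framework described above, the following Python sketch simulates single trials of noisy evidence accumulation toward a decision boundary, jointly producing the accuracy and reaction-time data that DDMs summarize. The drift rate, boundary, and non-decision time are arbitrary illustrative values, not parameters estimated or used in this study.

```python
import numpy as np

def simulate_ddm(drift, boundary, noise_sd=1.0, dt=0.001, t0=0.3,
                 max_t=3.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence accumulates from 0 toward +boundary (correct response)
    or -boundary (error); t0 is non-decision time in seconds.
    All parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= boundary), t0 + t  # (correct?, reaction time in s)

rng = np.random.default_rng(0)
trials = [simulate_ddm(drift=1.5, boundary=1.0, rng=rng) for _ in range(500)]
accuracy = np.mean([correct for correct, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
```

In a DDM fit, the mapping runs in the other direction: observed accuracy and reaction-time distributions are used to estimate drift rate (evidence quality), boundary separation (speed-accuracy trade-off), and non-decision time per group.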
Frequency Following Responses (EEG)
Frequency following responses (FFRs) will be collected to quantify neural encoding of fast temporal cues in auditory stimuli, including speech sounds. FFRs (70-1500 Hz) will be elicited by syllables, both in quiet conditions and in the presence of a competing background story. FFR magnitude will be measured, and FFRs elicited by different syllables will be decoded using support vector machine classifiers. Within- and between-group analyses will compare children who stutter and control subjects.
Time frame: 1 Session (up to 30 minutes)
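The syllable-decoding analysis described above can be sketched as follows: trials of FFR waveforms serve as feature vectors, and a linear support vector machine is scored with cross-validation. The simulated signals (sinusoids at hypothetical response frequencies plus noise), trial counts, and epoch length are illustrative assumptions, not the study's stimuli or pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 2000  # hypothetical trials and 80 ms epochs

# Simulated FFRs at the protocol's 25 kHz rate: each syllable class is
# represented by a different periodicity buried in trial-to-trial noise.
t = np.arange(n_samples) / 25000.0
ffr_a = np.sin(2 * np.pi * 100 * t) + rng.standard_normal((n_trials, n_samples))
ffr_b = np.sin(2 * np.pi * 120 * t) + rng.standard_normal((n_trials, n_samples))

X = np.vstack([ffr_a, ffr_b])                 # trials x time samples
y = np.array([0] * n_trials + [1] * n_trials)  # syllable labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold decoding accuracy
```

Above-chance cross-validated accuracy indicates that the FFR waveforms carry syllable-discriminating information; comparing accuracies across groups and listening conditions is then straightforward.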
Temporal Response Functions (EEG)
Temporal response function (TRF) analysis directly relates a continuously varying stimulus, such as continuous speech, to EEG data. The relationship between the continuous speech and EEG signals will be estimated as a continuous waveform describing how a change in a continuous speech feature relates to changes in the EEG signal. The EEG data predicted by the TRF are compared to the real, observed EEG data via correlation, yielding a measure of fit (Pearson's r) for how well the stimulus explains the observed neural activity. Multivariate linear ridge regression with leave-one-out cross-validation, to prevent over-fitting, will be used to compare the predicted and observed EEG. Higher correlations between predicted and observed EEG reflect better cortical encoding of the speech envelope. Within- and between-group analyses will compare children who stutter and control subjects.
Time frame: 1 Session (up to 1 hour)
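The core TRF estimation step can be sketched in a few lines: a time-lagged design matrix is built from the stimulus feature (here a stand-in for the speech envelope), the TRF weights are found by ridge regression, and prediction accuracy is Pearson's r between predicted and observed EEG. The sampling rate, lag range, ridge parameter, and simulated kernel are illustrative assumptions; the study's actual pipeline (including leave-one-out cross-validation across trials) is not shown.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix from a 1-D stimulus feature
    (non-negative lags only, for simplicity)."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:n - lag] if lag > 0 else stim
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge-regression TRF: w = (X'X + alpha*I)^-1 X'y."""
    X = lagged_design(stim, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return w, X @ w  # TRF weights and predicted EEG

rng = np.random.default_rng(0)
fs = 64                      # hypothetical downsampled EEG rate (Hz)
n = fs * 60                  # one minute of data
envelope = np.abs(rng.standard_normal(n))      # stand-in speech envelope
true_trf = np.exp(-np.arange(10) / 3.0)        # assumed encoding kernel
eeg = np.convolve(envelope, true_trf)[:n] + 0.5 * rng.standard_normal(n)

w, predicted = fit_trf(envelope, eeg, lags=list(range(10)))
r = np.corrcoef(predicted, eeg)[0, 1]  # prediction accuracy (Pearson's r)
```

In practice the ridge parameter is tuned within the cross-validation loop and r is computed only on held-out data, so the fitness measure is not inflated by over-fitting.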
BASIC_SCIENCE
Masking
NONE
Enrollment
600
Blood-oxygen level dependent activation (functional magnetic resonance imaging)
Brain activation patterns indexed by blood-oxygen level dependent (BOLD) fMRI signals will be analyzed. BOLD responses will be estimated separately for each participant for each functional task. Study-level outcomes include main effects of group (children who stutter vs. controls), group-by-region interactions, and group-by-network (auditory, speech motor, and attention) interactions. Drift diffusion models (DDMs) will be used to aggregate the behavioral responses of accuracy and reaction time (e.g., during categorization of sounds such as /ba/ or /da/) to evaluate bias toward more accurate or faster responses, as well as changes in response behavior over time in each group.
Time frame: 1 Session (up to 2 hours)
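Per-participant BOLD responses of the kind described above are commonly estimated with a general linear model, in which the task design is convolved with a hemodynamic response function and regressed against each voxel's time series. The sketch below shows that standard approach on one simulated voxel; the TR, block timing, gamma-shaped HRF, and effect size are illustrative assumptions, not this study's acquisition or modeling parameters.

```python
import numpy as np

tr = 2.0        # hypothetical repetition time (s)
n_vols = 150    # hypothetical number of volumes

# Simple gamma-shaped HRF sampled at the TR, normalized to unit sum
t = np.arange(0, 30, tr)
hrf = (t / 1.2) ** 2 * np.exp(-t / 1.2)
hrf /= hrf.sum()

# Boxcar task regressor (20 s on / 20 s off syllable blocks), convolved
box = ((np.arange(n_vols) * tr) % 40 < 20).astype(float)
design = np.convolve(box, hrf)[:n_vols]

# One simulated voxel: true task effect of 2.0 plus noise
rng = np.random.default_rng(0)
bold = 2.0 * design + rng.standard_normal(n_vols)

# GLM with intercept: least-squares estimate of the task beta
X = np.column_stack([design, np.ones(n_vols)])
beta_hat, *_ = np.linalg.lstsq(X, bold, rcond=None)
task_beta = beta_hat[0]
```

The per-participant betas estimated this way are what enter the study-level group, group-by-region, and group-by-network analyses.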
Multi-voxel pattern analysis (functional magnetic resonance imaging)
Multi-voxel pattern analysis (MVPA) is a machine learning analysis technique that aims to quantify spatially distributed neural representations across ensembles of voxels. MVPA will be used to determine the neural activity patterns that contain predictive information about the syllables (e.g., /ba/, /da/) in the tasks in quiet and with background noise. Extracted BOLD parameter estimates for each syllable will be entered into the analysis. Participant-specific classification cross-validation accuracies (per pre-determined region of interest) will be contrasted between conditions to determine regions of interest in which representations are enhanced or degraded by increasing task demands. Regions with significant group-level classification accuracies in each task, as well as regions of interest showing task-dependent changes in classification accuracies, will be established by permutation testing for each region of interest for each participant.
Time frame: 1 Session (up to 2 hours)
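The MVPA procedure above, cross-validated classification of per-trial BOLD parameter estimates within a region of interest, with significance established by permutation testing, can be sketched as follows. The trial counts, voxel counts, and simulated pattern separation are illustrative assumptions, not values from this study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50  # hypothetical trials per syllable, voxels per ROI

# Simulated beta estimates for one ROI: the two syllables evoke
# slightly different multi-voxel patterns plus trial noise.
pattern = 0.5 * rng.standard_normal(n_voxels)
X = np.vstack([
    rng.standard_normal((n_trials, n_voxels)) + pattern,  # syllable A trials
    rng.standard_normal((n_trials, n_voxels)) - pattern,  # syllable B trials
])
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated accuracy, with a null distribution from label shuffling
score, perm_scores, p_value = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=5, n_permutations=200, random_state=0
)
```

The observed accuracy is compared against the label-permuted null distribution; accuracies exceeding the null indicate that the ROI's voxel pattern reliably distinguishes the syllables, and condition contrasts of these accuracies index task-demand effects.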
Psychophysiological Interactions
Psychophysiological interaction (PPI) analyses evaluate task-dependent interactions between brain regions. Each pre-determined region of interest will serve as a seed region. For each target region (all other regions of interest), a general linear model will be used to estimate the interaction of task-related hemodynamic effects and the effects that are linearly related to the time series of the seed region. Significant interactions reflect regions for which the effective connectivity with the seed region changes as a function of task condition (i.e., indicating regions that are preferentially coupled for a specific task). Study-level outcomes will assess main effects of group (children who stutter vs. controls), group-by-region interactions, and group-by-network (auditory, speech motor, attention) interactions.
Time frame: 1 Session (up to 2 hours)
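The essence of the PPI model above is a regression of the target region's time series on the task regressor, the seed time series, and their product; a reliable coefficient on the product term indicates task-dependent coupling. The sketch below illustrates this on simulated time series; the volume count, block structure, and effect sizes are illustrative assumptions, and deconvolution steps used in real fMRI PPI pipelines are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of volumes

task = (np.arange(n) % 40 < 20).astype(float)  # boxcar task condition
task_c = task - task.mean()                    # mean-centered task regressor
seed = rng.standard_normal(n)                  # seed-region time series
ppi = seed * task_c                            # psychophysiological interaction

# Simulated target region: coupled with the seed mainly during the task
target = 0.8 * ppi + 0.3 * seed + rng.standard_normal(n)

# GLM: target ~ task + seed + (seed x task) + intercept
X = np.column_stack([task_c, seed, ppi, np.ones(n)])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
ppi_beta = betas[2]  # interaction term: task-dependent connectivity
```

Because the main effects of task and seed are included in the model, the interaction coefficient isolates connectivity changes beyond any overall activation or baseline coupling, which is what the seed-by-target PPI maps summarize.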