The goal of this research study is to learn how the brain areas that plan and control movement interact with the areas responsible for hearing and perceiving speech, in healthy adults and in people who have had cerebellar strokes. The main questions it aims to answer are:

1. Which regions of the brain's sensory systems show speech-related changes in activity?
2. To what extent do these regions help listeners detect and correct speech errors?
3. What is the role of the cerebellum (a part of the brain at the back of the head) in these processes?

Participants will be asked to complete several experimental sessions involving behavioral speech and listening tests and non-invasive brain imaging using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI).
This study aims to provide an integrated view of the brain systems underlying predictive coding in speech, in unprecedented detail, using ultra-high-field (7 Tesla) functional magnetic resonance imaging. The overall approach is a condition-intensive, within-subjects design with extensive sampling of individual participants, including a group who have had strokes affecting the cerebellum, across multiple sessions. Participants will be asked to complete up to 6 sessions. Passing a hearing assessment using standard audiological procedures, conducted at the start of the first session, is a requirement for participation.

The experimental sessions involve behavioral testing and non-invasive brain imaging. Investigators will ask participants to perform several short tasks that measure different aspects of speech production and speech perception (e.g., reading passages or words aloud, making judgments about sounds). In one session, investigators will record electroencephalography (EEG) while participants complete tasks involving producing and hearing speech sounds; participants will be fitted with an elastic cap and up to 32 non-invasive recording electrodes. In other sessions, investigators will acquire structural and functional magnetic resonance imaging (fMRI). Structural images capture each participant's unique brain anatomy. Functional images will be obtained while the participant completes specific tasks involving listening, speaking, or other motor actions (e.g., pressing a button). All participants will be screened for MRI risk factors prior to each session.
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
BASIC_SCIENCE
Masking
NONE
Enrollment
100
Measuring speech-related brain activity using fMRI during a speech listening task.
Measuring speech-related brain activity using fMRI during a silent articulation task.
Measuring speech-related brain activity using fMRI during self-generated vs. externally-generated speech.
Measuring evoked potentials with electroencephalography (EEG) for self-generated vs. externally-generated speech.
Measuring speech-related brain activity using fMRI during conditions that induce auditory speech errors.
Measuring brain activity using fMRI during a learning task with sustained altered auditory feedback.
Behavioral measurements of speech during reading of passages and words.
Measurements of auditory acuity during listening tasks.
Mapping of brain areas using fMRI during learning of non-speech sound-evoking movements.
University of Pittsburgh
Pittsburgh, Pennsylvania, United States
RECRUITING
Blood oxygenation level dependent (BOLD) responses to self vs. externally generated speech
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will contrast measured activations in regions of interest for the LISTEN-SELF vs. PRODUCE and LISTEN-OTHER vs. PRODUCE conditions. Encoding models will predict activity in regions of interest (ROIs) from a set of speech features (see the illustrative sketch below).
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
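For illustration only, a minimal sketch of such an encoding model: a cross-validated ridge regression predicting ROI voxel time series from a speech-feature matrix. The dimensions, penalty grid, and variable names are assumptions for demonstration, not specifications from the study record.

# Hypothetical sketch: ridge-regression encoding model for one ROI.
# X holds speech features per fMRI volume; Y holds BOLD signal per voxel.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_vols, n_features, n_voxels = 400, 20, 50      # assumed dimensions
X = rng.standard_normal((n_vols, n_features))   # speech features (placeholder data)
Y = rng.standard_normal((n_vols, n_voxels))     # ROI BOLD signal (placeholder data)

model = RidgeCV(alphas=np.logspace(-2, 4, 13))  # ridge penalty grid (assumed)
# Cross-validated prediction accuracy (R^2) for each voxel in the ROI
scores = [cross_val_score(model, X, Y[:, v], cv=5).mean() for v in range(n_voxels)]
print(f"median cross-validated R^2 across voxels: {np.median(scores):.3f}")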
BOLD responses related to pre-speech auditory modulation
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will contrast measured activations in regions of interest for responses to auditory stimuli across conditions (e.g., SPEAK, REHEARSE, PLAN, SILENT).
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
EEG responses to self vs. externally generated speech
The dependent variables are evoked responses, aligned to sound onset, measured with EEG during task performance. We will contrast evoked responses across conditions (e.g., TALK, LISTEN); see the illustrative sketch below.
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
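For illustration only, a minimal sketch of the evoked-response contrast using MNE-Python, assuming a raw EEG recording with trigger codes marking the TALK and LISTEN conditions; the file name, event codes, filter settings, and epoch window are hypothetical.

# Hypothetical sketch: TALK-minus-LISTEN difference wave, time-locked to sound onset.
import mne

raw = mne.io.read_raw_fif("sub-01_task-speech_eeg.fif", preload=True)  # placeholder file
raw.filter(l_freq=0.1, h_freq=40.0)        # band-pass filter (assumed settings)
events = mne.find_events(raw)              # trigger events at sound onset
epochs = mne.Epochs(raw, events, event_id={"TALK": 1, "LISTEN": 2},
                    tmin=-0.2, tmax=0.5, baseline=(-0.2, 0.0))
evoked_talk = epochs["TALK"].average()
evoked_listen = epochs["LISTEN"].average()
# Difference wave: suppression of auditory responses to self-generated speech
difference = mne.combine_evoked([evoked_talk, evoked_listen], weights=[1, -1])
difference.plot_joint()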
BOLD responses to induced auditory errors
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will determine activations in regions of interest that correlate with applied perturbations during speech (see the illustrative sketch below). We will also compare SPEAK vs. LISTEN activations in perturbed and unperturbed conditions.
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
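For illustration only, one way such a perturbation-correlated analysis could be set up: a first-level GLM in Nilearn with trial-by-trial perturbation magnitude as a parametric modulator. Whether the study uses parametric modulation is an assumption; the file paths and column names are placeholders.

# Hypothetical sketch: voxels whose activity scales with perturbation size.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.read_csv("sub-01_task-perturb_events.tsv", sep="\t")  # placeholder path
# Nilearn scales each event's regressor by the "modulation" column; here it
# carries the size of the auditory perturbation applied on that trial.
events["modulation"] = events["perturbation_cents"]  # hypothetical column
events = events[["onset", "duration", "trial_type", "modulation"]]
model = FirstLevelModel(t_r=2.0).fit("sub-01_task-perturb_bold.nii.gz", events=events)
z_map = model.compute_contrast("SPEAK", output_type="z_score")
z_map.to_filename("perturbation_modulation_zmap.nii.gz")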
BOLD responses during adaptation to auditory perturbations
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will contrast measured activations in regions of interest for responses during the HOLD and BASELINE phases of the adaptation paradigm. We will determine areas where activation is associated with changes in formant frequencies in early and late windows of the speech recordings (see the illustrative sketch below).
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
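For illustration only, a minimal sketch of how adaptation could be quantified from formant tracks: the change in produced F1 between early and late windows of the HOLD phase, relative to the BASELINE mean. The trial counts and synthetic values are placeholders.

# Hypothetical sketch: early vs. late adaptation magnitude in Hz.
import numpy as np

rng = np.random.default_rng(1)
baseline_f1 = 700 + 10 * rng.standard_normal(20)                       # 20 baseline trials (synthetic)
hold_f1 = 700 - np.linspace(0, 30, 60) + 10 * rng.standard_normal(60)  # drift opposing the shift (synthetic)

baseline_mean = baseline_f1.mean()
early_change = hold_f1[:10].mean() - baseline_mean   # early HOLD window
late_change = hold_f1[-10:].mean() - baseline_mean   # late HOLD window
print(f"early: {early_change:.1f} Hz, late: {late_change:.1f} Hz")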
BOLD responses during learning of non-speech auditory motor targets
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will contrast measured activations in regions of interest for responses during PRESS trials across runs. We will contrast LISTEN vs. PRESS trials to measure motor-induced sensory modulation.
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
BOLD responses to speech listening task
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will contrast measured activations for the SPEECH vs. signal-correlated noise (SCN) and SPEECH vs. SILENT conditions (see the illustrative sketch below).
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
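For illustration only, a minimal sketch of the SPEECH vs. SCN contrast as a first-level GLM in Nilearn, assuming a BIDS-style events file with trial_type values SPEECH, SCN, and SILENT; the paths, repetition time, and smoothing are assumptions.

# Hypothetical sketch: z-map for the SPEECH - SCN contrast.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.read_csv("sub-01_task-listening_events.tsv", sep="\t")     # placeholder path
model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=2.0)  # assumed parameters
model = model.fit("sub-01_task-listening_bold.nii.gz", events=events)
z_map = model.compute_contrast("SPEECH - SCN", output_type="z_score")
z_map.to_filename("speech_vs_scn_zmap.nii.gz")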
BOLD responses to silent articulation task
The dependent variables (across voxels) are blood oxygenation level dependent (BOLD) fMRI measurements made during task performance. We will contrast measured activations for silent articulation vs. a resting baseline condition.
Time frame: One session lasting 2-3 hours, within 12 months of enrollment
Speech formant frequencies
We will measure participant-specific phonetic variables (formant frequencies) from speech recorded during passage reading and word production (see the illustrative sketch below).
Time frame: First session lasting 2-3 hours, within 12 months of enrollment
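For illustration only, a minimal sketch of formant extraction using the Praat backend (praat-parselmouth); the recording name and analysis settings are assumptions.

# Hypothetical sketch: median F1/F2 from a recorded word.
import numpy as np
import parselmouth

snd = parselmouth.Sound("sub-01_word.wav")  # placeholder recording
formants = snd.to_formant_burg(time_step=0.01, max_number_of_formants=5)
times = np.arange(snd.xmin + 0.025, snd.xmax - 0.025, 0.01)  # skip edge frames
f1 = [formants.get_value_at_time(1, t) for t in times]  # first formant (Hz)
f2 = [formants.get_value_at_time(2, t) for t in times]  # second formant (Hz)
print(f"median F1: {np.nanmedian(f1):.0f} Hz, median F2: {np.nanmedian(f2):.0f} Hz")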
Spontaneous Speech Synchronization Index
We will compute the Spontaneous Speech Synchronization Index from behavioral speech data (see the illustrative sketch below).
Time frame: First session lasting 2-3 hours, within 12 months of enrollment
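For illustration only, a minimal sketch of one common way to quantify spontaneous speech synchronization (e.g., in the test described by Assaneo and colleagues): the phase-locking value (PLV) between the envelope of the participant's produced speech and the heard syllable train near the typical ~4.5 Hz syllable rate. The signals, sampling rate, and frequency band here are synthetic placeholders.

# Hypothetical sketch: PLV between produced- and heard-speech envelopes.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(produced_env, stimulus_env, fs, band=(3.5, 5.5)):
    """Phase-locking value between two amplitude envelopes in a narrow band."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_diff = (np.angle(hilbert(filtfilt(b, a, produced_env)))
                  - np.angle(hilbert(filtfilt(b, a, stimulus_env))))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

fs = 100                                          # envelope sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
stimulus = 1 + np.cos(2 * np.pi * 4.5 * t)        # 4.5 Hz syllable train (synthetic)
produced = 1 + np.cos(2 * np.pi * 4.5 * t + 0.3)  # synchronized, phase-lagged (synthetic)
print(f"PLV: {plv(produced, stimulus, fs):.2f}")  # near 1.0 indicates synchronization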
Auditory acuity
We will measure auditory acuity (just-noticeable difference) for changes in formant frequencies, using listening tasks based on behavioral speech samples (see the illustrative sketch below).
Time frame: First session lasting 2-3 hours, within 12 months of enrollment
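For illustration only, a minimal sketch of estimating a just-noticeable difference (JND) by fitting a cumulative-Gaussian psychometric function to discrimination data; the shift sizes, response proportions, and the 50% threshold criterion are assumptions.

# Hypothetical sketch: JND for F1 shifts from a psychometric fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(shift_hz, mu, sigma):
    return norm.cdf(shift_hz, loc=mu, scale=sigma)

shifts = np.array([5, 10, 20, 40, 80, 160])                # F1 shift sizes in Hz (placeholder)
p_detect = np.array([0.05, 0.12, 0.35, 0.68, 0.93, 0.99])  # detection rates (placeholder)
(mu, sigma), _ = curve_fit(psychometric, shifts, p_detect, p0=[40.0, 20.0])
print(f"estimated JND (50% point): {mu:.1f} Hz")  # criterion choice is an assumption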