This study meets the NIH definition of a clinical trial but is not a treatment study. Its goal is to investigate how hearing ourselves speak affects the planning and execution of speech movements, in both typical speakers and in patients with Deep Brain Stimulation (DBS) implants. The main questions it aims to answer are:

* Does the way we hear our own speech while talking affect future speech movements?
* Can the speech of DBS patients reveal which brain areas are involved in adjusting speech movements?

Participants will read words, sentences, or series of random syllables from a computer monitor while their speech is recorded. For some participants, an electrode cap is also used to record brain activity during these tasks. For DBS patients, the tasks will be performed with the stimulator ON and with the stimulator OFF.
Study Type
INTERVENTIONAL
Allocation
RANDOMIZED
Purpose
BASIC_SCIENCE
Masking
NONE
Enrollment
507
The intervention consists of manipulating real-time auditory feedback during speech production. In our lab, such feedback perturbations can be implemented with either a stand-alone digital vocal processor (a device commonly used by singers and the music industry) or with software-based signal processing routines (see Equipment section for details). Note that the study does not investigate the efficacy of these hardware or software methods to induce behavioral change in subjects' speech. Rather, the study addresses basic experimental questions regarding the general role of auditory feedback in the central nervous system's control of articulatory speech movements.
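As an illustration of the general principle behind the software-based approach, the sketch below (Python with NumPy/SciPy; it is not the lab's actual real-time system) applies a uniform spectral scaling to a recorded signal by resampling, which shifts all frequencies, formants included, by a fixed ratio. A real perturbation system would process the microphone signal frame-by-frame at very low latency; the offline version only demonstrates the nature of the manipulation.

```python
import numpy as np
from scipy.signal import resample

def shift_spectrum(signal, shift_ratio):
    """Scale all frequencies in `signal` by `shift_ratio` (e.g. 1.3 moves a
    440 Hz component to ~572 Hz) by time-compressing the waveform and playing
    it back at the original sampling rate. Offline illustration only; real-time
    feedback perturbation operates on short frames with tens of ms of delay."""
    n_out = int(round(len(signal) / shift_ratio))
    compressed = resample(signal, n_out)
    out = np.zeros_like(signal)          # keep the original duration
    n = min(len(out), len(compressed))
    out[:n] = compressed[:n]
    return out
```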
The intervention consists of manipulating real-time visual feedback during upper limb reaching movements. In our lab, such feedback perturbations can be implemented with a virtual reality display system.
Patients who have already been implanted with a DBS stimulator as part of their clinical care will be tested in two speech motor learning tasks with the stimulation ON and with the stimulation OFF. Note that (1) patients routinely turn the stimulation OFF and back ON themselves (for example, to sleep or to save battery), and (2) we are not in any way evaluating the stimulator itself or its clinical effectiveness, but only whether two forms of speech motor learning (adaptation to auditory feedback perturbation and speech sequence learning) are affected differently by having the stimulation ON or OFF. Switching the implant ON/OFF prior to the speech auditory-motor learning and speech sequence learning tasks can be performed by the subjects themselves, as all patients have a hand-held controller that they use for this purpose.
University of Washington
Seattle, Washington, United States
RECRUITING
Speech formant frequencies
The frequencies of the subject's first two formants (F1, F2) for each test word will be measured from spectrographic displays with overlaid Linear Predictive Coding formant tracks.
Time frame: Measurements will be made only from acoustic recordings made during the test session (~1 hour).
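As a minimal sketch of the underlying technique (Linear Predictive Coding via the autocorrelation method), the code below estimates candidate formant frequencies from a speech frame. This is illustrative only, not the lab's measurement pipeline: the LPC order, pole-magnitude threshold, and frequency limits are conventional textbook choices, and actual measurements are made from spectrographic displays with overlaid LPC tracks.

```python
import numpy as np

def lpc_coefficients(x, order):
    """LPC via the autocorrelation method (Levinson-Durbin recursion).
    Returns the prediction polynomial [1, a1, ..., a_order]."""
    x = x * np.hamming(len(x))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a

def candidate_formants(x, fs, order=8):
    """Frequencies (Hz) of LPC poles with reasonably narrow bandwidths,
    sorted ascending; the lowest two approximate F1 and F2."""
    roots = np.roots(lpc_coefficients(x, order))
    roots = roots[np.imag(roots) > 0]     # one root per conjugate pair
    roots = roots[np.abs(roots) > 0.8]    # discard overly broad poles
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return np.sort(freqs[(freqs > 90) & (freqs < fs / 2 - 90)])
```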
Reach direction for arm movements
Initial reach direction for arm movements indicates the movement direction that was planned before movement onset.
Time frame: Outcome measures will be made only during a single data recording session (~2 hours).
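A minimal sketch of how initial reach direction can be computed from a recorded hand trajectory; the 100 ms evaluation point and the function itself are illustrative assumptions (early enough to precede visual feedback corrections), not the study's specified analysis.

```python
import numpy as np

def initial_reach_direction(x, y, fs, t_eval=0.1):
    """Direction (degrees) of the hand's displacement t_eval seconds after
    movement onset, relative to the start position. Assumes the trajectory
    arrays begin at movement onset; 0 degrees points along +x."""
    i = int(t_eval * fs)
    return np.degrees(np.arctan2(y[i] - y[0], x[i] - x[0]))
```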
Amplitude of long-latency auditory evoked potential responses (from EEG recordings)
Amplitude of the N1 component (in microvolt) will be measured in response to both probe tones and to a subject's own speech onset.
Time frame: Measurements will be made only from electroencephalography (EEG) recordings made during the test session (~2 hours).
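The N1 measurement amounts to epoching the EEG around stimulus (or speech-onset) events, baseline-correcting, averaging, and taking the most negative deflection in the N1 latency range. The sketch below illustrates that logic; the epoch limits and 80-150 ms search window are conventional illustrative values, not the study's specified parameters.

```python
import numpy as np

def n1_peak(eeg, events, fs, tmin=-0.1, tmax=0.4, search=(0.08, 0.15)):
    """Average baseline-corrected epochs around event sample indices and
    return (amplitude, latency) of the most negative point in the `search`
    window (seconds) of the averaged evoked response."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in events:
        if s - pre < 0 or s + post > len(eeg):
            continue  # skip events too close to the recording edges
        ep = eeg[s - pre:s + post].copy()
        ep -= ep[:pre].mean()            # pre-stimulus baseline correction
        epochs.append(ep)
    erp = np.mean(epochs, axis=0)
    t = np.arange(-pre, post) / fs
    mask = (t >= search[0]) & (t <= search[1])
    idx = np.argmin(erp[mask])
    return erp[mask][idx], t[mask][idx]
```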
Local field potentials recorded by neural implants
Local field potentials (LFPs) will be recorded by the PerceptPC DBS implants and used to measure changes in power spectrum density across different phases of the tasks. Additionally, LFPs will be used to conduct event-related analyses.
Time frame: Measurements will be made only from DBS implant recordings made during the test session (~1-2 hours).
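Power spectral density changes across task phases are commonly quantified as band power via Welch's method. The sketch below shows the general computation (e.g. beta-band power, often of interest in DBS LFP work); the segment length and band limits are illustrative choices, not the study's specified parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Welch power spectral density (1 s segments) integrated over a
    frequency band given as (low_hz, high_hz)."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    m = (f >= band[0]) & (f <= band[1])
    return np.sum(pxx[m]) * (f[1] - f[0])
```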
Temporal measures of speech syllable sequence learning
1. Speech onset time (in milliseconds); 2. Average syllable duration (in milliseconds)
Time frame: Outcome measures will be made only during a single data recording session (~0.5 hours)
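Speech onset time can be extracted from the acoustic recording with a simple frame-energy threshold, as sketched below; average syllable duration can then be derived from successive energy onsets within the spoken sequence. The frame length and threshold ratio are illustrative assumptions, not the study's specified analysis parameters.

```python
import numpy as np

def speech_onset_time(audio, fs, frame=0.01, thresh_ratio=0.1):
    """Return speech onset (seconds from the start of `audio`) as the first
    10 ms frame whose RMS exceeds thresh_ratio times the maximum frame RMS,
    or None if no frame crosses the threshold."""
    n = int(frame * fs)
    nframes = len(audio) // n
    rms = np.array([np.sqrt(np.mean(audio[i * n:(i + 1) * n] ** 2))
                    for i in range(nframes)])
    above = np.nonzero(rms > thresh_ratio * rms.max())[0]
    return above[0] * n / fs if len(above) else None
```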
Accuracy during speech syllable sequence learning
Sequence accuracy (in percent)
Time frame: Outcome measures will be made only during a single data recording session (~0.5 hours)