This study examines whether individual differences in how speakers respond to hearing versus physical sensation during speech can predict who benefits most from visual feedback during a speech task. Healthy adults will complete a series of tasks in which auditory feedback is altered in real time through headphones, with and without an added visual display of the speech signal. A computational model will be used to estimate how strongly each participant relies on hearing versus physical sensation when monitoring speech. The study will then test whether this individual profile predicts how much the visual display improves each participant's ability to respond to the altered feedback.
During speech, the brain relies on multiple sources of feedback, including auditory input and physical sensations from the tongue and lips, to monitor and adjust speech in real time. People differ in how much they rely on each of these feedback sources, and these individual differences may predict who benefits most from different types of technological support for speech learning.

This study examines whether a computational model of individual response to sensory feedback can predict how much a person benefits from the addition of real-time visual feedback during a speech task. Participants will complete a series of tasks in which auditory feedback of speech is altered in real time through headphones. A computational model will then be used to estimate how strongly each participant relies on hearing versus physical sensation when monitoring speech. Participants will then repeat the task with an added visual display showing the speech signal alongside a target, providing an additional source of feedback. The primary question is whether the computational profile of sensory feedback use, specifically whether a person relies more heavily on hearing or on physical sensation, predicts how much the visual display improves each participant's ability to respond to the altered feedback.

At baseline, participants complete two versions of an altered auditory feedback (AAF) task without visual feedback. The first uses a "fast adapt" design in which altered feedback is introduced and withdrawn repeatedly across short experimental runs; the second uses a standard adaptive design in which the altered feedback is introduced once and maintained for a longer run. Performance in both tasks is submitted to SimpleDIVA computational modeling to estimate participant-specific auditory and somatosensory response parameters. The use of two task variants allows evaluation of the stability of parameter estimates across elicitation conditions.

Participants then complete a standard adaptive AAF task with simultaneous real-time visual-acoustic biofeedback. The visual display presents the current audio playback signal (i.e., the altered signal) alongside a visual target derived from the participant's baseline production.

The primary outcome measure is visual gain, defined as the within-participant difference in compensation magnitude between the auditory-visual feedback condition and the auditory-only condition of the standard adaptive AAF task. The primary analysis uses linear regression to test whether SimpleDIVA parameters from the baseline phase predict the magnitude of visual gain. The predictor of primary interest is the ratio of auditory to total sensory weighting, reflecting the relative dominance of auditory versus somatosensory feedback. Baseline compensation magnitude in the auditory-only condition will be included as a covariate. A paired-samples t-test will also evaluate the overall effect of adding visual feedback on compensation magnitude across the sample.
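The record describes the analysis plan in prose only. The Python sketch below illustrates one way the primary and secondary analyses could be implemented; the data file, column names (comp_aud, comp_av, alpha_aud, alpha_som), and the use of statsmodels and scipy are illustrative assumptions, not part of the registered protocol.

    # Hypothetical analysis sketch for the primary outcome. File and column
    # names are illustrative assumptions, not part of the registered protocol.
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    # Assumed per-participant table with:
    #   comp_aud  - compensation magnitude, auditory-only standard adaptive task
    #   comp_av   - compensation magnitude, auditory-visual condition
    #   alpha_aud - SimpleDIVA auditory feedback weighting estimate
    #   alpha_som - SimpleDIVA somatosensory feedback weighting estimate
    df = pd.read_csv("participants.csv")

    # Primary outcome: visual gain, the within-participant difference in
    # compensation between the auditory-visual and auditory-only conditions.
    df["visual_gain"] = df["comp_av"] - df["comp_aud"]

    # Predictor of primary interest: ratio of auditory to total sensory weighting.
    df["aud_ratio"] = df["alpha_aud"] / (df["alpha_aud"] + df["alpha_som"])

    # Primary analysis: linear regression testing whether the auditory weighting
    # ratio predicts visual gain, with baseline (auditory-only) compensation
    # magnitude included as a covariate.
    model = smf.ols("visual_gain ~ aud_ratio + comp_aud", data=df).fit()
    print(model.summary())

    # Secondary analysis: paired-samples t-test for the overall effect of adding
    # visual feedback on compensation magnitude across the sample.
    t_stat, p_val = stats.ttest_rel(df["comp_av"], df["comp_aud"])
    print(f"Paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")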
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
BASIC_SCIENCE
Masking
NONE
Enrollment
40
Participants produce speech while hearing real-time altered auditory feedback delivered through headphones. Two task variants are administered: a fast-adapt design in which the altered feedback is introduced and withdrawn repeatedly, and a standard adaptive design in which the altered feedback is introduced once and maintained for an extended run. Administered to all participants as the baseline condition.
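As an illustration of the two task variants, the sketch below generates hypothetical trial-by-trial perturbation schedules. Trial counts, ramp lengths, and the 30% shift magnitude are assumed values chosen for illustration; the record does not specify them.

    # Illustrative sketch of the two altered-feedback schedules described above.
    # All numeric values are assumptions, not protocol specifications.
    import numpy as np

    def standard_adaptive_schedule(n_baseline=20, n_ramp=20, n_hold=60,
                                   n_washout=20, max_shift=0.30):
        """Single sustained perturbation: baseline, gradual ramp, hold, washout."""
        return np.concatenate([
            np.zeros(n_baseline),
            np.linspace(0.0, max_shift, n_ramp),
            np.full(n_hold, max_shift),
            np.zeros(n_washout),
        ])

    def fast_adapt_schedule(n_cycles=4, n_on=15, n_off=15, max_shift=0.30):
        """Perturbation repeatedly introduced and withdrawn across short runs."""
        one_cycle = np.concatenate([np.full(n_on, max_shift), np.zeros(n_off)])
        return np.tile(one_cycle, n_cycles)

    # Each array gives the fractional formant shift applied on each trial.
    print(standard_adaptive_schedule().shape, fast_adapt_schedule().shape)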
Participants perform the standard adaptive auditory feedback task with the addition of a real-time visual display. The visual display presents the altered auditory signal alongside a visual target derived from the participant's baseline production. Administered to all participants following the auditory-only baseline phase.
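The record does not specify how the visual target is computed from the baseline production. A minimal sketch, assuming the target is the mean F1/F2 of the participant's unperturbed baseline trials, is shown below; the averaging rule and formant representation are assumptions for illustration only.

    # Hypothetical derivation of a baseline visual target as mean F1/F2 (Hz).
    import numpy as np

    def baseline_visual_target(baseline_f1_hz, baseline_f2_hz):
        """Return the (F1, F2) point displayed as the visual target."""
        return float(np.mean(baseline_f1_hz)), float(np.mean(baseline_f2_hz))

    # Example: baseline formants (Hz) from a handful of trials.
    f1 = [680, 695, 672, 688]
    f2 = [1120, 1135, 1110, 1128]
    print(baseline_visual_target(f1, f2))  # -> (683.75, 1123.25)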
New York University
New York, New York, United States
Mean difference in F1 compensation magnitude between auditory-visual and auditory-only feedback conditions of the standard adaptive feedback task
Visual gain is defined as the within-participant difference in acoustic compensation magnitude (measured from formant frequencies) between the auditory-visual feedback condition and the auditory-only condition of the standard adaptive feedback task. Greater visual gain indicates a larger compensatory response when visual feedback is available.
Time frame: During study visit (Day 1)