Patients with hearing loss who use cochlear implants (CIs) show significant deficits, and strong unexplained intersubject variability, in their perception and production of emotions in speech. This project will investigate the hypothesis that "cue-weighting", or how patients utilize different acoustic cues to emotion, accounts for significant variance in emotional communication with CIs. The studies will focus on children with CIs, but parallel measures will be made in postlingually deaf adults with CIs, ensuring that the results benefit social communication by CI patients across the lifespan by informing the development of technological innovations and improved clinical protocols.
Emotion communication is a fundamental part of spoken language. For patients with hearing loss who use cochlear implants (CIs), detecting emotions in speech poses a significant challenge. Deficits in vocal emotion perception observed in both children and adults with CIs have been linked with poor self-reported quality of life. For young children, learning to identify others' emotions and to express one's own is a fundamental aspect of social development. Yet little is known about the mechanisms and factors that shape vocal emotion communication by children with CIs. Primary cues to vocal emotions (voice characteristics such as pitch) are degraded in CI hearing, but secondary cues such as duration and intensity remain accessible to patients. It is proposed that individual CI users' auditory experience with their device plays an important role in how they utilize these different cues and map them to corresponding emotions.

In previous studies, the Principal Investigator (PI) and the PI's team conducted foundational research that identified key predictors of vocal emotion perception and production by pediatric CI recipients. The work proposed here will use novel methodologies to investigate how the specific acoustic cues used in emotion recognition by CI patients change with increasing device experience (Aim 1) and how the specific cues emphasized in vocal emotion productions by CI patients change with increasing device experience (Aim 2). Studies will include both cross-sectional and longitudinal approaches. The team's long-term goal is to improve emotional communication by CI users. The overall objectives of this application are to address critical gaps in knowledge by elucidating how cue utilization (the reliance on different acoustic cues) for vocal emotion perception (Aim 1) and production (Aim 2) is shaped by CI experience. The knowledge gained from these studies will provide the evidence base for clinical protocols that support emotional communication by pediatric CI recipients, and will thus benefit quality of life for CI users.

The hypotheses to be tested are: [H1] that cue-weighting accounts significantly for inter-subject variation in vocal emotion identification by CI users; [H2] that optimization of cue-weighting patterns is the mechanism by which predictors such as duration of device experience and age at implantation benefit vocal emotion identification; and [H3] that in children with CIs, the ability to utilize voice pitch cues to emotion, together with early auditory experience (e.g., age at implantation and/or presence of usable hearing at birth), accounts significantly for inter-subject variation in emotional productions. The two Specific Aims will test these hypotheses while taking into account other factors such as cognitive and socioeconomic status, theory of mind, and psychophysical sensitivity to individual prosodic cues.

This is a prospective design involving human subjects who are children and adults. Participants will perform two kinds of tasks: 1) listening tasks, in which participants listen to speech or nonspeech sounds and make a judgment about what they hear, interacting with a software program on a computer screen; and 2) speaking tasks, in which participants read aloud a list of simple sentences in a happy way and a sad way, or converse with a member of the research team, retelling a picture-book story or describing an activity of their choosing.
Participants' speech will be recorded, analyzed acoustically, and also used as stimuli in listening tasks. In addition, participants will be invited to complete tests of cognition, vocabulary, and theory of mind. Participants will not be assigned to groups in any of the Aims, and there will be no control group. In parallel with cochlear implant patients, the team will test normally hearing listeners spanning a similar age range to characterize how the intact auditory system processes emotional cues in speech, in both perception and production. Effects of patient factors such as hearing history, experience with the cochlear implant, and cognition will be investigated using regression-based models, as sketched below. All patients will be invited to participate in all studies, with no assignment, until the sample-size target is met for each study. The order of tests will be randomized as appropriate to avoid order effects.
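As a minimal sketch of the kind of regression-based analysis described above: the variable names (percent_correct, age_at_implantation, device_experience_years, cognition_score, pitch_cue_weight) and the data file are hypothetical placeholders, not the study's actual variables or pipeline.

```python
# Sketch: regression-based model relating patient factors to vocal
# emotion recognition accuracy. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data assembled from the study tasks.
data = pd.read_csv("participants.csv")

# Ordinary least squares: does cue-weighting explain variance in
# emotion recognition beyond hearing-history predictors? (cf. H1/H2)
model = smf.ols(
    "percent_correct ~ age_at_implantation + device_experience_years"
    " + cognition_score + pitch_cue_weight",
    data=data,
).fit()
print(model.summary())
```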
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
BASIC_SCIENCE
Masking
NONE
Enrollment
255
Using novel methodologies and stimuli comprising both controlled laboratory recordings and materials culled from databases of ecologically valid speech emotions (e.g., from publicly available podcasts), the team will collect perceptual data and build a statistical model to test the hypothesis that experience-based changes in emotion identification by pediatric and adult CI recipients are mediated by improvements in cue optimization. One way such cue weights might be quantified is sketched below.
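The sketch below illustrates one possible way to estimate a listener's relative cue weights from trial-level responses, assuming a two-alternative (happy vs. sad) judgment and standardized acoustic cue values per stimulus; the cue values, task structure, and use of scikit-learn are illustrative assumptions, not the study's stated method.

```python
# Illustrative sketch: estimating relative cue weights from trial-level
# responses in a hypothetical two-alternative (happy vs. sad) task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# X: per-trial acoustic cues of each stimulus (mean F0 in Hz,
# duration in s, intensity in dB); y: 1 = "happy" response.
# Values are made up for illustration.
X = np.array([[220.0, 1.2, 68.0],
              [180.0, 1.6, 63.0],
              [240.0, 1.1, 70.0],
              [170.0, 1.7, 62.0]])
y = np.array([1, 0, 1, 0])

# Standardize cues so fitted coefficients are comparable across cues.
X_z = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(X_z, y)

# Relative cue weights: normalized absolute coefficients.
weights = np.abs(clf.coef_[0]) / np.abs(clf.coef_[0]).sum()
for cue, w in zip(["pitch", "duration", "intensity"], weights):
    print(f"{cue}: {w:.2f}")
```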
The team will acoustically analyze vocal emotion productions by participants, quantify acoustic features of spoken emotions, and obtain behavioral measures of how well normally hearing listeners can identify those emotions.
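As one illustration of the acoustic analyses described above, the sketch below extracts the three measures named in the outcome list (voice pitch, intensity, duration) using the parselmouth interface to Praat; the file name is a placeholder, and the study's actual analysis pipeline is not specified here.

```python
# Sketch: extracting F0, intensity, and duration from a recorded
# production with Praat via parselmouth. "utterance.wav" is a placeholder.
import parselmouth

snd = parselmouth.Sound("utterance.wav")

# Mean voice pitch (Hz), ignoring unvoiced frames (F0 == 0).
f0 = snd.to_pitch().selected_array["frequency"]
mean_f0 = f0[f0 > 0].mean()

# Mean intensity (dB) across analysis frames.
mean_db = snd.to_intensity().values.mean()

print(f"duration: {snd.duration:.2f} s, "
      f"mean F0: {mean_f0:.1f} Hz, mean intensity: {mean_db:.1f} dB")
```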
Arizona State University
Tempe, Arizona, United States (RECRUITING)
House Institute Foundation
Los Angeles, California, United States (RECRUITING)
Northwestern University
Evanston, Illinois, United States (NOT_YET_RECRUITING)
Boys Town National Research Hospital
Omaha, Nebraska, United States (RECRUITING)
Vocal emotion recognition accuracy
Percent correct scores in vocal emotion recognition
Time frame: Years 1-5
Vocal emotion recognition sensitivity
Sensitivity (d' values) in vocal emotion recognition
Time frame: Years 1-5
Voice pitch (fundamental frequency) of vocal productions
Voice pitch (Hz) measured from acoustic analyses of recorded speech
Time frame: Years 1-5
Intensity of vocal productions
Intensity (decibel units) measured from acoustic analyses of recorded speech
Time frame: Years 1-5
Duration of vocal productions
Duration (seconds; the inverse of speaking rate) measured from acoustic analyses of recorded speech
Time frame: Years 1-5
Recognition of recorded speech emotions by listeners -- percent correct scores
Percent correct scores in listeners' ability to identify the emotions recorded in participants' speech
Time frame: Years 1-5
Recognition of recorded speech emotions by listeners -- d' values (sensitivity measure)
Sensitivity (d' values based on hit rates and false-alarm rates; see the sketch following the outcome measures) in listeners' ability to identify the emotions recorded in participants' speech
Time frame: Years 1-5
Reaction times (seconds) for vocal emotion identification
Time between the end of the stimulus recording and the response (button press)
Time frame: Years 1-5
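Several outcome measures above report sensitivity as d', conventionally computed as the difference of z-transformed hit and false-alarm rates. A minimal sketch follows; the log-linear correction shown is one common convention and an assumption here, not necessarily the study's exact procedure.

```python
# Conventional d' (sensitivity) from hit and false-alarm rates:
# d' = z(H) - z(F), with z the inverse of the standard normal CDF.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction (add 0.5 per cell) to avoid infinite z
    # when a rate is exactly 0 or 1; one common convention.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

print(d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```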