The purpose of this study is to understand how the sensory and motor areas of the brain work together to keep a person's hand movements accurate (sensorimotor learning). The investigators hope this information may one day be useful for improving rehabilitation techniques in patients with brain lesions.
Human perception of hand position is multisensory. The brain can estimate it visually, from the image on the retina, and proprioceptively, from receptors in the joints and muscles. The sensory inputs underlying these percepts are subject to changes in environmental factors (e.g., lighting) and internal factors (e.g., movement history). Multisensory integration of visual and proprioceptive estimates gives us the flexibility to cope with such changes. For example, washing dishes with the hands immersed in water creates a spatial misalignment between vision and proprioception, because water refracts light. The brain resolves this conflict by realigning visual and/or proprioceptive estimates of hand position, and also by adjusting motor commands (visuomotor adaptation). The neural basis of these adaptive processes is poorly understood. The purpose of this study is to determine whether multisensory and visuomotor learning are accompanied by changes in resting-state connectivity between sensory regions of the brain and other areas.

The first session is a familiarization session for functional magnetic resonance imaging (fMRI) and the behavioral task, and is expected to last 30-40 minutes. Subjects will first fill out screening forms to confirm the answers given during the initial screening, as well as the Edinburgh handedness inventory to quantify their handedness. Subjects who remain eligible will lie in a mock scanner and perform the functional task: the subject's left index finger is taped to a wooden stick, an experimenter from the team manipulates the finger with the stick from outside the scanner, and the subject responds to the different movements by pressing buttons with the right hand. Subjects will also be introduced to the behavioral task, which is performed at an apparatus in the room next to the scanner: the subject sits in front of a touchscreen and points to targets seen in a mirror.
If subjects are interested in moving on to the main session, it will be scheduled at this point. The main session will take about 2 hours. Subjects will first fill out the MR safety screening form and then complete a few practice trials of the behavioral task as a reminder. This will be followed by the first resting-state scan (12 min), a 20-30 minute baseline block of the behavioral task (no learning), a second resting-state scan (12 min), the 20-30 minute learning block of the behavioral task, and a third resting-state scan (12 min). Finally, the subject will perform the functional task in the scanner (the same as in the familiarization session; 12 min total) and an anatomical scan (~6 min). The session will conclude with some questions about the subject's subjective experience of the procedures.
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
BASIC_SCIENCE
Masking
NONE
Enrollment
76
Reaching task with visual feedback offset from target finger position.
Indiana University Bloomington
Bloomington, Indiana, United States
Resting-state PMv-M1 Functional Connectivity (Fisher Z-transformed Correlation)
Brain activity was measured during a 12-minute resting-state functional magnetic resonance imaging (fMRI) scan. Resting-state functional connectivity between the ventral premotor cortex (PMv) and primary motor cortex (M1) was quantified as the Pearson correlation coefficient between the mean blood-oxygen-level-dependent (BOLD) time series extracted from anatomically defined PMv and M1 regions of interest. Correlation coefficients were Fisher z-transformed to improve normality. For each participant, connectivity values were averaged across all scans acquired during the main session. The outcome measure is the mean Fisher z-transformed correlation value (unitless). Values range from -1 (opposite activity in the two regions) to 1 (similar activity in the two regions), with 0 implying no relationship between the two regions.
Time frame: 3 scans during the main session (2 hours)
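The connectivity computation described above can be sketched as follows. This is a minimal illustration, not the investigators' analysis pipeline: the function name and the data layout (a list of per-scan PMv/M1 mean time-series pairs) are assumptions, and a real analysis would extract the ROI time series from preprocessed fMRI data.

```python
import math

def pmv_m1_connectivity(scans):
    """Mean Fisher z-transformed PMv-M1 connectivity across resting-state scans.

    `scans` is a list of (pmv_ts, m1_ts) pairs; each element is the mean
    BOLD time series (a sequence of floats) from the PMv or M1 region of
    interest for one scan. Names and data layout are illustrative only.
    """
    z_values = []
    for pmv_ts, m1_ts in scans:
        n = len(pmv_ts)
        mx = sum(pmv_ts) / n
        my = sum(m1_ts) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(pmv_ts, m1_ts))
        sx = math.sqrt(sum((x - mx) ** 2 for x in pmv_ts))
        sy = math.sqrt(sum((y - my) ** 2 for y in m1_ts))
        r = cov / (sx * sy)             # Pearson correlation coefficient
        z_values.append(math.atanh(r))  # Fisher z-transform
    # Average the z-values across all scans from the main session
    return sum(z_values) / len(z_values)
```

Note that the Fisher z-transform (atanh) maps the correlation r from [-1, 1] onto the real line, which is what makes averaging across scans and participants statistically better behaved.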
Weighting of Vision vs. Proprioception
Weighting is measured by comparing where subjects point on the touchscreen when estimating visual vs. proprioceptive target positions. A person who relies only on vision would have a weighting value of 100%. A person who relies only on proprioception would have a weighting of 0%. A value of 50% would imply equal reliance on vision and proprioception.
Time frame: Measured during the main session (2 hours)
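The record does not give the formula behind this outcome. A common approach in the multisensory-integration literature models the combined (bimodal) position estimate as a weighted average of the unimodal visual and proprioceptive estimates; under that assumption, the visual weight can be recovered from pointing endpoints. The function below is a hypothetical sketch of that idea, not the study's actual computation:

```python
def visual_weight(bimodal, visual_only, proprio_only):
    """Estimate reliance on vision as a percentage (hypothetical formula).

    Each argument is a pointing position (e.g., an x-coordinate in cm)
    toward the same physical target under one cue condition: both cues,
    vision only, or proprioception only. Assumes the bimodal estimate is
    a weighted average of the two unimodal estimates:
        bimodal = w * visual_only + (1 - w) * proprio_only
    Returns 100 for pure reliance on vision, 0 for pure proprioception.
    """
    span = visual_only - proprio_only
    if span == 0:
        raise ValueError("visual and proprioceptive estimates coincide")
    w = (bimodal - proprio_only) / span
    return 100.0 * w
```

For instance, a bimodal endpoint halfway between the vision-only and proprioception-only endpoints yields a weighting of 50%, matching the interpretation given above.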