How do we know what's important to look at in the environment? Sometimes we need to look at objects because they are 'salient' (for example, the bright flashing lights of a police car, or the stripes of a venomous animal), while at other times we need to ignore irrelevant salient locations and focus only on locations we know to be 'relevant'. These behaviors are often explained by the use of 'priority maps', which index the relative importance of different locations in the visual environment based on both their salience and their relevance.

In this research, we aim to understand how these factors interact in determining what's important to look at. Specifically, we are evaluating the extent to which the visual system computes the salience of objects at locations known to be irrelevant. We're testing the hypothesis that the visual system always computes maps of salient locations within 'feature maps', but that activity from these maps is not read out to guide behavior at task-irrelevant locations.

We'll have people look at displays containing colored shapes and/or moving dots and report aspects of the visual stimulus (e.g., the orientation of a line within a particular stimulus). We'll measure response times across conditions in which we manipulate the presence/absence of salient distracting stimuli and provide various kinds of cues about the potential relevance of different locations on the screen. The rationale is that by measuring changes in visual search behavior (and thus inferring the computations performed on brain representations), we can determine how these aspects of simplified visual environments impact the brain's representation of important object locations. This will support future studies using brain imaging techniques aimed at identifying the neural mechanisms that extract salient and relevant locations from visual scenes, which can in turn inform the diagnosis and treatment of disorders that impair visual search (e.g., schizophrenia, Alzheimer's disease).
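As a minimal, hypothetical sketch of the hypothesis under test (the array names, gating scheme, and weights below are illustrative assumptions, not the study's actual model), salience could be computed unconditionally within feature maps while its readout into a priority map is gated by task relevance:

# Minimal illustrative sketch of the hypothesized priority-map computation.
# All names and parameters here are assumptions for illustration: salience
# is always computed within feature maps, but its readout is gated by
# task relevance.
import numpy as np

def priority_map(feature_maps, relevance, salience_weight=1.0):
    """Combine per-feature salience with a spatial relevance gate.

    feature_maps : list of 2D arrays, one salience map per feature
                   dimension (e.g., color, motion).
    relevance    : 2D array in [0, 1]; 0 marks locations known to be
                   task-irrelevant, 1 marks relevant locations.
    """
    # Salience is computed unconditionally across all feature maps ...
    salience = np.sum(feature_maps, axis=0)
    # ... but, per the hypothesis, it is only read out (allowed to guide
    # behavior) at task-relevant locations.
    return relevance * salience_weight * salience

# Example: a salient item at a location cued as irrelevant contributes
# nothing to the final priority map.
color_map = np.zeros((4, 4)); color_map[1, 1] = 5.0    # salient color item
motion_map = np.zeros((4, 4)); motion_map[2, 3] = 3.0  # salient motion item
relevance = np.ones((4, 4)); relevance[1, 1] = 0.0     # cued-as-irrelevant
print(priority_map([color_map, motion_map], relevance))

Under this sketch, a salient distractor at an irrelevant location leaves the priority map unchanged, which is the pattern the response-time manipulations below are designed to detect.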
Study Type
Interventional
Allocation
N/A
Purpose
Basic Science
Masking
None
Enrollment
50
The location of the target item in the display will vary across trials (appearing left, right, up, or down)
A proportion of all trials will contain a task-irrelevant singleton distractor defined in a non-target dimension (e.g., a motion distractor during search for a color-defined target)
Cue validity will vary across trials: a cue is valid when the direction of the visual cue (an arrowhead around the fixation point, pointing right, left, up, or down) matches the actual target location, and invalid when it does not (see the trial-schedule sketch below)
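A hypothetical sketch of how these three manipulated factors might be crossed into a trial schedule follows. The factor levels match the record, but the specific proportions (distractor on half of trials, 75% cue validity) are illustrative assumptions, not the study's actual design parameters:

# Hypothetical trial-schedule sketch crossing the manipulated factors
# described above. Proportions are illustrative assumptions.
import random

LOCATIONS = ["left", "right", "up", "down"]

def make_trial(p_distractor=0.5, p_valid_cue=0.75):
    target_loc = random.choice(LOCATIONS)
    trial = {
        "target_location": target_loc,
        # Task-irrelevant singleton defined in a non-target dimension
        # (e.g., a motion distractor during color search).
        "distractor_present": random.random() < p_distractor,
    }
    # Cue validity: the arrowhead cue either matches (valid) or
    # mismatches (invalid) the actual target location.
    if random.random() < p_valid_cue:
        trial["cue_direction"] = target_loc
    else:
        trial["cue_direction"] = random.choice(
            [loc for loc in LOCATIONS if loc != target_loc])
    trial["cue_valid"] = trial["cue_direction"] == target_loc
    return trial

schedule = [make_trial() for _ in range(10)]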
University of California, Santa Barbara
Santa Barbara, California, United States
Behavioral response (button press)
Participants will be required to report the orientation of a line (horizontal or vertical) within the target via a speeded button press. The specific values of color, shape, and motion will vary randomly from trial to trial. Participants will complete separate sessions, each directing them to search for a target defined in a different feature dimension.
Time frame: Through study completion, an average of two weeks
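As a hypothetical illustration of how this response-time measure might be summarized (the record does not specify the analysis; the field names and the correct-trials-only rule below are assumptions):

# Hypothetical summary of the response-time measure: mean RT for correct
# trials, split by distractor presence and cue validity.
from collections import defaultdict

def summarize_rt(trials):
    """trials: iterable of dicts with keys 'rt' (seconds), 'correct',
    'distractor_present', and 'cue_valid'."""
    buckets = defaultdict(list)
    for t in trials:
        if not t["correct"]:
            continue  # analyze correct responses only
        key = (t["distractor_present"], t["cue_valid"])
        buckets[key].append(t["rt"])
    return {key: sum(rts) / len(rts) for key, rts in buckets.items()}

# A distractor-presence cost would appear as slower mean RT when the
# task-irrelevant singleton is present than when it is absent.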
Gaze position
The investigators will use the measured gaze position in (x,y) coordinates to verify stable fixation throughout the experiment. The data will be used to establish gaze fixation and/or track where participants look as they perform the visual search task. Trials with poor fixation performance may be excluded from further analyses.
Time frame: Through study completion, an average of two weeks
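A minimal sketch of the kind of fixation check described above, assuming gaze samples in degrees of visual angle and an illustrative 1.5-degree tolerance (the actual exclusion criterion is not specified in the record):

# Minimal fixation-stability check: a trial is flagged for exclusion if
# gaze strays beyond a tolerance radius around fixation. The threshold
# is an illustrative assumption.
import math

def stable_fixation(gaze_samples, fixation=(0.0, 0.0), max_deg=1.5):
    """gaze_samples: list of (x, y) gaze positions in degrees of visual
    angle; returns False if any sample exceeds the tolerance."""
    fx, fy = fixation
    return all(math.hypot(x - fx, y - fy) <= max_deg
               for x, y in gaze_samples)

# Trials failing this check may be excluded from further analyses.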