The proposed project seeks to provide object recognition as a feature in a retinal implant system. Participants will be able to direct an object recognition application to find a desired object in the field of view of the head-mounted camera; the system will then direct the participant's view toward the object through the presentation of a recognizable icon. A prototype system will be developed and evaluated in human subjects in phase I. A full system implementation and a second phase of the trial will be completed in phase II.
The investigators propose to add an object-finding feature to a retinal prosthesis system. To use this feature, the participant will enable a special mode and input the desired object from a set of pre-programmed object types. Imagery from the visible light camera in the system eyeglasses will be processed using object recognition software as the participant scans their head across the room scene. When the object is identified in the scene by the processor, a flashing icon will be output to the epiretinal array in the appropriate position to guide the participant to the physical location of the object. Once located, the system will track the location of the object. There will be two phases to the human subjects evaluation, each run initially through simulations in sighted human subjects, followed by tests in Argus II participants. In phase 1, system evaluation in human subjects at Johns Hopkins University (JHU) will explore performance in representative tasks and compare prosthetic visual performance without and with the new object-finding feature. An important aspect of the evaluation will be the comparison of different icons and presentation modes to assist participants in locating and reaching objects. In phase 2, the system will be integrated into the Argus II video processing unit (VPU), and JHU will conduct human trials that include functional testing of the integrated prototype in representative environments and optimizing the ergonomics of the system, e.g., simultaneous finding and tracking of multiple objects/icons.
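The per-frame loop described above (detect the target in the camera image, map its position onto the electrode array, flash an icon there) can be sketched as follows. This is a minimal illustrative sketch, not the actual Argus II VPU software: the function names, the assumed 640x480 camera resolution, and the stubbed detection input are all hypothetical; only the 10x6 layout of the Argus II epiretinal array comes from the published device description.

```python
# Hypothetical sketch of the object-finding loop; names and camera
# resolution are assumptions, not the actual Argus II implementation.

CAMERA_W, CAMERA_H = 640, 480   # assumed camera resolution
GRID_W, GRID_H = 10, 6          # Argus II epiretinal array: 10x6 electrodes

def camera_to_grid(x, y):
    """Map a camera pixel coordinate to the nearest electrode-grid cell."""
    col = min(GRID_W - 1, int(x * GRID_W / CAMERA_W))
    row = min(GRID_H - 1, int(y * GRID_H / CAMERA_H))
    return col, row

def icon_frame(detection, tick):
    """Build one stimulation frame: a flashing icon at the detected
    object's position, blinking on even ticks so it stands out from
    the background percept."""
    grid = [[0] * GRID_W for _ in range(GRID_H)]
    if detection is not None and tick % 2 == 0:
        col, row = camera_to_grid(*detection)
        grid[row][col] = 1  # activate the electrode nearest the object
    return grid

# Usage: a stubbed detection at the image center lights the central cell.
frame = icon_frame((320, 240), tick=0)
print(frame[3][5])  # → 1
```

In the real system the `detection` input would come from the object recognition software running on the camera stream, and tracking would update it frame to frame as the participant scans their head.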
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
DEVICE_FEASIBILITY
Masking
NONE
Enrollment
9
The object recognition subsystem is an add-on to the Argus II retinal prosthesis system. In the early stage of the study the subsystem will run on a separate processor; in the later stage the subsystem will run in the Argus II user's video processing unit.
Johns Hopkins Hospital
Baltimore, Maryland, United States
Performance (Completion Time): Locating a Cell Phone and a Person
This outcome measure compares time to task completion without and with modalities of the subsystem for both a stationary and a mobility task. For the stationary task, participants were seated in front of a table and a cell phone was placed randomly at the center of one of ten rectangular zones. Participants were asked to find the cell phone and put their hand on its location. The time of the response was recorded (as was the distance from the cell phone, which is a separate primary outcome). For the mobility task, participants were asked to find a target person in an otherwise empty room with dark walls. Once the participant came within arm's length, the target person would initiate a handshake. The time and number of steps (a separate primary outcome) were recorded. If the person was not found within 5 minutes, the task was stopped and scored as incomplete.
Time frame: Time in seconds to complete the task. Stationary task time was the time to place a hand on the table at the cell phone's location, and mobility task time was the time to the handshake with the target person.
Accuracy (Distance From Target)
This outcome measure compares task completion (accuracy to a target) without and with modalities of the subsystem for both a stationary and a mobility task. For the stationary task, participants were seated in front of a table and a cell phone was placed randomly at the center of one of ten rectangular zones. Participants were asked to find the cell phone and put their hand on its location. The distance to the cell phone in centimeters was recorded. For the mobility task, participants were asked to find a target person in an otherwise empty room with dark walls. Once the participant came within arm's length, the target person would initiate a handshake. The number of steps was recorded. If the person was not found within 5 minutes, the task was stopped and scored as incomplete.
Time frame: Measuring distance to the cell phone required up to 30 minutes per mode, and locating the person required up to 45 minutes per mode.