The goal of this study is to see whether this type of Artificial Intelligence (AI) Voice Assistant can reliably capture patient-reported health information to support communication between patients and their healthcare team.
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
DIAGNOSTIC
Masking
NONE
Enrollment
50
Pilot testing of speech-to-text tool using artificial intelligence
Stanford University
Palo Alto, California, United States
Accuracy of AI Voice Assistant Compared With Expert Extraction
Agreement between AI-generated summaries and expert-reviewed responses (study staff extraction of participants' responses from audio recordings). Responses will be scored on a numerical scale from 0 to 17 points, with higher scores indicating greater accuracy of the AI-generated summaries relative to the expert-reviewed responses.
Time frame: ~3 months
Usability of AI Voice Assistant
Measured using the System Usability Scale (SUS), a validated 10-item questionnaire rated on a 5-point Likert scale, with total scores ranging from 0-100
Time frame: ~3 months
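For clarity on how the SUS total is derived, a minimal sketch of the standard published SUS scoring procedure (odd items contribute response − 1, even items contribute 5 − response, and the sum is multiplied by 2.5); the study listing does not specify its scoring code, so this is illustrative only:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from 10 Likert responses (1-5).

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is multiplied
    by 2.5 to yield a total on a 0-100 scale.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires 10 responses, each rated 1-5")
    total = sum(
        r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Neutral responses (all 3s) yield the scale midpoint of 50.
print(sus_score([3] * 10))  # 50.0
```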
Agreement Between AI Voice Assistant and Patient Self-Reported Written Responses
We will compare agreement between the AI Voice Assistant summary and patient-reported written responses on the same data elements, including vital signs (heart rate, blood pressure, weight) and 14 symptom and health status questions. Responses will be scored on a numerical scale from 0 to 17 points, with higher scores indicating greater agreement between the AI-generated summaries and the patient-reported written responses.
Time frame: ~3 months
Symptom Scores
Symptom scores based on the responses to questions obtained from AI Voice Assistant and written responses
Time frame: ~3 months
Qualitative Feedback
Open-ended questions inquiring about participants' likes, dislikes, and suggestions for improvements
Time frame: ~3 months