This study aimed to compare the patient acceptability (preference, length, and difficulty) and accuracy of Chat Generative Pre-trained Transformer (ChatGPT) responses with physician responses to questions from people with osteoarthritis (OA).
This was a cross-sectional study in which participants were invited by e-mail to complete a questionnaire comparing chatbot responses with physician responses.
Study Type
OBSERVATIONAL
Enrollment
286
Canisius Wilhelmina Hospital
Nijmegen, Netherlands
Preferred response
Binary outcome: chatbot or physician response. Each participant's overall preference was averaged across their preferences on the 7 frequently asked questions (FAQs).
Time frame: From invitation until the end of the study at two weeks
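For illustration, a minimal sketch of how this outcome could be computed is given below, assuming each FAQ preference is coded as 1 (chatbot) or 0 (physician). The coding scheme and function name are illustrative assumptions, not taken from the study protocol.

```python
from statistics import mean

# Hypothetical coding (assumption, not from the study protocol):
# 1 = participant preferred the chatbot response, 0 = the physician response.

def average_preference(faq_preferences: list[int]) -> float:
    """Average a participant's binary preferences across the 7 FAQs."""
    assert len(faq_preferences) == 7, "one preference per FAQ expected"
    return mean(faq_preferences)

# Example participant who prefers the chatbot response on 5 of 7 FAQs:
participant = [1, 1, 0, 1, 1, 0, 1]
print(average_preference(participant))  # ~0.71, i.e. leans toward the chatbot
```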
Rating of length
Both responses (chatbot and physician) were rated as 'too short', 'good', or 'too long'.
Time frame: From invitation until the end of the study at two weeks
Rating of difficulty
Both responses were rated as 'too easy', 'good', or 'too difficult'.
Time frame: From invitation until the end of the study at two weeks
Accuracy
Accuracy of each response was rated on a five-point scale: 'completely incorrect', 'partly incorrect', 'approximately equally correct and incorrect', 'mostly correct', or 'completely correct'.
Time frame: From invitation until the end of the study at two weeks