Large language models (LLMs) show promise in medicine, but concerns about their accuracy, coherence, transparency, and ethics remain. To date, public perceptions of using LLMs in medicine, and whether those perceptions play a role in the acceptability of health care applications of LLMs, are not fully understood. This study aims to investigate public perceptions of using LLMs in medicine and whether perception-based interventions affect the acceptability of health care applications of LLMs.
Owing to rapid advances in artificial intelligence, large language models (LLMs) are increasingly being used in a variety of clinical settings, such as triage, disease diagnosis, treatment planning, and self-monitoring. Despite their potential, the use of LLMs in health care settings remains restricted because of concerns about their accuracy, coherence, and transparency, as well as ethical concerns. Public perceptions, such as perceived usefulness and perceived risks, play a crucial role in shaping attitudes toward artificial intelligence and can either facilitate or hinder its adoption. Yet, to our knowledge, there is a lack of awareness of perception-driven interventions in health care, and no previous studies have examined whether public perceptions play a role in the acceptability of medical applications of LLMs. Hence, this study aims to investigate public perceptions of using LLMs in medicine and whether perception-based interventions affect the acceptability of health care applications of LLMs.
Study Type
INTERVENTIONAL
Allocation
RANDOMIZED
Purpose
OTHER
Masking
SINGLE
Enrollment
3,000
Participants allocated to the intervention groups received perception-based interventions. The interventions for Groups 1, 2, and 3 addressed perceived benefits of LLMs in medicine, perceived racial bias in LLMs in medicine, and perceived ethical conflicts in LLMs in medicine, respectively.
Jue Liu
Beijing, Beijing Municipality, China
Number of participants who will change their attitudes towards medical applications of large language models
Public acceptance of applying large language models to medicine will be categorized as yes, not sure, or no, and will be collected both before and after the perception-based interventions.
Time frame: Through study completion, an average of 1 year