The goal of this study is to evaluate the performance of large language models, e.g. ChatGPT, in writing preoperative visit sheets from clinical records. The main questions it aims to answer are:

* Can large language models read clinical records and write preoperative visit sheets as well as physicians do?
* Can physicians distinguish preoperative visit sheets written by physicians from those written by models?

Participants' records will be generated using ChatGPT-4, then read by both ChatGPT-4 and physicians to produce two separate preoperative visit sheets, forming two groups: the GPT group and the physician group, respectively. A panel of professionals will compare the results of these two groups to determine whether ChatGPT is capable of writing preoperative visit sheets.
Study Type
OBSERVATIONAL
Enrollment
120
Subjects' clinical records will be read by ChatGPT-4 to write preoperative visit sheets.
Subjects' clinical records will be read by clinicians to write preoperative visit sheets.
Familiarity with human writing
Based on the professionals' subjective evaluation, familiarity is a binary outcome indicating whether the professionals believe the preoperative visit sheet was written by a physician or by ChatGPT, where 0 indicates "not written by a human" and 1 indicates "written by a human".
Time frame: Each record will be screened for at most 6 minutes, and results will be given within 8 minutes of receiving a record.
Satisfaction with clinical use
A grading score of 1-10 indicating whether the professionals consider the preoperative visit sheet suitable for clinical use.
Time frame: Each record will be screened for at most 6 minutes, and results will be given within 8 minutes of receiving a record.