The goal of this prospective observational study is to evaluate the ability of three large language models (ChatGPT-4o, Gemini Advanced, and Claude 3.7) to support diagnosis and treatment decision-making in adult patients presenting with common endodontic conditions.

The main questions the study aims to answer are:

* Can LLMs accurately determine the endodontic diagnosis when provided with structured clinical information and periapical radiographs?
* Can LLMs propose appropriate treatment plans comparable to decisions made by endodontic specialists?

To answer these questions, researchers will compare the diagnostic and treatment accuracy of the three AI models against a consensus diagnosis from endodontic specialists as the reference standard.

Participants will:

* Receive a routine endodontic examination and periapical radiographs as part of standard clinical care.
* Have their anonymized clinical histories and radiographs entered into the three AI models.
* Not interact directly with any AI system; all evaluations will be performed by the research team.

This study aims to understand how large language models perform under real-world clinical conditions and whether these systems may play a supportive role in endodontic diagnostics in the future.
This prospective observational study aims to evaluate the real-time diagnostic and treatment decision-making performance of three large language models (ChatGPT-4o, Gemini Advanced, and Claude 3.7) in an endodontic clinical setting. A total of 120 patients presenting to the endodontic clinic were examined, and detailed medical/dental histories, clinical findings, and periapical radiographs were collected. Each anonymized case was then presented to the three LLMs using a standardized prompt requesting the diagnosis and an appropriate treatment plan. All models were used in their default multimodal configurations, without web-search functions, plug-ins, or external data retrieval enabled. Each prompt was submitted only once, in an isolated chat session, to prevent memory carry-over between cases. Responses were saved verbatim and compared with the reference diagnoses and treatment plans established by a panel of endodontic specialists.

This study was designed to mimic real-world clinical conditions as closely as possible, providing a realistic assessment of how these systems might perform when used by clinicians in everyday practice. Understanding their capabilities and limitations in authentic clinical scenarios is essential, as LLMs are expected to play an increasingly important role in future dental care, particularly in decision support, triage, and patient education. By identifying where these models perform well and where they fall short, this research aims to inform their safe and effective clinical integration as LLM technologies continue to advance.
Study Type
OBSERVATIONAL
Enrollment
120
Participants' anonymized clinical information, including structured patient history and periapical radiographs, was used as input for three large language models (ChatGPT-4o, Gemini Advanced, Claude 3.7). The models were asked to determine the endodontic diagnosis and propose an appropriate treatment plan. No treatment, device, or drug was administered to participants. The intervention consists solely of AI-based interpretation of pre-existing clinical data.
Faculty of Dentistry, Marmara University
Maltepe, Istanbul, Turkey (Türkiye)
Clinician Diagnosis Accuracy Based on Paper-Based History and Periapical Radiograph
Assessment of the diagnostic decision made by endodontic clinicians after reviewing a paper-based patient history form and a standardized periapical radiograph. Accuracy is determined by comparing the clinician's diagnosis with the consensus diagnosis established by three independent endodontic specialists. Data will be collected for all 120 patients at the time of initial clinical evaluation.
Time frame: 7 July to 5 August
LLM-Generated Diagnosis and Treatment Planning Performance
Evaluation of the diagnostic and treatment recommendations generated by the three large language models (ChatGPT-4o, Gemini Advanced, and Claude 3.7) after each receives the same paper-based patient history and periapical radiograph provided to clinicians. LLM responses will be compared with the gold-standard specialist consensus for both diagnosis and treatment decisions.
Time frame: August to September