The goal of this clinical trial is to learn how artificial intelligence (AI) may help doctors make diagnoses in kidney medicine. The researchers want to know whether an AI tool called a large language model (LLM) can help doctors choose the correct diagnosis more often and feel more confident in their answers. Before starting the study, the research team tested several AI models and chose one of the best performers: GPT-5, configured to use high reasoning effort.

The main questions this study aims to answer are:

1. Do doctors make more correct diagnoses when they can see AI suggestions?
2. Does seeing AI suggestions change how confident doctors feel about their diagnosis?

Researchers will compare doctors who receive AI suggestions with doctors who do not, to see how the AI affects accuracy, confidence, and decision-making.

Participants will complete up to 10 online clinical cases. For each case, they will:

1. Read a short medical scenario
2. Suggest up to three possible diagnoses
3. (If in the AI group) Review the AI's suggestions and decide whether to change their answer

The study will also look at how long participants take to answer each case and how the AI's performance compares to the human answers.
This study evaluates whether providing clinicians with real-time diagnostic suggestions from a high-reasoning large language model (GPT-5) improves diagnostic accuracy, confidence, and efficiency when solving nephrology clinical vignettes.

Before selecting the model for the trial, the research team benchmarked several state-of-the-art models on a pilot set of nephrology cases: GPT-5, GPT-5-mini, O3, GPT-4o, Llama-4 Maverick-17B, Gemini-2.5-Pro, Qwen-3 VL-235B Thinking, DeepSeek-V3.2-Exp, MedGEMMA-27B, Claude Sonnet-4.5, and Magistral-Medium-2509. GPT-5 (high-reasoning) demonstrated the highest diagnostic performance, stability, and interpretability, and was selected as the AI system used in the intervention arm.

Participants include medical students, residents, fellows, and practicing physicians. After creating an account, participants complete a demographic questionnaire (specialty, years of experience, practice type, age category, AI familiarity) and must explicitly agree to the use of these data for research purposes before accessing the vignettes. No directly identifying information is collected.

Participants are randomized (with stratification by professional status) to either the AI-supported arm or the control arm. Each participant is assigned 10 nephrology vignettes in French or English and may complete them over multiple sessions. Once a vignette is submitted, it cannot be revisited ("no backtracking"). Completion time per vignette is automatically recorded.

Control Arm
Participants view each vignette and provide up to three diagnoses ("top-3"), followed by a confidence rating (0-10).

AI-Supported Arm
Participants first enter an initial top-3 diagnosis and confidence rating without AI assistance. The system then displays GPT-5's diagnostic suggestions, after which participants may revise their diagnoses once. The vignette is locked after submission.

The study collects:

* initial and final diagnoses,
* confidence ratings before and (if applicable) after AI suggestions,
* completion times,
* participant demographic variables, and
* the AI model's own diagnostic outputs.

Partial completion is permitted; all completed vignettes contribute to the analysis. Primary and secondary outcomes include diagnostic accuracy (top-3 and top-1), accuracy improvement before vs. after AI, changes in diagnostic confidence, AI-induced diagnostic errors, human-versus-AI benchmarking, completion-time efficiency metrics, and the proportion of assigned vignettes completed.

The primary analysis will compare diagnostic accuracy between the control arm (physicians alone) and the experimental arm (physicians assisted by the AI model). Accuracy is analyzed as a binary outcome (correct vs. incorrect diagnosis). Because each participant evaluates multiple clinical vignettes, accuracy will be modeled using mixed-effects logistic regression with a fixed effect for study arm and random intercepts for both participant and vignette; this accounts for clustering within participants and varying difficulty across cases. The primary hypothesis test uses a two-sided α = 0.05, and effect sizes will be reported as odds ratios with 95% confidence intervals. Secondary analyses will explore whether accuracy varies by demographic factors (e.g., experience level, specialty) using interaction terms.
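As an illustration only, the sketch below fits a mixed-effects logistic regression with crossed random intercepts for participant and vignette, matching the structure of the planned primary model. The data file and column names (`correct`, `arm`, `participant_id`, `vignette_id`) are hypothetical; statsmodels offers a variational-Bayes approximation to this model, whereas a frequentist fit (e.g., lme4::glmer in R) would be the more conventional choice for the registered analysis.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per (participant, vignette) answer,
# with columns correct (0/1), arm (0 = control, 1 = AI),
# participant_id, and vignette_id.
df = pd.read_csv("responses.csv")

# Fixed effect for study arm; crossed random intercepts for participant and vignette.
vc = {"participant": "0 + C(participant_id)", "vignette": "0 + C(vignette_id)"}
model = BinomialBayesMixedGLM.from_formula("correct ~ arm", vc, df)
result = model.fit_vb()   # variational-Bayes approximation to the GLMM
print(result.summary())   # the 'arm' coefficient is a log-odds ratio;
                          # exponentiating it gives the odds ratio (AI vs control)
```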
Because each participant evaluates multiple vignettes, the team also performed simulation-based power analyses using mixed-effects logistic regression models with random intercepts for both participant and vignette, assuming an intra-participant ICC of 0.10. Under these assumptions, a total sample of 100 participants (50 per arm), each completing 10 vignettes, provides >99% power to detect a clinically meaningful improvement in diagnostic accuracy. The investigators therefore plan to enroll approximately 100 participants overall.

This study aims to quantify whether AI-augmented reasoning meaningfully improves diagnostic performance and decision-making when clinicians evaluate complex nephrology cases.
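As an illustration of the simulation-based power analysis described above, the minimal sketch below generates clustered binary outcomes and estimates power by counting rejections. The baseline and AI-arm accuracies are illustrative placeholders (the record does not state the assumed effect size), the ICC of 0.10 is converted to a random-intercept variance via the latent-threshold relation ICC = σ² / (σ² + π²/3), and a participant-level t-test stands in for the full mixed-effects model to keep the example self-contained.

```python
import numpy as np
from scipy.special import expit, logit
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

N_PER_ARM, N_VIGNETTES, N_SIM, ALPHA = 50, 10, 2000, 0.05
P_CONTROL, P_AI = 0.50, 0.70   # illustrative accuracies, not the protocol's values
ICC = 0.10

# Latent-threshold conversion: ICC = s2 / (s2 + pi^2/3)  =>  s2 = ICC/(1-ICC) * pi^2/3
sigma_u = np.sqrt(ICC / (1 - ICC) * np.pi**2 / 3)

rejections = 0
for _ in range(N_SIM):
    arm_means = []
    for p_arm in (P_CONTROL, P_AI):
        u = rng.normal(0.0, sigma_u, N_PER_ARM)       # participant random intercepts
        p = expit(logit(p_arm) + u)[:, None]          # median-participant accuracy shifted by u
        y = rng.random((N_PER_ARM, N_VIGNETTES)) < p  # per-vignette correct/incorrect
        arm_means.append(y.mean(axis=1))              # per-participant proportion correct
    _, pval = ttest_ind(arm_means[0], arm_means[1])
    rejections += pval < ALPHA

print(f"Estimated power: {rejections / N_SIM:.3f}")
```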
Study Type
INTERVENTIONAL
Allocation
RANDOMIZED
Purpose
DIAGNOSTIC
Masking
NONE
Enrollment
100
This intervention consists of displaying an AI-generated diagnostic suggestion during the clinical case-solving task. After reading each vignette, participants see the top diagnostic proposal produced by a large language model (GPT-5, high-reasoning configuration), selected after internal benchmarking. The AI suggestion appears once per vignette and cannot be requested again or modified. Participants may revise their diagnostic answer after viewing the suggestion, but they cannot return to the vignette later. No additional guidance, coaching, or interactive features are provided.
Lille University Hospital (online study)
Lille, France
RECRUITING

Final diagnostic accuracy (top-3) with vs without AI support
For each participant, the proportion of vignettes in which the correct main diagnosis appears in the participant's final top-3 diagnoses. Final top-3 accuracy is compared between the AI arm (after AI suggestions) and the control arm (no AI). Reported as the percentage of correctly diagnosed cases (top-3).
Time frame: From first vignette answered until the end of the study (up to 12 months).
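For concreteness, top-3 (and, analogously, top-1) accuracy could be scored roughly as sketched below. The exact-string matching and the example diagnoses are illustrative assumptions; the record does not specify how free-text answers are matched to the reference diagnosis.

```python
def top_k_accuracy(answers, truths, k=3):
    """Fraction of vignettes whose reference diagnosis appears in the
    participant's top-k list. `answers` is a list of ranked diagnosis
    lists; `truths` is the matching list of reference diagnoses.
    Exact matching after lowercasing is a simplification -- real scoring
    would need synonym handling or expert adjudication."""
    hits = sum(
        truth.strip().lower() in [a.strip().lower() for a in ans[:k]]
        for ans, truth in zip(answers, truths)
    )
    return hits / len(truths)

# Example: the reference diagnosis appears in 2 of 3 top-3 lists -> 66.7%
answers = [["IgA nephropathy", "lupus nephritis", "post-infectious GN"],
           ["minimal change disease"],
           ["ANCA vasculitis", "anti-GBM disease"]]
truths = ["lupus nephritis", "membranous nephropathy", "anti-GBM disease"]
print(f"Top-3 accuracy: {top_k_accuracy(answers, truths):.1%}")  # 66.7%
```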
Final diagnostic accuracy (top-1) with vs without AI support
For each participant, the proportion of vignettes in which the correct main diagnosis is the participant's final top-1 diagnosis. Final top-1 accuracy is compared between the AI arm (after AI suggestions) and the control arm (no AI). Reported as the percentage of correctly diagnosed cases (top-1).
Time frame: From first vignette answered until the end of the study (up to 12 months).
Change in top-3 diagnostic accuracy before vs after AI suggestions (AI arm only)
In the AI-supported arm, participants first provide an initial answer (up to three diagnoses) without AI suggestions, then see AI-generated suggestions and may revise their answer once; they cannot return to that vignette later. For each participant, the investigators compute the difference in top-3 accuracy between initial and final answers across all completed vignettes. Reported as the percentage-point change in top-3 diagnostic accuracy.
Time frame: From first vignette answered until the end of the study (up to 12 months).
Change in top-1 diagnostic accuracy before vs after AI suggestions (AI arm only)
In the AI-supported arm, participants first provide an initial answer (up to three diagnoses) without AI suggestions, then see AI-generated suggestions and may revise their answer once; they cannot return to that vignette later. For each participant, the investigators compute the difference in top-1 accuracy between initial and final answers across all completed vignettes. Reported as the percentage-point change in top-1 diagnostic accuracy.
Time frame: From first vignette answered until the end of the study (up to 12 months).
Diagnostic confidence (0-10) before AI suggestions: Control vs AI arm
Participants in both arms rate their confidence (0-10 scale) in their Top-3 diagnostic proposal before any AI suggestions. In the AI arm, this is the "pre-AI" rating. In the Control arm, this is the single confidence rating (since no AI is shown). The investigators compare the pre-AI confidence between arms, aggregated across all completed vignettes per participant.
Time frame: From first vignette answered until the end of the study (up to 12 months).
Final diagnostic confidence (0-10) after AI suggestions: Control vs AI arm
Final diagnostic confidence (0-10 scale) in the top-3 diagnostic proposal across all completed vignettes, compared between arms. In the AI arm, this is the post-AI confidence rating; in the control arm, it is the single confidence rating (participants do not receive AI suggestions).
Time frame: From first vignette answered until the end of the study (up to 12 months).
Change in diagnostic confidence (0-10) before vs after AI suggestions (AI arm only)
In the AI arm, participants provide confidence ratings (0-10 scale) for their top-3 diagnoses both before and after seeing AI suggestions. For each participant, the investigators compute the within-participant change (post-AI minus pre-AI) across all completed vignettes. Reported as the change in confidence score (0-10 scale).
Time frame: From first vignette answered until the end of the study (up to 12 months).
AI-induced diagnostic error (AI arm only)
Among completed vignettes where the participant's initial top-1 diagnosis is correct, the proportion for which the final top-1 diagnosis becomes incorrect after AI suggestions.
Time frame: From first vignette answered until the end of the study (up to 12 months).
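A minimal sketch of how this conditional proportion could be computed, assuming hypothetical per-vignette records of initial and final top-1 correctness:

```python
def ai_induced_error_rate(records):
    """records: iterable of (initial_top1_correct, final_top1_correct)
    booleans, one per completed vignette in the AI arm. Returns the
    proportion of initially correct answers that became incorrect
    after the AI suggestion."""
    finals_given_correct_start = [final for initial, final in records if initial]
    if not finals_given_correct_start:
        return 0.0  # no initially correct answers to flip
    flipped = sum(1 for final in finals_given_correct_start if not final)
    return flipped / len(finals_given_correct_start)

# Example: 4 initially correct answers, 1 flipped to incorrect -> 25%
records = [(True, True), (True, False), (True, True), (True, True),
           (False, True), (False, False)]
print(f"AI-induced error rate: {ai_induced_error_rate(records):.0%}")  # 25%
```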
Change in Top-3 diagnosis after AI suggestions (AI arm only)
Among completed vignettes in the AI arm, the proportion where the Top-3 diagnosis differs between pre-AI and post-AI answers.
Time frame: From first vignette answered until the end of the study (up to 12 months).
Top-3 diagnostic accuracy: All human answers before AI vs AI accuracy
For each vignette, the top-3 diagnostic accuracy of human participants before any AI suggestions (combining participants from both study arms at their pre-AI stage) is compared with the top-3 diagnostic accuracy of the AI model for the same vignette. The reported outcome is the accuracy difference, defined as AI top-3 accuracy minus human pre-AI top-3 accuracy, computed at the vignette level across all completed vignettes. Reported as the percentage-point difference in top-3 diagnostic accuracy.
Time frame: From first vignette answered until the end of the study (up to 12 months).
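A minimal sketch of this vignette-level comparison, with hypothetical column names and data; the AI contributes a single answer per vignette, while the human pre-AI rate is averaged over participants:

```python
import pandas as pd

# Hypothetical per-answer table: one row per (participant, vignette) pre-AI answer,
# plus a per-vignette flag for whether the AI's top-3 contained the reference diagnosis.
df = pd.DataFrame({
    "vignette_id":   [1, 1, 2, 2, 3, 3],
    "human_top3_ok": [1, 0, 1, 1, 0, 0],   # pre-AI human answers (both arms pooled)
    "ai_top3_ok":    [1, 1, 1, 1, 0, 0],   # AI result, constant within a vignette
})

per_vignette = df.groupby("vignette_id").agg(
    human=("human_top3_ok", "mean"),        # average over participants
    ai=("ai_top3_ok", "first"),             # single AI answer per vignette
)
per_vignette["diff_pp"] = 100 * (per_vignette["ai"] - per_vignette["human"])
print(per_vignette)
print("Mean difference (percentage points):", per_vignette["diff_pp"].mean())
```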
Top-3 diagnostic accuracy: Human final answers after AI vs AI accuracy (AI arm only)
For each vignette completed in the AI-supported arm, the top-3 diagnostic accuracy of human participants after viewing AI suggestions is compared with the top-3 diagnostic accuracy of the AI model (top-3 accuracy is a single measure per vignette). The reported outcome is the accuracy difference, defined as AI top-3 accuracy minus human post-AI top-3 accuracy, computed at the vignette level across all completed vignettes in the AI arm. Reported as the percentage-point difference in top-3 diagnostic accuracy between the AI and human answers.
Time frame: From first vignette answered until the end of the study (up to 12 months).
Completion time per vignette with and without AI support
For each vignette, the platform records the time from vignette opening to answer submission. In the control arm, a single completion time is recorded per vignette; in the AI-supported arm, completion time is recorded both before and after viewing AI suggestions. The reported outcome is the between-arm difference in completion time, expressed in seconds and calculated across all completed vignettes.
Time frame: From first vignette answered until the end of the study (up to 12 months).
Proportion of assigned vignettes completed
For each participant, the proportion of the 10 vignettes completed within the study period, compared between arms.
Time frame: From first vignette answered until the end of the study (up to 12 months).