This study will evaluate whether a web-based artificial intelligence (AI) platform (Clinical Mind AI \[CMAI\], Stanford, CA) can assess communication skills in anesthesiology trainees, including residents and fellows, in the setting of disclosing medical errors. All participants will complete an AI-generated simulation remotely on the platform, and CMAI will assess trainee performance immediately after the simulation.
The goal of this study is to evaluate the accuracy of the CMAI platform in assessing participant performance following a voice-based AI simulation designed to help trainees practice disclosing medical errors. The platform will feature a custom clinical case, created with CMAI's patient creation tool, involving a discussion with the parent of a child who suffered a dental injury during intubation. The platform will deliver an audio-based simulated encounter with the parent, assess the trainee's communication performance, and then administer questionnaires to the trainee to measure usability and satisfaction. Human evaluators will also assess the trainees' performance using the same scales, and the investigators will compare the AI performance evaluation to the human evaluation. This study will allow the investigators to determine: 1. Reliability of the CMAI platform's performance assessments compared with human raters 2. Usability of the CMAI audio voice model for simulated patient encounters 3. Satisfaction with an innovative educational technique. By evaluating these domains, we aim to determine the educational value of simulated voice communication for training in emotionally complex clinical scenarios.
Study Type
INTERVENTIONAL
Allocation
NA
Purpose
SUPPORTIVE_CARE
Masking
NONE
Enrollment
45
Participants will remotely engage in an AI-generated simulation that presents a voice-based conversation with a simulated parent of a pediatric patient. The scenario involves discussing a medical error involving their child. Following the simulation, the participant's communication skills will be assessed by an artificial intelligence (AI) system trained to evaluate key aspects of communication using standardized rating scales. In addition, two human evaluators will assess the participant's communication skills using the same rating scales.
Reliability of AI Conversational Performance Assessment
The primary outcome is to evaluate the reliability of the Clinical Mind AI (CMAI) platform in accurately assessing the communication skills of medical education trainees during a simulated interaction. To measure this outcome, CMAI will use the Breaking Bad News Assessment Schedule (BBAS), a validated tool that assesses participants' communication skills in five domains: 1) Setting the scene, 2) Breaking the news, 3) Eliciting concerns, 4) Information giving, and 5) Empathy and support. It includes 17 items, each with sub-questions, rated on a 5-point Likert scale (1-5), where the meaning of each score may vary by question. Scoring will be guided by the Breaking Bad News Rubric, which outlines performance criteria for each item on the BBAS. In addition, two trained human evaluators will independently assess the participant's communication skills using the same BBAS tool and rubric. Finally, the CMAI and human scores will be compared to determine the reliability of the CMAI platform.
Time frame: Immediately after the simulation
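The protocol does not specify which reliability statistic will be used to compare CMAI and human BBAS scores. As an illustrative sketch only, agreement on paired item scores could be summarized with an intraclass correlation coefficient, ICC(2,1) (two-way random effects, absolute agreement, single rater); the function and example ratings below are hypothetical and not part of the study protocol.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) array, e.g. one column of AI scores
    and one column of human scores on the same BBAS items (hypothetical).
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-raters
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical paired scores: [AI, human] per BBAS item
scores = [[4, 4], [3, 4], [5, 5], [2, 2]]
print(round(icc2_1(scores), 3))
```

Identical AI and human columns yield an ICC of 1.0; disagreement on any item lowers it toward 0.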
Usability of CMAI Simulation Platform
The perceived usability of the CMAI platform will be evaluated using a usability questionnaire. The questionnaire consists of 14 items, each rated on a 5-point scale (1-5), where the meaning of each score may vary by question.
Time frame: Immediately after the simulation
Satisfaction of CMAI Simulation
Participants' satisfaction levels will be evaluated using a modified version of the Questionnaire on Satisfaction with Teaching Innovation (QSTI) survey. The survey consists of five items, each rated on a scale of 1-5, where 1 = Strongly Disagree and 5 = Strongly Agree. Higher scores indicate greater satisfaction.
Time frame: Immediately after the simulation