This study aims to develop and validate a clinical prediction model for the risk of head and neck cancerous lesions, using deep learning-based artificial intelligence (AI) algorithms and multi-center clinical data.
Vocal health has emerged as a prominent public health challenge. Phonation relies on precise neuromuscular and respiratory coordination, a physiological process frequently compromised by systemic senescence, multimorbidity, and neuromuscular degeneration. This complex pathophysiological interplay makes it exceedingly difficult to clinically distinguish early-stage laryngeal malignancies from common benign voice disorders (e.g., vocal fold cysts, vocal process granulomas, and Reinke's edema). Because both entities typically present with non-specific hoarseness or globus sensation, the difficulty of early screening and accurate differential diagnosis is substantially amplified.

Currently, the diagnosis of voice disorders relies heavily on laryngoscopy. However, owing to the unequal distribution of medical resources, primary and community care settings generally lack effective screening tools for laryngeal malignancies during initial consultations, often leading to delayed referrals for high-risk patients. Furthermore, there is a profound disparity in endoscopic interpretation expertise across different healthcare tiers. The visual features of certain precancerous lesions (such as dysplastic leukoplakia) and early-stage malignancies overlap considerably, resulting in a high risk of missed diagnoses or unnecessary biopsies of benign lesions.

Therefore, systematically incorporating multidimensional indicators, including demographics (e.g., age), smoking and alcohol history, and clinical symptomatology, into risk assessment is crucial for the early detection of malignancies and the optimal allocation of healthcare resources. In recent years, deep learning-based artificial intelligence (AI) has demonstrated tremendous potential in medical image feature extraction, capable of capturing subtle morphological textures imperceptible to the human eye. However, the oncogenesis and progression of laryngeal malignancies are driven by a confluence of multidimensional factors.
When confronted with complex, real-world clinical scenarios, unimodal imaging models often suffer from decreased generalizability and elevated false-positive rates due to the absence of the patient's demographic, symptomatic, and behavioral exposure context. Real-world clinical decision-making is not an isolated image-interpretation task; rather, it requires the systematic integration of visual features with multidimensional clinical metadata. Developing an intelligent diagnostic framework capable of fusing multimodal data is therefore essential to overcome the application bottlenecks of current unimodal AI imaging tools.

Addressing these clinical pain points and technical limitations, this study leveraged a national multicenter cohort encompassing approximately 11,000 patients with voice disorders to develop and validate a two-stage, multimodal AI risk stratification and diagnostic framework. In the first stage, by integrating demographic characteristics, behavioral exposures, and clinical symptomatology, the investigators developed a non-invasive, low-cost Clinical Screening Model. This tool is designed to provide primary care settings and patients with an immediate, efficient early-warning system for malignancies.

In the second stage, building upon this initial risk stratification, the investigators employed deep learning algorithms to extract microscopic visual features from endoscopic images, culminating in a Multimodal Diagnostic Model. This model achieves precise multiclass classification among laryngeal malignancies, common benign vocal fold lesions, and normal laryngeal anatomy. Furthermore, the investigators deployed a cloud-based web application to facilitate real-time risk estimation.
Ultimately, by providing this clinical-grade AI diagnostic assistant, this study aims to optimize the hierarchical screening and diagnostic pathways for voice disorders, thereby empowering general practitioners and primary care otolaryngologists to enhance the quality of clinical decision-making and diagnostic accuracy.
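The two-stage workflow described above can be sketched in code. This is a minimal illustration only: the feature names, coefficients, thresholds, and classifier heads below are hypothetical placeholders, not the study's actual model parameters.

```python
import math

# Hypothetical linear heads for the three diagnostic classes
# (malignant, benign lesion, normal); values are made up for illustration.
HEADS = [
    [0.3, -0.2, 0.5, 0.1],
    [-0.1, 0.4, -0.3, 0.2],
    [0.0, 0.1, 0.2, -0.4],
]

def clinical_screening_risk(age, smoking_years, alcohol_use, hoarseness_weeks):
    """Stage 1: non-invasive risk score from demographics, exposures,
    and symptoms. Coefficients are illustrative, not the study's model."""
    z = (-6.0 + 0.05 * age + 0.04 * smoking_years
         + (0.8 if alcohol_use else 0.0) + 0.03 * hoarseness_weeks)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability

def multimodal_diagnosis(image_embedding, clinical_features):
    """Stage 2: fuse deep image features with clinical metadata and return
    class probabilities over (malignant, benign lesion, normal)."""
    fused = image_embedding + clinical_features  # simple feature concatenation
    logits = [sum(f * w for f, w in zip(fused, head)) for head in HEADS]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]
```

In use, a patient whose Stage 1 risk exceeds a chosen referral threshold would proceed to endoscopy, and Stage 2 would combine the image embedding with the same clinical features for multiclass classification.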
Study Type
OBSERVATIONAL
Enrollment
3,000
Nanjing Drum Tower Hospital
Nanjing, Jiangsu, China
Laryngoscopic report diagnosis
These data will be collected from medical history records. The laryngoscopic report diagnosis primarily consists of detailed diagnostic classifications for various vocal fold and laryngeal pathologies.
Time frame: During the first outpatient visit (Day 1)
Demographic data
Demographic data primarily includes demographic characteristics, behavioral habits, medical history, and lifestyle factors.
Time frame: During the first outpatient visit (Day 1)
VHI-10
These data will be collected via questionnaire. Voice-related quality of life is assessed using the Voice Handicap Index (VHI). Total scores range from 0 to 120; higher scores indicate greater voice-related handicap in daily life (worse outcome).
Time frame: During the first outpatient visit (Day 1)
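VHI scoring is a simple sum of Likert-rated items, which can be sketched as below. This is a generic illustration, not the study's scoring code: each item is rated 0-4, so the 30-item VHI totals 0-120 and the 10-item VHI-10 totals 0-40.

```python
def vhi_total(item_scores):
    """Sum the VHI item ratings (each 0-4) into a total score.
    Generic sketch: 30 items -> range 0-120; 10 items -> range 0-40."""
    if not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("each VHI item must be rated on a 0-4 scale")
    return sum(item_scores)
```

For example, a patient rating every VHI-10 item as 2 ("sometimes") would score 20 out of a possible 40.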