LOCATOR is a multicentre phase II randomised clinical trial examining the contouring process in radiation treatment for breast cancer patients. The study assesses whether contouring aided by artificial intelligence (AI) is comparable in quality to contouring performed entirely manually by a radiation oncologist, and whether AI-assisted contouring saves radiation oncologists time compared with fully manual contouring. The trial uses the LOCATOR software, an in-house tool developed locally and trained on local data.
LOCATOR is a multicentre phase II non-inferiority randomised controlled trial comparing AI-assisted contours (generated with the in-house LOCATOR software) against fully manual contouring in breast cancer patients. The primary endpoint is to demonstrate non-inferiority in the quality of AI-assisted contouring compared with fully manual contouring, with a poor contour defined as a score ≤ 2 on the MD Anderson Contouring Grade Scale. Secondary endpoints include geometric assessments of contour accuracy, dosimetric differences based on contours, geometric performance compared with commercially available tools, and an economic cost-benefit analysis of in-house AI contouring tools. The study will randomise patients 3:1 between LOCATOR-assisted contouring (intervention arm) and manual contouring (control arm). An initial AI contouring model for each tumour type will be trained on contours from 45 previous breast cases using the nnUNetv2 framework. The model will then be iteratively updated every 20-50 patients.
Study Type
INTERVENTIONAL
Allocation
RANDOMIZED
Purpose
TREATMENT
Masking
DOUBLE
Enrollment
444
Initial contours are generated automatically using software powered by artificial intelligence
Western Cancer Centre Dubbo
Dubbo, New South Wales, Australia
RECRUITING
Central West Cancer Centre
Orange, New South Wales, Australia
RECRUITING
Department of Radiation Oncology, Royal North Shore Hospital
St Leonards, New South Wales, Australia
RECRUITING
Assessment of differences in Contour Quality
To assess the contour quality of fully manual segmentation versus AI-assisted segmentation. This assessment will use the MD Anderson Cancer Centre five-point Likert scale (Strongly Disagree to Strongly Agree) used to validate autosegmentation models. The measure will be the difference in the proportion of unacceptable contours (MD Anderson autocontouring score ≤ 2) between manual contouring and AI-assisted contouring.
Time frame: 18 months
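The primary comparison is a non-inferiority test on the difference in proportions of unacceptable contours. As a minimal sketch only, the following Python code checks non-inferiority using a Wald-style confidence interval; the margin, z-value and example figures are hypothetical and are not taken from the protocol's statistical analysis plan.

```python
import math

def noninferiority_check(p_ai, n_ai, p_man, n_man, margin, z=1.96):
    """Wald-style one-sided check on the difference in proportions of
    unacceptable contours (AI-assisted minus manual). Non-inferiority
    holds if the upper confidence bound on the excess proportion of
    poor contours stays below the margin. Margin and z are hypothetical,
    not protocol values."""
    diff = p_ai - p_man
    se = math.sqrt(p_ai * (1 - p_ai) / n_ai + p_man * (1 - p_man) / n_man)
    upper = diff + z * se
    return upper, upper < margin
```

For example, with the trial's 3:1 allocation of 444 patients (333 vs 111) and hypothetical unacceptable-contour rates of 10% vs 12%, the upper bound falls just under a 5% margin.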
Assessment of quality of AI assisted contours with and without manual edits
To assess the contour quality of AI assisted contours with and without manual edits. This assessment will use the MD Anderson Cancer Centre five-point Likert scale (Strongly Disagree to Strongly Agree) used to validate autosegmentation models. The measure will be the difference in the proportion of unacceptable contours (MD Anderson autocontouring score ≤ 2) between AI-assisted contours with and without manual edits.
Time frame: 18 months
Time Savings
To evaluate the difference in time taken to contour with and without the assistance of an auto-segmentation tool.
Time frame: 18 months
To assess the differences in acute clinician reported toxicity between patients treated with contours assisted by AI contouring versus manual contouring.
Acute clinician reported toxicity will be measured using CTCAE version 5.0 across individual items (see full protocol appendix). For this study, the outcome will be the difference in the proportion of patients with grade ≥ 3 toxicity at any point in time from the start of radiotherapy to 90 days following radiotherapy.
Time frame: 18 months
To assess the differences in late clinician reported toxicity between patients treated with contours assisted by AI contouring versus manual contouring.
Late clinician reported toxicity will be measured using CTCAE version 5.0 across individual items (see full protocol appendix). For this study, the outcome will be the difference in the proportion of patients with grade ≥ 3 toxicity at any point in time between 90 days following radiotherapy and 5 years following radiotherapy.
Time frame: 5 years
To assess the differences in patient reported general acute quality of life outcomes between patients treated with contours assisted by AI contouring versus manual contouring.
General acute patient quality of life outcomes will be measured using the EORTC QLQ-C30 instrument. For this study, the outcome will be the difference in total scores and by domain at any point in time from the start of radiotherapy to 90 days following radiotherapy.
Time frame: 18 months
To assess the differences in patient reported general late quality of life outcomes between patients treated with contours assisted by AI contouring versus manual contouring.
General late patient quality of life outcomes will be measured using the EORTC QLQ-C30 instrument. For this study, the outcome will be the difference in total scores and by domain at any point in time between 90 days following radiotherapy and 5 years following radiotherapy.
Time frame: 5 years
To assess the differences in patient reported breast specific acute quality of life outcomes between patients treated with contours assisted by AI contouring versus manual contouring.
Breast specific acute patient quality of life outcomes will be measured using the EORTC QLQ-BR45 instrument. For this study, the outcome will be the difference in total scores and by domain at any point in time from the start of radiotherapy to 90 days following radiotherapy.
Time frame: 18 months
To assess the differences in patient reported breast specific late quality of life outcomes between patients treated with contours assisted by AI contouring versus manual contouring.
Breast specific late patient quality of life outcomes will be measured using the EORTC QLQ-BR45 instrument. For this study, the outcome will be the difference in total scores and by domain at any point in time between 90 days following radiotherapy and 5 years following radiotherapy.
Time frame: 5 years
Assessment of accuracy of AI assisted contours before and after manual edits using surface dice similarity coefficient (sDSC).
To assess the geometric accuracy of AI segmentation before and after manual correction. This will be done by comparing the change in surface dice similarity coefficient (sDSC).
Time frame: 18 months
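A surface DSC can be sketched on a pixel grid as the fraction of boundary points of each contour lying within a tolerance of the other contour's boundary. The NumPy sketch below works on 2D binary masks with a tolerance in pixels; clinical implementations typically operate on 3D surfaces with millimetre tolerances, and the protocol's exact implementation is not stated here.

```python
import numpy as np

def surface_points(mask):
    """Boundary pixels of a binary 2D mask: foreground pixels with at
    least one 4-connected background (or out-of-image) neighbour."""
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior.astype(bool))

def surface_dice(mask_a, mask_b, tol):
    """Surface DSC: fraction of boundary points of each mask lying
    within `tol` (in pixels) of the other mask's boundary.
    Brute-force pairwise distances; fine for small masks."""
    sa, sb = surface_points(mask_a), surface_points(mask_b)
    if len(sa) == 0 or len(sb) == 0:
        return 0.0
    d = np.sqrt(((sa[:, None, :] - sb[None, :, :]) ** 2).sum(-1))
    close_a = (d.min(axis=1) <= tol).sum()
    close_b = (d.min(axis=0) <= tol).sum()
    return (close_a + close_b) / (len(sa) + len(sb))
```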
Assessment of accuracy of AI assisted contours before and after manual edits using dice similarity coefficient (DSC).
To assess the geometric accuracy of AI segmentation before and after manual correction. This will be done by comparing the change in dice similarity coefficient (DSC).
Time frame: 18 months
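The volumetric DSC has a one-line definition, 2|A∩B| / (|A| + |B|). A minimal NumPy sketch on binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) on binary
    volumes. 1.0 means identical segmentations, 0.0 means no overlap.
    Two empty masks are treated as identical."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0
```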
Assessment of accuracy of AI assisted contours before and after manual edits using added path length (APL)
To assess the geometric accuracy of AI segmentation before and after manual correction. This will be done by comparing the change in added path length (APL).
Time frame: 18 months
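Added path length is commonly computed on the pixel grid as the length of edited-contour boundary that does not coincide with the automatic contour, summed over slices. The NumPy sketch below follows that common definition; the protocol's exact APL implementation is not stated here.

```python
import numpy as np

def boundary(mask):
    """4-connected boundary pixels of a binary 2D mask."""
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior.astype(bool)

def added_path_length(auto_slices, edited_slices, pixel_mm=1.0):
    """Added path length: count of boundary pixels present in the
    edited contour but not in the automatic contour, summed over
    slices and scaled by the in-plane pixel size."""
    apl_px = 0
    for auto, edited in zip(auto_slices, edited_slices):
        apl_px += int((boundary(edited) & ~boundary(auto)).sum())
    return apl_px * pixel_mm
```

An unedited AI contour yields an APL of zero; larger values mean more of the final boundary had to be drawn by hand.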
Assessment of accuracy of AI assisted contours before and after manual edits using mean slice-wise Hausdorff distance (MSHD).
To assess the geometric accuracy of AI segmentation before and after manual correction. This will be done by comparing the change in mean slice-wise Hausdorff distance (MSHD).
Time frame: 18 months
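The mean slice-wise Hausdorff distance reduces each axial slice to a 2D Hausdorff distance between the two contours and averages across slices where both structures are present. A brute-force NumPy sketch on per-slice point sets (real implementations use spatial indexing and physical, not index, coordinates):

```python
import numpy as np

def hausdorff_2d(pts_a, pts_b):
    """Symmetric Hausdorff distance between two 2D point sets:
    the largest nearest-neighbour distance in either direction."""
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def mean_slicewise_hausdorff(contours_a, contours_b):
    """Mean slice-wise Hausdorff distance over paired per-slice
    contours; slices where either contour is absent are skipped."""
    dists = [hausdorff_2d(np.asarray(a, float), np.asarray(b, float))
             for a, b in zip(contours_a, contours_b)
             if len(a) and len(b)]
    return float(np.mean(dists)) if dists else float("nan")
```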
Assessment of dosimetric differences between patients planned with AI contours and those planned with manual contours.
To compare dose volume histogram metrics per contoured structure between patients planned with AI contours and those planned with manual contours.
Time frame: 18 months
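Dose-volume histogram metrics reduce the dose distribution inside a contoured structure to scalar summaries such as mean dose, D95 and Vx. A toy NumPy sketch (the metric names and threshold here are illustrative, not the trial's planning constraints):

```python
import numpy as np

def dvh_metrics(dose, mask, v_threshold_gy):
    """Toy DVH metrics for one structure: mean dose, D95 (dose
    received by at least 95% of the volume) and Vx (% of volume
    receiving at least `v_threshold_gy`). `dose` is an array in Gy,
    `mask` a boolean array of the same shape."""
    d = dose[mask]
    return {
        "Dmean": float(d.mean()),
        "D95": float(np.percentile(d, 5)),   # 95% of voxels get >= this
        f"V{v_threshold_gy}": float((d >= v_threshold_gy).mean() * 100),
    }
```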
Assessment of dosimetric differences in plans optimised on AI assisted contours before and after manual edits.
We will assess dosimetric differences to the clinical target volume (CTV), planning target volume (PTV) and organs at risk (OARs) between AI-assisted contours before and after manual edits. The measure will be the proportion of patients whose plans pass all planning constraints as per the FAST-Forward protocol.
Time frame: 18 months
Assessment of accuracy in contours with an initial and retrained AI model using surface dice similarity coefficient (sDSC).
To assess improvements, if any, in the geometric accuracy of contours generated by the initial AI model versus models re-trained on clinical trial data every 20-50 patients. Comparisons will be made using the change in surface dice similarity coefficient (sDSC) between the initially generated AI contour and the final edited contour.
Time frame: 18 months
Assessment of accuracy in contours with an initial and retrained AI model using dice similarity coefficient (DSC).
To assess improvements, if any, in the geometric accuracy of contours generated by the initial AI model versus models re-trained on clinical trial data every 20-50 patients. Comparisons will be made using the change in dice similarity coefficient (DSC) between the initially generated AI contour and the final edited contour.
Time frame: 18 months
Assessment of accuracy in contours between different AI systems using surface dice similarity coefficient (sDSC).
To compare the geometric accuracy of an in-house AI segmentation tool (LOCATOR) against commercially available tools. Comparisons will be made using the difference in surface dice similarity coefficient (sDSC) between the initially generated AI contours and the final manual contour.
Time frame: 18 months
Assessment of accuracy in contours between different AI systems using dice similarity coefficient (DSC).
To compare the geometric accuracy of an in-house AI segmentation tool (LOCATOR) against commercially available tools. Comparisons will be made using the difference in dice similarity coefficient (DSC) between the initially generated AI contours and the final manual contour.
Time frame: 18 months
Assessment of quality in contours between different AI systems
To compare the quality of contours from an in-house AI segmentation tool (LOCATOR) against commercially available tools. This assessment will use the MD Anderson Cancer Centre five-point Likert scale (Strongly Disagree to Strongly Agree) used to validate autosegmentation models. The measure will be the difference in the proportion of unacceptable contours (MD Anderson autocontouring score ≤ 2) between the in-house and commercial tools.
Time frame: 18 months
Assessment of patient perception and attitudes on AI use in their care
We will perform a brief assessment of patient perceptions of AI use in their care using a six-question survey administered after treatment, scored on a five-point Likert scale (Strongly Agree to Strongly Disagree).
Time frame: 18 months
Economic Cost Benefit Analysis
To perform an economic cost-benefit analysis of the in-house auto-segmentation tool (LOCATOR) compared with manual segmentation and commercial auto-segmentation systems. This will be done using direct dollar (US and Australian) cost comparisons. Direct costs for the LOCATOR system, including labour, hardware and maintenance, will be calculated over 1 and 3 years. The direct dollar cost of a commercial system will be compared against the overall direct cost of the LOCATOR system. The direct cost of retaining a fully manual workflow will be calculated from the direct cost of the extra hours of labour required.
Time frame: 18 months