ESTRO 2021 Abstract Book


Results
Of 624,125 women included, 611,941 had NSL and 12,184 had NSM (Figure 1). The pMarg+ rate was significantly higher for NSM at 4.5% (n=544) than for NSL at 3.7% (n=22,449) (p<0.001), and remained significant on MVA (OR 1.13, CI 1.03-1.25, p=0.012). However, utilization of PORT for pMarg+ was significantly lower on MVA after NSM (OR 0.07, CI 0.06-0.09, p<0.001). Similarly, the pN+ rate was significantly higher for NSM at 22.5% (n=2,740) vs. NSL at 13.5% (n=82,288, p<0.001), retaining significance on MVA (OR 1.12, CI 1.06-1.19, p<0.001). PORT with RNI was also utilized less often with pN+ NSM on MVA (OR 0.73, CI 0.67-0.81, p<0.001). Neither high-risk subgroup had differences in OS on MVA when stratified by margin/nodal status and surgical subtype (median f/u 62 months; OR 1.01, CI 0.80-1.28, p=0.93 and median f/u 61 months; OR 0.62, CI 0.30-1.31, p=0.21, respectively).

Conclusion
NSM is associated with higher pMarg+ and pN+ rates than NSL, even when correcting for clinical-pathologic confounders. When comparing common pathologic indications for PORT, this analysis suggests PORT is underutilized after NSM. While this dataset is limited by its inability to analyze LR outcomes, our results highlight the need to further refine patient selection for NSM, and the importance of communicating to patients undergoing NSM the higher potential for adverse pathologic features requiring PORT, so that local-regional outcomes are not compromised.

PD-0730 Convolutional Neural Network to predict Deep Inspiration Breath Hold eligibility using Chest X-Ray
K.S. Chufal 1, I. Ahmad 1, M.I. Sharief 1, A. Dwivedi 2, R. Bajpai 3, A.A. Miller 4, R.L. Chowdhary 1, K. Bhatia 1, M. Gairola 1
1 Rajiv Gandhi Cancer Institute & Research Centre, Radiation Oncology, New Delhi, India; 2 Almini Services Limited, Software Architecture, Reading, United Kingdom; 3 Keele University, School of Medicine, Staffordshire, United Kingdom; 4 Illawarra Cancer Care Centre, Radiation Oncology, Wollongong, Australia

Purpose or Objective
To develop a deep learning Convolutional Neural Network (CNN) to predict which patients with left-sided breast cancer would be suitable for Deep Inspiration Breath Hold (DIBH) IMRT (using the Field-in-Field technique), based on a pre-treatment Chest X-Ray (CXR) alone.

Materials and Methods
All left-sided breast cancer patients [after surgery and adjuvant chemotherapy] who were considered for DIBH IMRT (FiF) and underwent DIBH assessment were included. Our assessment protocol was:
• DIBH assessment (3 days) on Varian RPM, followed by CT simulation
• Acceptance criteria: minimum amplitude of 1 cm, with variation in amplitude during DIBH up to ±15% of peak amplitude, and duration of DIBH ≥ 20 s

Patients who completed the assessment were labelled as suitable for DIBH or not. We hypothesised that a deep learning CNN model trained on a standard pre-RT CXR (acquired during inspiration breath-hold) would be able to predict which patients would be suitable for DIBH (vs non-DIBH).

Deep Learning methodology
272 sequential patients were eligible and were divided into training [Total (n)=242; DIBH (n)=174; Non-DIBH (n)=68] and testing [Total (n)=30; DIBH (n)=15; Non-DIBH (n)=15] cohorts. DICOM CXR images for all patients were labelled with the outcome of DIBH assessment.

Image pre-processing & Augmentation
In the training cohort, manual pre-processing and augmentation were performed (adjusting contrast and exposure), resulting in a total of 3,124 images [DIBH (n)=1,562; Non-DIBH (n)=1,562]. These images were further augmented using ImageDataGenerator (adjusting rotation, pixel shift, brightness and shear).
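The augmentation step above can be sketched with Keras' ImageDataGenerator (the tool the abstract names). The specific parameter values below are illustrative assumptions, not the authors' settings, and the toy array stands in for the pre-processed CXR images:

```python
# Hedged sketch of the augmentation step: rotation, pixel shift,
# brightness and shear applied via Keras' ImageDataGenerator.
# All parameter values here are assumptions for illustration only.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=10,             # small random rotations (degrees)
    width_shift_range=0.05,        # horizontal pixel shift (fraction of width)
    height_shift_range=0.05,       # vertical pixel shift (fraction of height)
    brightness_range=(0.8, 1.2),   # random brightness scaling
    shear_range=5.0,               # shear intensity (degrees)
)

# Toy batch standing in for pre-processed grayscale CXR images
# (224x224 is an assumed input size, not stated in the abstract).
images = np.random.rand(8, 224, 224, 1).astype("float32")
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = DIBH-suitable, 0 = non-DIBH

batch_x, batch_y = next(datagen.flow(images, labels, batch_size=8))
print(batch_x.shape)
```

Each call to the generator yields a freshly perturbed batch, which is how a set of labelled CXRs can be expanded toward the 3,124 balanced images reported.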
Model Building
The base 2D model architecture of the Visual Geometry Group network (VGG-19) was customised as follows:
• The dense layers were replaced with a Global Max Pooling layer and an output layer with 2 nodes for prediction
• All convolutional layers were trainable except the first two blocks

Results
CNN Architecture
The combination of two optimisers [Adam & Stochastic Gradient Descent (momentum = 0.99)] with different learning rates (range: 5e-3 to 1e-6) was explored to maximise model accuracy and minimise loss. The final network comprised 16 2D convolutional layers in four blocks, each followed by a 2D Max Pooling layer, utilising SGD with a learning rate of 5e-3.
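The customisation described above can be sketched in Keras: dense head replaced by Global Max Pooling plus a 2-node output, first two convolutional blocks frozen, and the reported final optimiser (SGD, momentum 0.99, learning rate 5e-3). The input shape, random weights, softmax activation and loss function are assumptions for illustration:

```python
# Hedged sketch of the customised VGG-19 described in the abstract.
# Input shape (224x224x3), weights=None, softmax output and the loss
# are assumptions; the pooling head, frozen blocks and SGD settings
# follow the text.
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models, optimizers

base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))

# All convolutional layers trainable except the first two blocks
# (Keras names them block1_* and block2_*).
for layer in base.layers:
    layer.trainable = not layer.name.startswith(("block1", "block2"))

# Dense layers replaced with Global Max Pooling and a 2-node output.
x = layers.GlobalMaxPooling2D()(base.output)
outputs = layers.Dense(2, activation="softmax")(x)  # DIBH vs non-DIBH
model = models.Model(inputs=base.input, outputs=outputs)

# Final reported configuration: SGD, momentum = 0.99, learning rate 5e-3.
model.compile(
    optimizer=optimizers.SGD(learning_rate=5e-3, momentum=0.99),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

Freezing the early blocks keeps the low-level edge and texture filters fixed while the deeper, more task-specific layers adapt to the CXR classification task.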
