ESTRO 2021 Abstract Book



terms of architecture and hidden layers. The resulting contours were then compared against the ground truth in terms of the Dice similarity coefficient (DSC), mean surface distance (MSD), and other relevant similarity metrics (e.g., Hausdorff distance, HD95), considering two different splits (70:30 and 50:50) between training and testing sets.

Results
The mean DSCs of the different methods were as follows: 0.914 (DL, averaged over the three considered methods), 0.872 (Siemens software), and 0.887 (atlas) for the 70:30 split (Figure 1a). Similar results were obtained with the 50:50 split (Figure 1b). Likewise, the mean MSDs were 1.93, 2.41, and 2.64 for the U-Net, Siemens, and atlas methods, respectively. We also observed that the DL models were more reliable in terms of worst-case performance (minimum DSC of 0.854 and maximum MSD of 3.91). Overall, the method with the best median value for each index was Efficient3D (Table 1).
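As a reference for how the overlap metric reported above behaves, the following is a minimal sketch of the Dice similarity coefficient computed on binary masks; the toy grid and mask values are illustrative and are not taken from the study's data.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping square "contours" on a small grid.
a = np.zeros((10, 10), dtype=bool)
b = np.zeros((10, 10), dtype=bool)
a[2:7, 2:7] = True   # 25 pixels
b[3:8, 3:8] = True   # 25 pixels, 16 of which overlap with a
print(round(dice_coefficient(a, b), 3))  # 2*16 / 50 = 0.64
```

A DSC of 1.0 indicates identical contours, so the reported means around 0.87-0.91 correspond to substantial but imperfect overlap with the ground truth.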

Figure 1

Table 1

Conclusion
The present study demonstrates that automated segmentation techniques can provide excellent results and can therefore be considered mature enough to be implemented in the clinical workflow and in research. In particular, the DL-based Efficient3D outperformed the other methods. However, further studies are warranted to evaluate the consequences of automatic contouring for end-user results and their robustness.

PD-0931 Deep learning-based tumor segmentation of endoscopy images for rectal cancer patients
L. Weishaupt1, A. Thibodeau Antonacci1, A. Garant2, K. Singh3, C. Miller4, T. Vuong5, S.A. Enger1
1McGill University, Medical Physics Unit, Department of Oncology, Faculty of Medicine, Montréal, Canada; 2UT Southwestern Medical Center, Radiation Oncology, Dallas, USA; 3McGill University Health Centre, Division of Gastroenterology, Montréal, Canada; 4Jewish General Hospital, Division of Gastroenterology, Montréal, Canada; 5Jewish General Hospital, Department of Oncology, Montréal, Canada

Purpose or Objective
The objective of this study was to develop an automated rectal tumor segmentation algorithm for endoscopy images. The algorithm will be used in a future multimodal treatment outcome prediction model. Currently, treatment outcome prediction models rely on manual segmentations of regions of interest, which are prone to inter-observer variability. To quantify this human error and to demonstrate the feasibility of automated endoscopy image segmentation, we compared three deep learning architectures.

Materials and Methods
A gastrointestinal physician (G1) segmented 550 endoscopy images of rectal tumors into tumor and non-tumor regions. To quantify the inter-observer variability, a second gastrointestinal physician (G2) independently contoured 319 of the images. The 550 images and annotations from G1 were divided into training (408 images), validation (82 images), and testing (60 images) sets.
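The 408/82/60 partition described above can be sketched as a random index split; the seed and index-based bookkeeping below are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Shuffle the 550 image indices reproducibly, then cut at 408 and 408+82.
# The seed value is an arbitrary assumption for this sketch.
rng = np.random.default_rng(seed=0)
indices = rng.permutation(550)
train, val, test = indices[:408], indices[408:490], indices[490:]
print(len(train), len(val), len(test))  # 408 82 60
```

Splitting by shuffled index (rather than slicing the raw ordering) avoids grouping images from the same acquisition session into a single subset.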
Three deep learning architectures were trained: a fully convolutional neural network (FCN32), a U-Net, and a SegNet. These architectures have been used for robust medical image segmentation in previous studies. All models were trained on a CPU supercomputing cluster. Data augmentation in the form of random image transformations, including scaling, rotation, shearing, Gaussian blurring, and noise addition, was used to improve the models' robustness. The neural networks' output went through a final post-processing step of noise removal and hole filling before evaluation.
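The noise-removal and hole-filling post-processing can be sketched with standard binary morphology; the abstract does not specify the operations used, so the opening/fill combination and the 3x3 structuring element below are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask: np.ndarray) -> np.ndarray:
    """Remove small speckle noise via binary opening, then fill
    interior holes in the remaining segmented regions."""
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    return ndimage.binary_fill_holes(opened)

# Toy prediction: a tumor-like blob with an interior hole,
# plus a single-pixel false positive.
m = np.zeros((12, 12), dtype=bool)
m[2:9, 2:9] = True
m[5, 5] = False      # interior hole to be filled
m[10, 10] = True     # isolated noise pixel to be removed
cleaned = clean_mask(m)
print(cleaned[5, 5], cleaned[10, 10])  # True False
```

The opening removes components smaller than the structuring element, and the hole fill restores interior pixels, so the cleaned mask is a more contour-like region than the raw network output.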
