
Structure             Value (mean ± SD)
Left Femoral Head     51.30 ± 6.59
Left Iliac            59.74 ± 7.92
Medullary Canal       39.36 ± 13.39
Penile Bulb           14.54 ± 4.13
Prostate              19.99 ± 7.4
Rectum                106.14 ± 65.5
Right Femoral Head    53.77 ± 5.92
Right Iliac           62.94 ± 7.45
Seminal Vesicle       34.44 ± 8.48
Spinal Cord           34.49 ± 11.67
Entire Patient Body   33.10 ± 7.44

For qualitative assessment, an example of an input MRI, the corresponding patient CT, and the generated synthetic CT is shown below.

[Figure: input MRI, corresponding patient CT, and generated synthetic CT]

Conclusion
We introduced a first-of-its-kind AI-driven solution for synthetic CT generation that can learn on unpaired scans and is suitable for clinical use thanks to its intensity- and structure-preserving learning strategy, which generates high-quality, sharp synthetic CTs with accurate Hounsfield values. Moreover, our approach inherits robustness and good generalization properties through an ensembling principle applied to anatomically consistent sub-spaces.

PD-0755 Training modality conversion models with small data and its application to MVCT to kVCT conversion
S. Ozaki 1, S. Kaji 2, K. Nawa 1, T. Imae 1, A. Aoki 1, T. Nakamoto 3, T. Ohta 1, Y. Nozawa 1, A. Haga 4, K. Nakagawa 1
1 University of Tokyo Hospital, Radiology, Tokyo, Japan; 2 Kyushu University, Institute of Mathematics for Industry, Fukuoka, Japan; 3 Hokkaido University, Division of Biomedical Engineering and Science, Sapporo, Japan; 4 Tokushima University, Graduate School of Biomedical Science, Tokushima, Japan

Purpose or Objective
Recently, the demand for modality conversion models has increased in radiotherapy (RT). For instance, MRI-to-CT conversion is widely investigated in the context of MRI-only RT, and CBCT-to-planning-CT conversion is explored to improve the image quality of the CBCT used in IGRT. Deep learning-based models are usually employed for such conversions; however, their quality relies heavily on large amounts of data, whose acquisition cost across modalities is often prohibitive. In this study, we introduce a novel deep learning-based modality conversion model that requires only a small number of images without paired supervision, and apply it to MVCT-to-kVCT conversion to improve the image quality of MVCT.

Materials and Methods
Our method is based on generative adversarial networks (GANs) with several extensions. As extensions, we introduce new loss functions during training: 1) an auto-encoder loss using the encoder and decoder of the generators, 2) perceptual losses using the shallow layers of pre-trained VGG networks, and 3) discriminator and adversarial losses on the latent variables of the generators. These losses help preserve the image quality of the processed images even with small training data (an illustrative sketch of a perceptual loss term is given after the Results).

Results
We trained our model with several datasets acquired from head and neck cancer patients. The dataset sizes range from 16 slices from 2 patients up to 2745 MVCT slices from 137 patients and 2824 kVCT (planning CT) slices from 98 patients. We investigated the data-size dependence of our model and observed that the results converged well even at small data sizes. Figure 1 shows a visual comparison among the MVCT, the MVCT processed by our model trained with only 16 slices from 2 patients, and the reference kVCT. Even with only 16 training slices, noise is largely reduced and contrast is enhanced, while structures are well preserved. We also propose several metrics for assessing image quality in the absence of ground truth. Among them, we show the difference in gradient (DIG) in Fig. 2, where the results of our model and a standard CycleGAN model are compared. The DIG is defined as the difference between the Sobel-approximated gradients of the MVCT image and the processed MVCT image. The DIG becomes larger when the pixel values of the processed image change in the gradient domain, and thus quantitatively measures structural change.
The average DIG of our model was 0.0596 ± 0.0053, while that of the standard CycleGAN was 0.0848 ± 0.0078; the difference is statistically significant (p < 0.001).
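
As an illustration of the DIG metric described above, the following is a minimal Python sketch. The abstract does not specify how the gradient difference is reduced to a single value, so the mean-absolute-difference form, the use of SciPy's Sobel filter, and the array names mvct and processed_mvct are assumptions.

```python
import numpy as np
from scipy import ndimage


def sobel_gradient_magnitude(image: np.ndarray) -> np.ndarray:
    """Approximate the gradient magnitude of a 2D image with Sobel filters."""
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    return np.hypot(gx, gy)


def difference_in_gradient(mvct: np.ndarray, processed_mvct: np.ndarray) -> float:
    """Difference in gradient (DIG) between an MVCT slice and its processed version.

    Assumption: DIG is taken here as the mean absolute difference of the Sobel
    gradient magnitudes; the abstract states only that DIG compares the
    Sobel-approximated gradients and grows with structural change.
    """
    grad_original = sobel_gradient_magnitude(mvct.astype(np.float64))
    grad_processed = sobel_gradient_magnitude(processed_mvct.astype(np.float64))
    return float(np.mean(np.abs(grad_original - grad_processed)))
```

A small DIG therefore indicates that the conversion altered intensities without introducing or removing edges, which is why it is used here as a proxy for structural preservation.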
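
To illustrate the perceptual loss mentioned in the Materials and Methods (loss 2), here is a minimal PyTorch sketch built on the shallow layers of a pre-trained VGG16. The choice of cut-off layer (relu2_2), the L1 distance, and the single-channel handling are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models


class ShallowVGGPerceptualLoss(nn.Module):
    """Perceptual loss on shallow VGG16 feature maps (up to relu2_2).

    Assumption: the cut-off layer and the L1 distance are illustrative
    choices; the abstract only states that shallow layers of pre-trained
    VGG networks are used.
    """

    def __init__(self) -> None:
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:9].eval()  # conv1_1 .. relu2_2
        for p in self.features.parameters():
            p.requires_grad = False  # keep the feature extractor frozen
        self.criterion = nn.L1Loss()

    def forward(self, image_a: torch.Tensor, image_b: torch.Tensor) -> torch.Tensor:
        # Single-channel CT-like images are repeated to 3 channels for VGG;
        # ImageNet intensity normalisation is omitted here for brevity.
        if image_a.shape[1] == 1:
            image_a = image_a.repeat(1, 3, 1, 1)
            image_b = image_b.repeat(1, 3, 1, 1)
        return self.criterion(self.features(image_a), self.features(image_b))
```

Freezing the VGG weights keeps the feature extractor fixed so that only the generator is constrained; in an unpaired setting such a term is typically applied between the input image and its translated counterpart to discourage structural change.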
