Abstract Book

S65

ESTRO 37

many of these parameters in one imaging session. Although links have been established between these single image parameters and response/outcome, prediction accuracy is seldom high and there is an urgent need to further develop such image-based markers. In addition to having access to large-scale image data, preferably acquired at many institutions, several approaches could advance image-based prediction models, among them 1) model-driven and 2) data-driven methods. In model-driven approaches, a hypothesis is formed based on the underlying biological meaning of image data, and the resulting ‘mechanistic’ model is tested against outcome data. One example of such a model could combine blood flow and cell density-related parameters from MRI, assumed to be related to hypoxia and tumor burden, respectively. Data-driven methods in the context of imaging and radiotherapy outcome are called Radiomics and will not be further discussed here. The talk will present published and ongoing work on multi-parametric imaging in the context of outcome prediction and response assessment, highlight promising applications, and address limitations and challenges.
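As an illustration of the model-driven idea described above, the sketch below combines two synthetic per-voxel MRI parameter maps into a hypothetical ‘hypoxic burden’ score. The function name, thresholds, and data are illustrative assumptions for this sketch, not a validated clinical model from the talk.

```python
import numpy as np

def hypoxia_burden_score(blood_flow, cell_density,
                         flow_thresh=30.0, density_thresh=1.2):
    """Toy mechanistic score: low perfusion combined with high
    cellularity is taken as a surrogate for hypoxic tumor burden.
    Thresholds are illustrative, not clinically validated."""
    low_flow = blood_flow < flow_thresh        # poorly perfused voxels
    dense = cell_density > density_thresh      # high-cellularity voxels
    # fraction of tumor voxels that are both poorly perfused and dense
    return float(np.mean(low_flow & dense))

# Synthetic per-voxel parameter maps for one tumor (arbitrary units)
rng = np.random.default_rng(0)
flow = rng.normal(40, 15, size=1000)       # e.g. a perfusion-derived map
density = rng.normal(1.0, 0.3, size=1000)  # e.g. a cellularity proxy

score = hypoxia_burden_score(flow, density)
print(f"hypoxic-burden fraction: {score:.3f}")  # value between 0 and 1
```

Such a score would then be tested against outcome data (e.g. by correlating it with response in a patient cohort), which is what distinguishes a mechanistic model from a purely data-driven one.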

and responses of the imaging device. With the advent of ever more powerful machine learning (ML) techniques, another strategy consists of brute-force processing of large databases. The challenge here is to succeed in collecting large amounts of data and to ensure that the data are representative enough of patient and device variability, while carefully mitigating irrelevant confounding factors and peculiarities that can occur in multicentric studies. If such large databases can be gathered, ML is likely to improve the robustness of segmentation against patient and device variability. However, ML remains a fast-evolving domain, with still many variants in model architecture and training techniques. Interpretability, too, remains a shortcoming, since most ML techniques involve complicated, generic models whose many parameters have no direct meaning in the field of application. Figures of merit (FOMs) are yet another aspect of image segmentation that is still debated. FOMs range from simple volumetric differences to more complicated indicators accounting for correct overlap between ground truth and segmentation results (Dice, Jaccard). Inspired by statistical considerations, indicators like sensitivity and positive predictive value can further assess results by distinguishing between false positives and false negatives. The Hausdorff distance can be used for contours specifically. Tools like STAPLE can also infer a missing ground truth from observed contours. Data and ground-truth collection for segmentation validation remains critical as well. Validation can involve numerical simulations, actual phantom images, or patient images; each provides a different compromise between controllability, access to ground truth, acquisition cost, and realism. The limited size of databases within each institution and the heterogeneity across centers, due to differences in devices or protocols, are also major concerns, motivating the need to standardize processes.
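The overlap-based figures of merit mentioned above (Dice, Jaccard, sensitivity, positive predictive value, volumetric difference) can all be computed directly from binary masks. A minimal sketch on a synthetic ground truth and segmentation (the function name and test data are illustrative):

```python
import numpy as np

def segmentation_foms(truth, pred):
    """Voxel-wise figures of merit for a binary segmentation,
    computed against a binary ground-truth mask."""
    truth, pred = np.asarray(truth, bool), np.asarray(pred, bool)
    tp = np.sum(truth & pred)    # true positives (correct overlap)
    fp = np.sum(~truth & pred)   # false positives (over-segmentation)
    fn = np.sum(truth & ~pred)   # false negatives (under-segmentation)
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),  # fraction of truth recovered
        "ppv": tp / (tp + fp),          # positive predictive value
        "volume_diff": int(pred.sum()) - int(truth.sum()),
    }

# Two overlapping square "lesions" on a small 2-D grid
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16 voxels
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True    # 16 voxels, 9 overlap
foms = segmentation_foms(truth, pred)
print(foms)
```

Note that the two masks here have identical volumes (volumetric difference of zero) yet only partial overlap, which is exactly why overlap-based indicators such as Dice are preferred over volume differences alone.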
Initiatives like the AAPM TG 211 report on PET automatic segmentation methods are efforts in the right direction, providing surveys of the domain and benchmarking tools. Contests like the MICCAI PET segmentation challenge provide further opportunities to test new methods on consensus data within a unified framework. Technological advances and increased uniformity in imaging devices, in terms of image quality, are also expected to alleviate some of the above issues. In conclusion, inverse problems like image reconstruction and segmentation are intrinsically difficult to solve, due to incomplete information. Many approaches hypothesizing some a priori model or regularization exist, most of them with limited applicability, owing to strong underlying assumptions or specificities of calibration and validation data. Machine learning bears the promise of more generic models with no or milder hypotheses, but it is data-intensive, computationally demanding, hardly interpretable, and still in a phase of active methodological development. Finally, segmentation is closely related to even more complicated problems, like the assessment of shape and heterogeneity of volumes, which are currently in an early stage of investigation.

SP-0132 Challenges and solutions for unsupervised registration D. Hawkes University College London, United Kingdom

Symposium: Why is fully automated image segmentation and deformable image registration not here yet?

SP-0130 Automatic segmentation and deformable registration – setting the stage K. Brock University of Michigan, USA

Abstract not received

SP-0131 Challenges and solutions for unsupervised image segmentation J. Lee¹ ¹Université Catholique de Louvain, Box B1-54.07, Molecular Imaging, Radiotherapy and Oncology, Brussels, Belgium Abstract text Segmentation of organs or volumes of interest in anatomical and functional images has attracted interest for a long time. In the case of PET, for instance, early thresholding methods appeared in the late nineties, whereas more complex methods emerged a decade ago. There is now a vast literature on the topic. Segmentation can be considered, like image reconstruction, as an inverse problem, meaning that the acquired data do not contain all the information necessary to determine a unique solution. This intrinsic difficulty explains the numerous attempts and various approaches to solve the problem in the literature, often with a significant sensitivity to calibration parameters and image quality. To address this challenge, segmentation should benefit from mathematical techniques used to solve inverse and ill-posed problems, like statistical inference and physical models. Image segmentation methods can be categorized in several ways. Methods can be manual or (semi-)automatic, reflecting the degree of user involvement in the task, with the inevitable human variability that entails. The user’s expertise, though, which is difficult to formalize and integrate in automatic methods, can compensate for variability in manual segmentation. The optimal balance between these remains an open challenge. Another important distinction concerns method calibration. It can be based on expertise, like explicit analyses of patient population studies, as well as measurements of physical properties

Abstract not received
