
quantification of loss-risk distributions becomes a necessity. The term “robust optimization” is commonly used to express that the source of the uncertainty is considered directly during dose optimization, instead of solving a substitute problem constructed to yield a robust result (such as the PTV concept). The method that currently receives the most interest is worst-case optimization, which aims to ensure that the target volume receives a minimum dose under a reasonable set of displacements and range uncertainties. Although this approach is greatly superior to the PTV concept for particle beams, it shares with the PTV concept the property that it makes no assumptions about the frequencies with which uncertainties occur. In other words, all scenarios are treated as equally likely, which is contrary to intuition (and good practice!) and increases the price of target dose robustness in a population. Probabilistic planning adds the quantification of risks to robust optimization and therefore requires assumptions about the frequencies of uncertainties as input. A number of probabilistic planning concepts have been proposed for photon therapy, all of which rely in one way or another on approximations and assumptions that do not hold for particle beams, primarily because the geometric changes disturb the dose distributions too much. To keep the problems manageable and the number of scenarios realistically large, new methods for dose evaluation in terms of loss-risk distributions need to be deployed. New formulations of the optimization problem can be devised that take advantage of these distributions. Numerical complexity is only a secondary issue on current computer hardware. The primary problem is the dependence on assumptions about the frequencies of uncertainties, and the patient-individual component of these frequency distributions. Hence, it is to be expected that in practice such approaches will see only restricted use.
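
The distinction can be made concrete with a toy objective. The following is a minimal sketch in Python (an illustration under assumed inputs, not any published implementation): the dose-influence matrices D, beam weights w, prescribed dose d_presc, and scenario frequencies p are all invented placeholders. The point is only that the worst-case objective ignores the frequencies p, while the probabilistic objective requires them as input.

```python
# Minimal sketch (hypothetical inputs): worst-case vs. probabilistic
# objectives in scenario-based dose optimization.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beams, n_scenarios = 50, 20, 9

# Hypothetical per-scenario dose-influence matrices (e.g. setup/range shifts).
D = [np.abs(rng.normal(1.0, 0.2, (n_voxels, n_beams))) for _ in range(n_scenarios)]
d_presc = np.ones(n_voxels)                  # prescribed target dose (arbitrary units)
p = np.full(n_scenarios, 1.0 / n_scenarios)  # assumed scenario frequencies

def scenario_losses(w):
    """Quadratic target-dose deviation in every error scenario."""
    return np.array([np.mean((Di @ w - d_presc) ** 2) for Di in D])

def worst_case_objective(w):
    # Worst-case ("minimax") optimization: all scenarios implicitly equally
    # likely; the single worst scenario alone drives the plan.
    return scenario_losses(w).max()

def probabilistic_objective(w):
    # Probabilistic planning: assumed frequencies p weight the losses,
    # yielding the mean of the loss-risk distribution.
    return p @ scenario_losses(w)

w0 = np.full(n_beams, 1.0 / n_beams)
print(worst_case_objective(w0), probabilistic_objective(w0))
```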

Symposium: CT imaging, new developments

SP-0022 Current status and potential of dual energy and spectral CT
J. Andersson¹
¹Norrlands Universitetssjukhus, Department of Radiation Sciences - Radiation Physics, Umeå, Sweden

Conventionally, Computed Tomography (CT) images are reconstructed from data collected using a single X-ray tube potential during a scan. CT images are presented on the Hounsfield scale, which is based on linear attenuation coefficients that have an intrinsic energy dependence.

A method for Dual Energy Computed Tomography (DECT) imaging was first proposed in the 1970s, in which two sets of data are collected with different radiation qualities. These two data sets are used to draw conclusions about the scanned material by analysing the energy dependence of the linear attenuation coefficients. In recent years, the major CT manufacturers have adopted DECT, and several technological solutions are available on the market, including CT scanners with two X-ray tubes, X-ray tubes with rapid tube-voltage switching, and detectors with two layers that provide an energy separation of the incoming photon spectrum.
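
As a concrete illustration of this analysis, the sketch below (a minimal example under assumed numbers, not taken from the presentation) decomposes a single voxel's two measured attenuation values into equivalent fractions of two basis materials. The basis attenuation coefficients are illustrative placeholders, not tabulated data.

```python
# Minimal sketch (hypothetical numbers): two-material basis decomposition
# from linear attenuation coefficients measured at two radiation qualities.
import numpy as np

# Rows: [low-kV, high-kV]; columns: [water, bone] attenuation (1/cm).
# Placeholder values for illustration only.
basis = np.array([[0.227, 0.573],
                  [0.184, 0.428]])

mu_measured = np.array([0.262, 0.208])  # one hypothetical voxel, both spectra

# Solving the 2x2 linear system gives the equivalent fractions of each
# basis material that reproduce the measured energy dependence.
fractions = np.linalg.solve(basis, mu_measured)
print(dict(zip(["water", "bone"], fractions)))
```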

One of the major benefits of DECT is that materials can be characterized in terms of composition and mass density. Further benefits include the mitigation of specific artefacts in CT images using virtual monochromatic reconstructions. Challenges for the application of DECT include the temporal resolution of the data collection, image noise, scattered radiation in wide-volume scanning, and patient dose considerations. From an engineering viewpoint these challenges translate to X-ray tube and detector technology, as well as computer processing speed and optimization. In radiology, DECT has become a routine tool for certain imaging tasks where material characterization and quantification are important for diagnostics.

SP-0023 CT image quality: Using a model observer for clinically relevant optimisation
N. Ryckx¹, D. Racine¹, A. Ba¹, A. Viry¹, F. Bochud¹, F.R. Verdun¹
¹CHUV - Institute of Radiation Physics IRA, Department of Radiology, Lausanne, Switzerland

Introduction

The number of computed tomography (CT) examinations has been increasing steadily over the last twenty years, and this trend shows no sign of slowing down. For example, the number of diagnostic CT examinations in Switzerland increased by 17% between 2008 and 2013. Furthermore, the purpose of CT has been extended to further diagnostic and/or therapeutic procedures: attenuation correction for molecular imaging in nuclear medicine, interventional CT fluoroscopy procedures, and radiation therapy treatment planning. Recently, the introduction of iterative reconstruction algorithms has allowed for a potential dose reduction by artificially removing image noise. However, the risk of reducing the radiation dose too far is that vital diagnostic information may be removed even while the images retain the subjective visual aspect (especially in terms of image noise) of images acquired at higher dose without iterative reconstruction. Finally, the usual image quality metrics (CNR, MTF, NPS) are less pertinent within the iterative reconstruction paradigm, because these reconstructions are highly non-linear and non-stationary.

Materials and methods

We seek to adapt image quality using clinically relevant metrics. For this purpose, we use model observers, which are in-silico image observers based on psychophysics and statistical decision theory. The four cornerstones of a model observer are the following:
- A clinically relevant task (lesion detection, lesion localisation, etc.)
- An observer (human or in-silico)
- An adequate set of images, with representative statistical fluctuations
- A figure of merit (FOM) to quantify the observer performance

In our approach, we evaluate CT image quality using a CHO (channelized Hotelling observer) with dense difference-of-Gaussian (DDoG) channels for low-contrast spherical lesion detection in the abdominal region (e.g. focal liver lesions), or an NPWE (non-pre-whitening matched filter with eye filter) observer for high-contrast lesion detection (e.g. renal stones). The images used for quality assessment are axial CT slices acquired on either an anatomical abdomen phantom (QRM, Moehrendorf, Germany) with low-contrast spherical targets of known size (5 and 8 mm diameter) and contrast (-10 or -20 HU), or a home-made high-contrast phantom consisting of three cylinders (10 cm diameter) of different materials (Teflon for bone, polyethylene for fat and PMMA for soft tissue) immersed in a cylindrical water tank. The FOM used for quality assessment is either the area under the ROC (receiver operating characteristic) curve (AUC) or a detectability index d'. These assessments were performed on 70 CT units in Switzerland.
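
For illustration, the following minimal sketch (a toy example under stated assumptions, not the authors' code) implements a channelized Hotelling observer on simulated noise patches. Simple Gaussian channels stand in for the DDoG channel set, the lesion profile and noise levels are invented, and the AUC is derived from d' via AUC = Φ(d'/√2) under Gaussian assumptions.

```python
# Minimal CHO sketch (simulated data, placeholder channels and lesion).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, n_img = 32, 200                        # patch size, images per class

def gaussian_channel(sigma):
    """Radially symmetric channel template of width sigma, unit norm."""
    y, x = np.mgrid[:n, :n] - n // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (g / np.linalg.norm(g)).ravel()

# Four Gaussian channels as a stand-in for the DDoG channel set.
U = np.stack([gaussian_channel(s) for s in (1.5, 3.0, 6.0, 12.0)], axis=1)

signal = 8.0 * gaussian_channel(2.0)      # hypothetical low-contrast lesion
noise = lambda: rng.normal(0, 20, (n_img, n * n))
absent, present = noise(), noise() + signal

# Channelize, then form the Hotelling template in channel space.
va, vp = absent @ U, present @ U
S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
w_hot = np.linalg.solve(S, vp.mean(0) - va.mean(0))

t_a, t_p = va @ w_hot, vp @ w_hot         # decision variables per class
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_a.var() + t_p.var()))
auc = norm.cdf(d_prime / np.sqrt(2))      # AUC under Gaussian assumptions
print(f"d' = {d_prime:.2f}, AUC = {auc:.3f}")
```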

Results and discussion

For abdominal low-contrast targets, no significant differences in AUC were noted between images reconstructed with standard filtered back-projection (FBP) and with iterative reconstruction, but the dose-slice thickness product (DSP, used as a normalised dose metric) varied from 2.6 to 61 mGy mm, with reconstructed slice thicknesses ranging from 2 to 5 mm. For the 5 mm/20 HU target, 49% of the CT units clustered in an AUC range of 0.86-0.98 for a DSP range of 5-20 mGy mm. Nevertheless, 10% of the CT units were outliers because of relatively high dose levels and limited AUC scores, for