
ACQ Volume 13, Number 2, 2011 — ACQuiring Knowledge in Speech, Language and Hearing

and classification. It allows clinicians to make reasoned judgements about the validity, reliability, sensitivity, and overall utility of the tool in question, supporting the quest for evidence based assessment.

While the CADE framework allows evaluation of a tool’s ability to diagnose or classify a particular disorder, it does not identify how effective or useful the tool is in directing goal-setting or treatment. As a result, in practice, clinicians also require a process for determining which assessment tools and measures can be used to direct goal-setting and enhance treatment outcomes (Dollaghan, 2007). Such targeted research is relatively absent in the evidence based practice literature and clinicians are again encouraged to return to their theoretical frameworks to ensure coherence between their overarching goals and selection of assessment measures.

Ecological validity

It is widely recognised that performance on standardised language batteries such as the Western Aphasia Battery – Revised (WAB-R; Kertesz, 2006) and the Clinical Evaluation of Language Fundamentals (4th ed., Australian; CELF-4 Australian; Semel, Wiig, & Secord, 2006) does not reflect real-life communication skills (Apel, 1999; Turkstra et al., 2005). Chaytor and Schmitter-Edgecombe (2003) state that ecological validity “refers to the degree to which test performance corresponds to real world performance” (p. 182). An important distinction should be made between the content and construct validity of a test and its ecological validity. In other words, a standardised test may have strong psychometric properties with little real world relevance.

It is promising that an increasing number of functional communication measures are being developed in the field. However, surveys of speech pathology services suggest that impairment-driven batteries remain the most commonly used assessments in clinical practice (Verna et al., 2009). Verna et al. (2009) found that 92.8% of their 70 respondents routinely used impairment-based language assessments, while only 21.4% included measures of functional communication and 2.9% of clinicians completed discourse analysis. Expert consensus supports a shift in practice, viewing standardised assessments as “only one component of an evaluative process that includes multiple sources of information” (Turkstra et al., 2005, p. 220). As a profession we need to continue developing and increasing the use of functional, dynamic assessment tasks to supplement the data obtained from standardised tests (Turkstra et al., 2005).

Considering client values and perspectives

Our final, but perhaps most important, point of discussion requires reflection on the role that client values and perspectives play in evidence based assessment. Kagan and Simmons-Mackie (2007) suggest that the selection of assessment tools should be guided by the “real-life outcome goals” (p. 309) that are relevant to each individual client. This approach stands in stark contrast to impairment-driven or traditional assessment. The desired end point is likely to be different for each client and is expected to change and evolve over time (Kagan & Simmons-Mackie, 2007). The uniqueness of each person’s situation highlights the need for a tailored assessment approach that considers the desired end point from a functional perspective, with life participation in mind (Kagan & Simmons-Mackie, 2007).

Kovarsky (2008) presents an interesting discussion on the use of “personal experience narratives” when components can be captured, quantified, and then targeted directly through intervention.

The Living with Aphasia: Framework for Outcome Measurement (A-FROM; Kagan et al., 2008) is a conceptual framework that builds on the ICF. The four key domains (Severity of disorder; Participation in life situations; Communication and language environment; and Personal identity, attitudes and feelings) are represented as intersecting circles, with the point of overlap constituting “life with aphasia” or quality of life (Kagan & Simmons-Mackie, 2007; Kagan et al., 2008). While the conceptual framework was developed for use with clients with aphasia, it has potential for use with any client group or disorder. Routinely used assessment tools can be mapped on to the domains of the ICF or A-FROM, to ensure that measurements are holistic and capture function at each level (Kagan & Simmons-Mackie, 2007; Kagan et al., 2008; McLeod & Threats, 2008).

Psychometric properties of assessment tasks

While the ICF and A-FROM provide overarching conceptual frameworks to guide assessment, an evidence based practitioner must still consider the validity, reliability, and psychometric make-up of the individual assessment tools or methods selected. This can be a daunting and time-consuming task in clinical practice; however, it is a critical component of reliable and valid assessment practice.

Evaluation of psychometric properties is particularly important when assessment is being used to serve screening or diagnostic purposes. Screening tools aim to provide a quick and efficient means of identifying the presence or absence of a disorder, while more comprehensive assessment or diagnostic batteries seek to profile and classify impairments and provide indices of severity. It is critical that clinicians consider features such as the extent to which the test measures what it is designed to measure (validity), whether the test provides representative sampling of the domain of behaviours (content validity), whether it has strong theoretical and empirical foundations (construct validity), whether its scores are reproducible and consistent (reliability), and whether it has sufficient sensitivity and specificity to detect the behaviours in question (Tate, 2010; Turkstra et al., 2005). Sensitivity values reflect the percentage of people with a disorder correctly identified by a given test or diagnostic procedure according to a reference standard (Dollaghan, 2007). Specificity values reflect the percentage of people without the disorder who are correctly identified as such (Dollaghan, 2007).

The small number of systematic reviews that do exist in the literature has highlighted that many of the tests and measures used by speech pathologists have strong content and face validity (i.e., they are thoughtfully and carefully constructed); however, their construct validity is often weaker (Turkstra et al., 2005). Furthermore, many of the screening tools that are available, such as those for aphasia, provide insufficient reliability, validity and sensitivity/specificity data to make a true assessment of their clinical utility (Koul, 2007). These are again issues that need to be addressed by the field and considered in practice.
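For readers who wish to compute these values from published test data, the sensitivity and specificity definitions above amount to simple proportions over a two-by-two classification table. The following sketch uses invented counts purely for illustration; they are not drawn from any study cited here.

```python
# Hypothetical screening results compared against a reference-standard
# diagnosis (all counts are invented for demonstration).
true_positives = 45   # disorder present, screening test positive
false_negatives = 5   # disorder present, screening test negative
true_negatives = 90   # disorder absent, screening test negative
false_positives = 10  # disorder absent, screening test positive

# Sensitivity: proportion of people WITH the disorder correctly identified.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of people WITHOUT the disorder correctly identified.
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 90%
print(f"Specificity: {specificity:.0%}")  # 90%
```

In this invented example the tool identifies 45 of the 50 people who truly have the disorder (90% sensitivity) and 90 of the 100 who do not (90% specificity); whether those figures are adequate depends on the clinical stakes of a missed case versus a false alarm.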

It has been acknowledged that psychometric appraisals can be difficult and time-consuming for clinicians to complete in practice, yet there are useful guides available in the literature. For example, Dollaghan (2007) provides a practical and useful framework for the critical appraisal of diagnostic evidence (CADE). It allows the evaluation of screening tools and standardised batteries designed specifically for detection of a disorder, differential diagnosis