Dynamic assessment principles address the potentially confounding
aspects of standard forms of assessment (e.g., culturally
specific question–answer routines). That is, CALD children
who, for example, are not exposed to the direct nature of
western speech pathology-style questioning at home, might
be misidentified as language impaired on the basis of
responses that represent cultural difference rather than
language difficulty.
Dynamic assessments incorporate a learning component
into the testing situation in preference to static assessment
administration. The learner’s responsiveness to teaching
is assessed. Test-teach-retest procedures have been
identified as the dynamic approach best suited to SP
assessment and intervention (Gutiérrez-Clellen & Peña,
2001). Such approaches, however, are limited to the
diagnosis of learning impairment and do not necessarily
provide specific information concerning where language
breakdown occurs (Gutiérrez-Clellen et al., 2006). For
example, dynamic testing (a shortened version of dynamic assessment) has been shown by Chaffey, Bailey, and Vine
(2003) to provide valid data regarding high learning potential
in a sample of rural NSW Australian Aboriginal primary
schoolers (grades 3–5). This form of testing proved to be a more sensitive measure than alternative static cognitive tests, highlighting the potential of dynamic
testing in school assessments.
More recently, Kramer, Mallett, Schneider, and Hayward
(2009) investigated the use of dynamic language
assessments to assess the narrative abilities of First Nations grade 3 students on the Samson Cree Reserve, Alberta,
Canada. The authors used the Dynamic Assessment
Intervention tool (DAI; Miller, Gillam, & Peña, 2001), which
was designed to minimise social and cultural bias when
assessing the language development of CALD children. The
mediated test-teach-retest method was employed to assess
oral narratives constructed from wordless storybooks.
Samples were scored according to content (e.g.,
establishment of time and place) and results showed that
the DAI accurately differentiated most typical language
learners from those learners with possible language-
learning difficulties.
Although Kramer et al. (2009) discussed the universality
of storytelling, the authors did not examine the cultural
validity of the criteria used for scoring the stories. The
cultural validity of scoring needs to be considered in light of
cultural variability. That is, certain semantic features might
have a different significance according to linguistic and/or
cultural membership. This idea is based on the linguistic
relativity hypothesis, which suggests that perception is
limited by the language in which we think and speak. For
example, when telling a story, speakers of language X
might preferentially refer to the place of an event over the time of the same event, whereas speakers of language Y might
consider the place far less important than the time. This
does not limit the usefulness of dynamic assessment, but
does remind users of the impact culture and language can
have on the interpretation of assessment results.
Novel linguistic stimuli approach
A proposed alternative method of limiting cultural and linguistic biases in language testing is to use novel stimuli in assessments. Non-word repetition tasks have been used to assess verbal working memory because, with careful construction, the stimuli are not dependent on a participant's lexicon (Gathercole, 1995). Stimuli are, however, dependent on phonological familiarity and thus must be constructed according to the phonotactics of the target language. The evidence for their reliability with English-Spanish bilingual speakers in the United States is not yet established. Ellis Weismer et al. (2000) found supporting evidence, whereas Gutiérrez-Clellen and Simon-Cereijido (2010) concluded that if this type of testing is to be completed, both languages need to be assessed and the testing should not be used to make diagnoses in isolation. Speech Pathology Australia (2009) similarly recommends the assessment of both/all of a CALD child's spoken languages.
Conclusions about the successes or shortcomings of using non-word stimuli with English-Spanish bilinguals compared with Indigenous Australian populations cannot be drawn without complication. For example, the inherently formal nature of the non-word repetition assessment and its non-meaningful stimuli (Gould, 2008b) suggest that in an Indigenous Australian environment, performance is potentially confounded by contextual cultural bias. A variation of formal non-word repetition tests was therefore trialled when assessing language development in an Australian Aboriginal community (Gould, 2008c). Gould (2008c) describes how she overcame cultural barriers by designing a non-word repetition task for use in the aforementioned longitudinal research project assessing the language development of AE speakers. The trialled assessment is based on the familiar speech pathology subtest of the Queensland University Inventory of Literacy (QUIL; Dodd et al., 1996) and the Sutherland Phonological Awareness Test – Revised (Neilson, 2003). It is an elegantly designed adaptation of a non-word test involving the use of 18 phonotactically AE-relevant non-words (see Gould, 2008c, for a full description of the testing methodology). It differs from other non-word tests in that, while it requires the child to repeat the non-word, repetitions are elicited during a play-based activity rather than during a formal standardised repetition task.
Overall, Gould (2008c) shows that the culturally sensitive administration of a culturally appropriate assessment tool helps to: identify contributing reasons for literacy development difficulties; give qualitative information as to the nature and severity of difficulties; highlight abilities which had not been considered or had been ruled out by formal testing; and identify the need for a hearing assessment. Clearly this culturally appropriate format of assessment contributes greatly to an overall picture of a child that is potentially more accurate than that drawn from formal, culturally biased assessments.
At this stage, results of such a non-standard assessment cannot be compared with norms. Gould (2008c) suggests that, in the absence of norms, data analysis should be completed in conjunction with Aboriginal educators/co-workers. When adapting a standardised test, translation of linguistic stimuli alone is not sufficient to ensure validity when assessing a CALD child's communication abilities (Carter et al., 2005; Speech Pathology Australia, 2009). Gould (2008c) highlighted the need for cultural translation and adaptation on a number of levels, including environmental context, test format, examinee/examiner relationship, recognition of different learning styles, and recognition of cultural differences such as "shame". Gould (1999, cited in Gould, 2008b) also showed that, without accounting for these differences when testing the communication development of Australian Aboriginal children, standardised tests are likely to result in the over-diagnosis of language impairment.