
Table 1. Participant results for microstructure and macrostructure analyses

Participant             P#1            P#2            P#3            P#4            P#5            P#6
Age                     6;6            7;5            7;7            8;7            8;9            9;6
Gender                  F              M              M              F              M              F
School year level       1              2              1              3              3              3
Home language           AE             SAE            AE             AE             AE             AE

Microstructure
NCU                     44 (+0.35)     32 (–0.63)     23 (–1.48)     28 (–1.26)     33 (–0.88)     89 (+2.06)
NDW                     78 (–0.97)     66 (–1.99)     45 (–3.01)     69 (–1.97)     66 (–3.13)     163 (+0.04)
MLCU                    6.38 (–1.25)   5.50 (–2.57)   6.90 (–1.26)   7.26 (–0.52)   5.72 (–1.82)   6.93 (–1.10)
GA-SAE                  57%            78%            78%            82%            79%            76%
GA-AE                   89%            84%            91%            82%            94%            92%

Macrostructure: Narrative Scoring Scheme (NSS)
Introduction            2 (–0.77)      3 (+0.05)      1 (–1.99)      3 (–0.66)      3 (–0.67)      4 (+0.51)
Character development   3 (+0.12)      4 (+0.97)      2 (–1.20)      4 (+0.87)      5 (+2.14)      5 (+1.83)
Mental states           2 (–0.31)      1 (–1.99)      1 (–2.09)      2 (–0.84)      1 (–2.08)      1 (–2.45)
Referencing             2 (–1.45)      2 (–1.33)      0 (–3.35)      5 (+2.59)      3 (–0.42)      4 (+0.97)
Conflict resolution     2 (–1.38)      2 (–1.51)      1 (–2.82)      2 (–2.05)      3 (–0.65)      4 (+0.21)
Cohesion                2 (–1.31)      2 (–1.12)      1 (–2.35)      3 (–0.73)      3 (–0.68)      3 (–0.50)
Conclusion              4 (+1.46)      3 (–0.03)      2 (–1.04)      2 (–1.33)      4 (+0.97)      5 (+1.54)
Total NSS               17 (+0.97)     17 (–0.93)     8 (–2.83)      21 (–0.61)     22 (–0.25)     26 (+0.27)

Notes: NCU = number of C-units; MLCU = mean length of C-unit in words; NDW = number of different words; GA-SAE = grammatical accuracy for Standard Australian English; GA-AE = grammatical accuracy for Aboriginal English. Standard deviations, compared to the SALT database (+/- 6 months), are shown in parentheses.

cited in Miller & Iglesias, 2008) then marked and coded according to SALT conventions.

Analysis
Several measures of microstructure frequently explored in the literature, and shown to be sensitive to age and/or impairment, were selected for analysis: number of C-units (NCU), mean length of C-unit in words (MLCU), number of different words (NDW), and grammatical accuracy (GA). MLCU in words, rather than morphemes, was selected to minimise the effects of reduced noun and verb inflections, which are often a feature of AE. Percentage grammatical accuracy was calculated by dividing the number of grammatically correct C-units by the total number of C-units (Fey et al., 2004; Westerveld & Gillon, 2010). The first GA measure conformed to SAE grammatical expectations (GA-SAE). A second measure, GA-AE, was created to examine the effect of AE on grammatical accuracy; examples of AE from the participants' narratives are provided in the appendix. All utterances classified as "grammatically inaccurate" in the first round of analysis were examined for the presence of AE forms, making it possible to calculate grammatical accuracy percentages that accepted use of AE as grammatically accurate (GA-AE).

To investigate the appropriateness of available normative data, the microstructure measures were compared to the SALT Narrative Story Retell Reference Database, which contains samples from 346 typically developing English-fluent children aged 4;04 to 10;00 years from Wisconsin and California (Miller & Iglesias, 2008). This database was selected because it includes data for the FWAY wordless picture book and because no normative data were available for Australian children. Grammatical accuracy data for the FWAY narrative were not available in the SALT database, so normative comparisons for this measure could not be made.
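To make these calculations concrete, the sketch below shows how a grammatical accuracy percentage and a standard-deviation score against age-matched database norms can be computed. It is a minimal illustration only: the C-unit counts and the reference mean and SD used here are invented placeholders, not the study's data, and treating the bracketed values in Table 1 as simple standard scores is an assumption.

# Minimal sketch of the microstructure calculations described above (Python).
# All numbers are hypothetical examples, not participant data.

def grammatical_accuracy(correct_c_units, total_c_units):
    """Percentage of C-units judged grammatically correct."""
    return 100 * correct_c_units / total_c_units

# GA-SAE: only C-units meeting SAE grammatical expectations count as correct.
ga_sae = grammatical_accuracy(correct_c_units=25, total_c_units=44)

# GA-AE: C-units whose only "errors" are AE forms are re-classified as
# correct, so the numerator grows while the denominator stays the same.
ga_ae = grammatical_accuracy(correct_c_units=25 + 14, total_c_units=44)

def sd_from_database_mean(score, db_mean, db_sd):
    """Standard deviations from the age-matched (+/- 6 months) database mean."""
    return (score - db_mean) / db_sd

# e.g., a number of different words (NDW) of 66 compared with hypothetical
# age-band reference statistics (mean 95, SD 15).
ndw_sd = sd_from_database_mean(score=66, db_mean=95, db_sd=15)

print(f"GA-SAE = {ga_sae:.0f}%  GA-AE = {ga_ae:.0f}%  NDW = {ndw_sd:+.2f} SD")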

The Narrative Scoring Scheme (NSS) (Heilmann et al., 2010; Miller & Iglesias, 2008) was used to analyse oral narrative structure, since reference data for the FWAY narrative using the NSS are available within the SALT database. The NSS is scored on a 0–5 point scale for each of seven categories (introduction, character development, mental states, referencing, conflict/resolution, cohesion, and conclusion). A score of 0 reflects errors such as not completing or refusing the task, a score of 1 reflects minimal presence of the target features or immature performance, a score of 3 reflects emerging skills, and a score of 5 reflects proficient performance. The scores in between (i.e., 2 and 4) are undefined and subject to the examiner's judgment that performance falls between the major anchors.

Reliability
Interrater reliability for key coding and analysis was explored. The first author coded and analysed all written transcripts independently of the second author. Percentage agreement was 96% for bound morpheme coding and 89% for grammatical accuracy. Reliability for the NSS scores was calculated using Krippendorff's alpha for ordinal values (Freelon, 2011); this method accounts for the degree of difference between scorers and for the possibility of chance agreement. According to Krippendorff, alpha values above .80 indicate good agreement, values between .67 and .80 are sufficient for tentative conclusions, and values below .67 suggest low reliability. Results for the total score and each component, in order of strength, were: Total Score α = .806; Conclusion α = .788; Character Development α = .774; Mental States α = .696; Introduction α = .63; Conflict Resolution α = .626; Referencing α = .483; Cohesion α = .403. The lower reliability coefficients for Referencing and Cohesion suggest that the criteria for these measures were more open to interpretation and that scorers need clearer guidance about how the criteria apply to the specific narrative under investigation. All differences were resolved by consensus and re-coded as agreed.
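As an illustration of how reliability figures of this kind can be computed, the sketch below calculates simple percentage agreement and ordinal Krippendorff's alpha in Python. It assumes the third-party krippendorff package is installed (the study itself cites Freelon, 2011), and the two raters' scores shown are invented placeholders rather than the study's data.

import numpy as np
import krippendorff  # assumed third-party package: pip install krippendorff

def percentage_agreement(rater_a, rater_b):
    """Percentage of coding decisions on which two raters agree exactly."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical NSS category scores (0-5) from two raters across six narratives.
rater_1 = [2, 3, 1, 3, 3, 4]
rater_2 = [2, 3, 2, 3, 4, 4]

agreement = percentage_agreement(rater_1, rater_2)

# Krippendorff's alpha for ordinal data: one row per rater, one column per
# narrative; the ordinal metric weights a 1-vs-3 disagreement more heavily
# than a 1-vs-2 disagreement, and agreement expected by chance is corrected for.
scores = np.array([rater_1, rater_2], dtype=float)
alpha = krippendorff.alpha(reliability_data=scores, level_of_measurement="ordinal")

print(f"Exact agreement = {agreement:.0f}%  ordinal alpha = {alpha:.2f}")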
