DOI: 10.1055/s-0041-1730413
Factors Associated with Speech-Recognition Performance in School-Aged Children with Cochlear Implants and Early Auditory-Verbal Intervention
Abstract
Background Considerable variability exists in the speech recognition abilities achieved by children with cochlear implants (CIs) due to varying demographic and performance variables including language abilities.
Purpose This article examines the factors associated with speech recognition performance of school-aged children with CIs who were grouped by language ability.
Research Design This is a single-center cross-sectional study with repeated measures for subjects across two language groups.
Study Sample Participants included two groups of school-aged children, ages 7 to 17 years, who received unilateral or bilateral CIs by 4 years of age. The High Language group (N = 26) had age-appropriate spoken-language abilities, and the Low Language group (N = 24) had delays in their spoken-language abilities.
Data Collection and Analysis Group comparisons were conducted to examine the impact of demographic characteristics on word recognition in quiet and sentence recognition in quiet and noise.
Results Speech recognition in quiet and noise was significantly poorer in the Low Language compared with the High Language group. Greater hours of implant use and better adherence to auditory-verbal (AV) therapy appointments were associated with higher speech recognition in quiet and noise.
Conclusion To ensure maximal speech recognition in children with low-language outcomes, professionals should develop strategies to ensure that families support full-time CI use and have the means to consistently attend AV appointments.
Significant variability exists in the speech recognition abilities of children with cochlear implants (CIs).[1] [2] [3] [4] As summarized in [Table 1], multiple variables influence the speech recognition abilities of children with CIs, including age at implantation,[1] [2] duration of CI use,[3] [5] the child's language abilities,[3] [5] and factors related to education/therapy approaches.[4]
Table 1. Factors influencing speech recognition in children with CIs

| Factor | Authors (year) | Sample size, age | Subject description | Test results and interpretation |
|---|---|---|---|---|
| Age at implant | Geers, Brenner, and Davidson (2003)[23] | N = 181, 8–9 y | Implanted by 6 y (M: 3;5 y, SD: 10 mo) | 48.3% (SD: 29) on easy version of LNT; 44.2% (SD: 27) on hard version of LNT; late age at CI may explain poor scores |
| | Davidson et al (2011)[5] | N = 112, 15–18 y | Subset of children from Geers and Brenner (2003) to examine increased duration of CI | Higher average scores in high school vs. elementary school: 60.1% (SD: 23) on LNT; 80.3% (SD: 27) on BKB-SIN in quiet; 52.0% (SD: 26) on BKB-SIN in noise (+10 dB SNR) |
| | Dettman et al (2016)[2] | N = 403, 8–10 y | Range of implantation ages | CNC score decreases as implant age increases: 85% at 12 mo; 75% at 13–18 mo; 76% at 19–24 mo; 52% at 25–42 mo; 45% at 43–72 mo |
| | Tajudeen et al (2010)[24] | N = 110 | Range of implantation ages | LNT mean significantly better if implanted by 12 mo compared with 13–24 mo; 13–24 mo better than 25–36 mo (N = 33); when adjusting for hearing age (mo after implant), no group differences |
| Language abilities | Eisenberg et al (2016)[3] | N = 188, testing at 48-, 60-, and 72-mo postimplant | Implanted by 5 y (M: 29.4 mo); enrolled in CDaCI study | Linear relationship between HINT-C in quiet and at +10 dB SNR and language scores; poor HINT-C scores associated with language decrements at 48–72 mo; HINT-C scores ≥ 50% showed improved language scores over time |
| | Davidson et al (2011)[5] | N = 112, 15–18 y | Compared scores from 8–9 y with those as a teen | Word and sentence recognition scores increased linearly until a language age of 10–11 y |
| | Caldwell and Eisenberg (2013)[25] | N = 19 normal hearing; N = 27 CI; N = 8 HA | Age at implant: M = 21 mo (SD = 13); age at test: 81 mo (SD = 5) | Age at implant and expressive vocabulary significantly related to speech recognition; those with typical hearing and CI had similar reductions in speech recognition from the quiet to noise condition |
| Communication mode | Dettman et al (2013)[25] | N = 31 | 23 in auditory–oral; 8 in bilingual–bicultural | Children educated in auditory-verbal and auditory-only settings had better word and sentence recognition than those in bilingual–bicultural programs |
| | Geers, Brenner, and Davidson (2003) | N = 181, 8–9 y | Implanted by 5 y | Children in classrooms focused on listening and spoken language had better speech recognition than those in total communication |
| | Geers et al (2017)[4] | N = 97 | Grouped based on sign language exposure | Children in families who did not use sign had better speech recognition than those who did |
Abbreviations: BKB-SIN, Bamford–Kowal–Bench sentence recognition; CDaCI, Childhood Development after Cochlear Implantation study; CI, cochlear implant; CNC, consonant–nucleus–consonant word recognition; HA, hearing aid; HINT-C, Hearing In Noise Test – Children; M, mean; LNT, Lexical Neighborhood Test; SD, standard deviation; SNR, signal-to-noise ratio.
In addition to these factors, consistency of daily implant use is critical to successful CI outcomes. Park et al[6] reported better receptive and expressive language outcomes for 3-year-old children who used their CIs during all waking hours as compared with those who used their CIs only part of the day. In fact, full-time device use was a better predictor of language outcomes than age at implantation. Gagnon et al[7] and Easwar et al[8] reported similar findings supporting the importance of consistent device use, with the latter study finding a significant correlation between daily duration of implant use and monaural speech recognition in quiet.
Study Rationale
The objective of the present study was to explore behavioral and demographic differences in groups of school-aged children with CIs who had low or high scores on a commonly used language test. Separate language groups were defined to examine how demographic variables, including age at CI, age at testing, data logging hours, and percentage of speech–language and audiology appointments attended, support successful speech and language outcomes. Children with lower language scores or inconsistent implant use were hypothesized to have poorer speech recognition outcomes. Findings of this study will be valuable to pediatric hearing health care professionals seeking to better understand the influence of language and demographic factors on the speech recognition of school-aged children with CIs when evaluated with commonly used tests. Moreover, study results will determine how consistent implant use and attendance at audiology and speech–language therapy appointments contribute to variability in speech recognition outcomes.
Methods
Subjects
Children with congenital bilateral severe to profound hearing loss and CIs were divided into two groups based on their standard scores from the Core Language scale of the Clinical Evaluation of Language Fundamentals – Fifth Edition (CELF-5).[9] The High Language group had a composite score of 100 or more on the CELF-5, whereas the Low Language group had a composite score of 85 or less on the CELF-5. The CELF-5 was selected because it is commonly used to determine language aptitude in children with hearing loss.[10] Additional inclusion criteria were as follows:
- At least one CI by 4 years of age
- Primary communication via listening and spoken language in American English (i.e., limited use of sign language in most daily listening settings)
- Minimum of 6 hours of CI use per day, as indicated by data logging (or by parent report for one participant for whom data logging was unavailable)

Exclusion criteria were as follows:

- Additional disabilities that could induce delays in language development
- Anatomical abnormalities that could cause delays in language development, such as ossification after bacterial meningitis, cochlear nerve deficiency, or significant cochlear deformities
Licensed speech–language pathologists reviewed the clinical database from one speech and hearing center to identify children who met the inclusion criteria and recruited 26 children who qualified for the High Language group and 24 who qualified for the Low Language group. The demographics of these study participants are provided in [Tables 2] and [3].
Table 2. Demographics of the Low Language group participants

| Subject | CI side | Age (y) | Age at first HA (mo) | Age at first CI (mo) | CELF | Sound processor R/L | Hrs data logging | % therapy attended | % audiology attended |
|---|---|---|---|---|---|---|---|---|---|
| 1A | Seq Bil | 15.4 | 19 | 26 | 58 | Nuc CP1000/CP1000 | 11.4 | 78.2 | 100 |
| 2A | Seq Bil | 11.3 | 2 | 15 | 75 | Nuc CP1000/CP1000 | 13.9 | 73.5 | 50 |
| 3A | Seq Bil | 16.0 | 24 | 48 | 84 | Nuc Freedom/CP910 | 14 | 80.6 | 95.7 |
| 4A | Seq Bil | 10.9 | 24 | 26 | 76 | Nuc CP910/CP910 | 12.9 | 49.2 | 88.9 |
| 5A | Seq Bil | 16.7 | 24 | 48 | 58 | Nuc CP1000/CP1000 | 13.7 | 70.2 | 80 |
| 6A | Seq Bil | 16.7 | 24 | 48 | 61 | Nuc CP1000/CP1000 | 13.9 | 70.2 | 80 |
| 7A | Seq Bil | 17.0 | 29 | 33 | 52 | Nuc CP1000/CP910 | 10.8 | 40.0 | 76.9 |
| 8A | Seq Bil[a] | 17.5 | 29 | 50 | 50 | Nuc CP1000/CP100 | 12.9 | CNE | 64.7 |
| 9A | Left | 8.7 | 10 | 13 | 73 | NA/Nuc CP1000 | 15.4 | 69.2 | 77.8 |
| 10A | Sim Bil | 14.1 | 19 | 22 | 45 | Nuc CP1000/CP1000 | 14.2 | CNE | 80 |
| 11A | Seq Bil | 15.9 | 12 | 14 | 77 | Nuc CP910/CP910 | 15 | 85.1 | 86.2 |
| 12A | Right | 11.6 | 31 | 39 | 57 | Nuc CP950/NA | 6 | 54.5 | 83.3 |
| 13A | Seq Bil | 17.1 | 17 | 21 | 75 | Nuc CP1000/Naida Q90 | 15.6 | 86.7 | 87.0 |
| 14A | Seq Bil | 14.7 | 8 | 20 | 61 | Nuc CP910/CP910 | 12 | 25.0 | 74.1 |
| 15A | Seq Bil | 12.9 | 19 | 24 | 85 | Nuc CP1000/CP1000 | 13.2 | CNE | 93.1 |
| 16A | Seq Bil | 13.5 | 2 | 13 | 57 | Nuc CP910/CP1000 | 12 | 83.3 | 72.2 |
| 17A | Seq Bil | 12.2 | 13 | 16 | 76 | Nuc CP1000/CP1000 | 11.7 | 70.5 | 87.2 |
| 18A | Seq Bil | 9.8 | 22 | 26 | 40 | Nuc CP1000/CP1000 | CNE | 66.1 | 75.0 |
| 19A | Seq Bil | 13.0 | 33 | 40 | 67 | Nuc CP1000/CP1000 | 9.3 | 59.1 | 86.7 |
| 20A | Seq Bil | 13.2 | 2 | 15 | 45 | Nuc CP910/CP910 | 10 | 68.8 | 84.6 |
| 21A | Seq Bil | 16.6 | 1 | 32 | 62 | Nuc CP1000/CP1000 | 13.5 | 48.7 | 66.7 |
| 22A | Sim Bil | 14.1 | 24 | 24 | 70 | Naida Q70/Naida Q70 | 12 | CNE | 87.0 |
| 23A | Right | 10.1 | 21 | 33 | 62 | Nuc CP910/CP910 | 14.6 | 64.5 | 38.9 |
| 24A | Right | 14.1 | 4 | 15 | 73 | Nuc CP910/NA | 12.9 | CNE | 94.1 |
| Mean (SD) | | 13.9 (2.6) | 17.2 (10.0) | 27.5 (12.3) | 64.1 (12.5) | | 12.6 (2.2) | 65.4 (16.2) | 79.6 (14.0) |
Abbreviations: Bil, bilateral; CELF-5, Clinical Evaluation of Language Fundamentals – Fifth Edition standard score; CI, cochlear implant; CNE, could not evaluate; HA, hearing aid; Hrs, average hours per day; L, left ear; NA, not applicable; Nuc, Nucleus; R, right ear; Seq, sequential; Sim, simultaneous.
a Tested with only left implant due to malfunctioning right processor. Percentage of therapy and audiology refer to the percentage of scheduled visits that were attended.
Table 3. Demographics of the High Language group participants

| Subject | CI side | Age (y) | Age at first HA (mo) | Age at first CI (mo) | R/L PTA (dB HL) | CELF | Sound processor R/L | Hrs data logging | % therapy attended | % audiology attended |
|---|---|---|---|---|---|---|---|---|---|---|
| 1B | Seq Bil | 13.4 | 12 | 25 | 25/22 | 108 | Nuc CP1000/CP1000 | 13 | 53.1 | 87.5 |
| 2B | Seq Bil | 14.8 | 3 | 13 | 22/20 | 100 | Nuc CP910/CP910 | 15.2 | 85.1 | 93.8 |
| 3B | Sim Bil | 10.3 | 9 | 17 | 30/27 | 100 | Nuc CP950/CP950 | 10.9 | 94.1 | 100.0 |
| 4B | Seq Bil | 10.0 | 1.5 | 32 | 27/28 | 100 | Nuc CP910/CP910 | 14.4 | 96.7 | 93.9 |
| 5B | Seq Bil | 12.5 | 16 | 40 | 23/22 | 116 | Nuc CP910/CP800 | 15 | 88.4 | 88.9 |
| 6B | Seq Bil | 13.1 | 13 | 17 | 27/32 | 120 | Sonnet 2/Sonnet 2 | 12[a] | CNE | 85.7 |
| 7B | Seq Bil | 7.5 | 1 | 13 | 23/23 | 111 | Nuc CP910/CP910 | 14 | 87.6 | 95.1 |
| 8B | Seq Bil | 8.0 | 4 | 41 | 25/22 | 133 | Nuc CP910/CP910 | 14 | 88.7 | 88.9 |
| 9B | Seq Bil | 9.6 | 26 | 30 | 28/27 | 107 | Nuc CP1000/CP1000 | 14.5 | 89.8 | 96.8 |
| 10B | Seq Bil | 8.7 | 1 | 14 | 28/27 | 120 | Nuc CP1000/CP1000 | 12 | 83.9 | 94.1 |
| 11B | Sim Bil | 12.5 | 15 | 28 | 25/28 | 103 | Naida Q70/Naida Q70 | 12.2 | CNE | 88.9 |
| 12B | Seq Bil | 7.5 | 1 | 12 | 22/23 | 111 | Nuc CP1000/CP1000 | 12.8 | 93.8 | 100.0 |
| 13B | Seq Bil | 9.6 | 16 | 20 | 22/23 | 117 | Nuc CP910/CP910 | 14.9 | 58.2 | 88.9 |
| 14B | Seq Bil | 8.3 | 28 | 30 | 23/23 | 100 | Nuc CP910/CP910 | 14 | 86.1 | 100.0 |
| 15B | Sim Bil | 12.5 | 1 | 10 | 32/28 | 102 | Nuc CP1000/CP1000 | 14 | 72.4 | 92.9 |
| 16B | Seq Bil | 9.3 | 3 | 10 | 28/33 | 120 | Nuc CP1000/CP1000 | 14 | 100.0 | |
| 17B | Seq Bil | 14.5 | 3 | 10 | 22/25 | 108 | Nuc CP1000/CP1000 | 12.7 | 89.7 | 75.0 |
| 18B | Seq Bil | 11.2 | 3 | 13 | 23/22 | 100 | Nuc CP910/CP910 | 13 | CNE | 94.4 |
| 19B | Seq Bil | 15.3 | 2 | 35 | 27/22 | 106 | Nuc CP910/CP910 | 13 | 86.4 | 90.6 |
| 20B | Sim Bil | 10.4 | 2 | 14 | 22/27 | 111 | Nuc CP910/CP910 | 11.8 | CNE | 82.4 |
| 21B | Seq Bil | 14.3 | 1.5 | 13 | 25/23 | 120 | Nuc CP950/CP950 | 13.2 | 89.7 | 95.8 |
| 22B | Seq Bil | 16.0 | 2 | 12 | 28/27 | 132 | Naida Q70/Naida Q70 | 14 | 92.0 | 76.5 |
| 23B | Seq Bil | 16.0 | 1 | 22 | 25/25 | 100 | Nuc CP1000/CP1000 | CNE | 85.7 | 88.2 |
| 24B | Seq Bil | 14.0 | 10 | 34 | 27/22 | 106 | Nuc CP910/CP910 | 14 | 94.7 | 80.0 |
| 25B | Sim Bil | 8.0 | 0.75 | 9 | 27/28 | 109 | Nuc CP1000/CP1000 | 12.2 | 83.3 | 92.6 |
| 26B | Sim Bil | 11.2 | 12 | 15 | 22/18 | 111 | Nuc CP910/CP910 | 13 | 81.6 | 97.1 |
| Mean (SD) | | 11.5 (2.8) | 7.2 (7.9) | 20.3 (10.1) | 25/25 (3/4) | 110.4 (9.5) | | 13.4 (1.1) | 84.8 (11.1) | 91.1 (7.0) |
Abbreviations: Bil, bilateral; CELF, Clinical Evaluation of Language Fundamentals – Fifth Edition standard score; CI, cochlear implant; CNE, could not evaluate; HA, hearing aid; Hrs, average hours per day; L, left ear; Nuc, nucleus; R/L PTA, right and left ear pure tone average at 500, 1,000, and 2,000 Hz with the CIs; R, right ear; Seq, sequential; Sim, simultaneous.
a Hours of CI use were estimated because data logging records were unavailable. Percentage of therapy and audiology refer to the percentage of scheduled visits that were attended.
#
Study Design and Test Measures
This study included a review of patient records and a series of behavioral measures approved by the Western Institutional Review Board. Demographic variables were collected through retrospective chart review and included: chronological age at test, age at implantation, age at first hearing aid, percentage of auditory-verbal (AV) therapy appointments kept, percentage of audiology appointments kept, and daily data logging information.
As recommended by the working group that developed the Pediatric Minimum Speech Test Battery (PMSTB) protocol,[11] word recognition in quiet was evaluated with the consonant–nucleus–consonant (CNC) test[12] at a presentation level of 60 dBA (decibels A-weighted) in each unilateral CI condition and also in the bilateral CI condition, when applicable. Although the PMSTB suggests the use of the BabyBio or AzBio for speech recognition in noise, the AzBio was selected to avoid ceiling effects in quiet and noise that occur in some 5- to 6-year-old children.[13] [14] [15] [16] Sentences were presented at 60 dBA in quiet and at two signal-to-noise ratios (SNRs) in multitalker babble with speech at 65 dBA and babble at 55 dBA (+10 dB SNR) or babble at 60 dBA (+5 dB SNR). AzBio sentence recognition testing was only completed in the bilateral CI condition for bilateral users and in the unilateral condition for the bimodal and unilateral CI users. The hearing aid was removed for all testing, and the nonimplanted ear was occluded with a foam ear plug.
Additionally, to ensure each group had sufficient and similar audibility of the speech stimuli presented in this study, all the children's aided sound-field detection thresholds for warble tones at octave frequencies from 250 to 6,000 Hz were measured for each implanted ear using a modified Hughson–Westlake method-of-limits procedure. Warble tones were delivered from a Grason Stadler Industries (GSI) 61 audiometer and presented from a GSI sound-field loudspeaker located 1 m directly in front of the participant (0 degree azimuth) while the participants used their CIs.
Results
Sample Characteristics Differentiating Low and High Language Groups
As shown in [Tables 2] and [3], average demographic characteristics between groups differed for some variables, and statistical analyses with independent samples t-tests (two-tailed) yielded several significant findings. First, as expected given the group cutoff score, the Low Language group exhibited poorer CELF scores than children in the High Language group (t[48] = −14.6, p < 0.001), a large difference of ∼46 points. In addition, children in the Low Language group were older than children in the High Language group (t[48] = 3.1, p = 0.003).
The Low Language group was fitted with hearing aids at a later age than children in the High Language group (t[48] = 3.9, p < 0.001), a difference of ∼10 months. The Low Language group also had a later age at first CI than children in the High Language group (t[48] = 2.3, p = 0.028) by ∼7 months. CI experience (i.e., age at testing − age at implant) also was significantly different between the Low Language (mean [M] = 11.6 years; standard deviation [SD] = 2.3) and High Language (M = 9.7 years; SD = 2.9) groups (t[48] = 2.4, p = 0.019). In the children with bilateral implants, there was no significant group difference in the time interval between implants (t[45] = 0.2, p = 0.814).
Duration of daily implant use (∼13 hours per day) was not significantly different between the groups (t[46] = −1.4, p = 0.161). The High Language group had better attendance at both AV therapy (t[38] = −4.5, p < 0.001) and audiology appointments (t[48] = −3.7, p = 0.001). Finally, aided sound-field warble tone threshold data were analyzed with a repeated measures analysis of variance (ANOVA). There was no significant main effect of group (F[1, 570] = 0.96, p = 0.33) or ear (F[1, 570] = 0.24, p = 0.63), suggesting similar aided thresholds for the two groups.
Speech Recognition of the Low and High Language Groups
Average percent-correct speech recognition performance in the CNC and AzBio test conditions is shown in [Figs. 1] and [2], respectively, and individual data are provided in [Appendices A] and [B]. Because some participants performed at ceiling and data from some of the test conditions were not normally distributed according to a Shapiro–Wilk test, all data were arcsine transformed prior to analysis.
Abbreviations for [Figs. 1] and [2]: CI, cochlear implant; CNC, consonant–nucleus–consonant; SD, standard deviation. Note: Period (.) indicates missing data due to unilateral implantation.
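The variance-stabilizing transform described above can be sketched in a few lines of Python. This is a minimal illustration of the classic arcsine (angular) transform for proportion data; the paper does not specify which variant of the transform the authors used, so this is an assumption for demonstration only:

```python
import math

def arcsine_transform(percent_correct):
    """Variance-stabilizing arcsine transform for a percent-correct score.

    Maps a proportion p in [0, 1] to 2 * asin(sqrt(p)), the classic
    angular transform applied to binomial proportions before
    parametric analysis.
    """
    p = percent_correct / 100.0
    if not 0.0 <= p <= 1.0:
        raise ValueError("percent_correct must lie between 0 and 100")
    return 2.0 * math.asin(math.sqrt(p))

# Scores compressed near ceiling on the raw percent scale are spread
# out on the transformed scale, which is why the transform is applied
# before parametric tests when some children score at or near 100%.
scores = [50.0, 90.0, 100.0]
transformed = [arcsine_transform(s) for s in scores]
```

On this scale a score of 100% maps to pi (≈3.14) and 50% maps to pi/2 (≈1.57), so the upper end of the range is no longer compressed.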
Word Recognition Results
The CNC data ([Fig. 1]) were analyzed with three separate Kruskal–Wallis nonparametric tests to compare the scores in the two groups because several conditions had nonnormal distributions even after the data were arcsine transformed. Ten data points were missing due to the four unilateral participants and missing scores from one participant in the Low Language group (8A). These analyses indicated significantly higher word recognition for the High Language group in the right ear (H[1] = 7.3, p < 0.01), left ear (H[1] = 18.8, p < 0.0001), and bilateral condition (H[1] = 20.3, p < 0.0001).
Sentence Recognition Results
Data in the AzBio conditions ([Fig. 2]) yielded normal distributions after the arcsine transform and were analyzed with a repeated measures ANOVA. This analysis yielded a significant main effect of language group (F[1, 149] = 49.3, p < 0.0001) and a significant main effect of test condition (F[2, 149] = 83.5, p < 0.0001), with no significant interaction between language group and test condition (F[2, 149] = 0.54, p > 0.05). Post hoc analyses with Bonferroni correction revealed significant differences between the groups, with the High Language group showing higher scores across conditions. In addition, significant differences were found across all conditions, with the best scores in the quiet condition followed by the +10 dB SNR and +5 dB SNR conditions ([Fig. 2]).
Intervention and Demographic Factors Associated with Speech Recognition
Separate linear mixed effects regression analyses were performed to examine whether (1) performance on the CELF-5 is predictive of best CNC word recognition in quiet and AzBio sentence recognition in quiet and noise or (2) adherence to programming and therapy schedules affects speech recognition outcomes in pediatric CI recipients. In all analyses, regression assumptions were met, and all variables were entered simultaneously. Statistical analyses are summarized in [Tables 4] and [5], including beta and significance values.
Table 4. Linear mixed effects regression results predicting best speech recognition scores (model fits: CNC R² = 0.60; AzBio quiet R² = 0.79; AzBio +10 SNR R² = 0.77; AzBio +5 SNR R² = 0.75)

| Variable | CNC β | CNC F | AzBio quiet β | AzBio quiet F | AzBio +10 SNR β | AzBio +10 SNR F | AzBio +5 SNR β | AzBio +5 SNR F |
|---|---|---|---|---|---|---|---|---|
| Intercept | – | 5,276.8[a] | – | 3,386.1[a] | – | 1,896.6[a] | – | 782.4[a] |
| Age at test | 0.18 | 3.2 | 0.16 | 0.07 | 0.19 | 1.03 | 0.18 | 1.3 |
| Age at first CI | 0.09 | 0.53 | 0.05 | 1.29 | 0.06 | 0.17 | 0.09 | 2.1 |
| Age at HA | −0.15 | 5.4[a] | 0.03 | 3.9[a] | −0.06 | 6.9[a] | −0.17 | 7.7[a] |
| CELF-5 score | 0.11 | 10.4[a] | 0.5 | 64.2[a] | 0.56 | 45.9[a] | 0.68 | 37.12[a] |
| % therapy attended | 0.05 | 10.8[a] | 0.07 | 36.9[a] | 0.15 | 36.4[a] | 0.26 | 34.9[a] |
| % audiology attended | 0.27 | 3.3 | 0.1 | 0.40 | 0.3 | 1.06 | 0.29 | 0.6 |
| Data logging hours | 2.1 | 10.4[a] | 0.46 | 0.41 | 2.8 | 6.7[a] | 3.04 | 4.6[a] |
Abbreviations: CI, cochlear implant; CNC, consonant–nucleus–consonant; HA, hearing aid; CELF-5, Clinical Evaluation of Language Fundamentals – Fifth Edition; SNR, signal-to-noise ratio.
a p < 0.05.
In the regression models, CI recipient was treated as a random effect using a random intercept to control for baseline differences across pediatric patients. Age at first hearing aid (in months), age at first CI (in months), and age at test (in months) were included as block variables in all models to control for auditory experience and developmental factors known to contribute to speech recognition outcomes.[21] CELF-5 language score, percentage of AV therapy and audiology appointments kept, and data logging hours ([Table 4]) were also included as fixed effects.
#
CNC Regression Results
Regression results predicting best CNC score are displayed in [Table 4]. When controlling for age at test, age at first hearing aid, and age at first CI, CELF-5 language score and data logging hours were significant predictors of CNC scores in quiet. Results indicate that CNC scores are expected to increase by 0.11% for every unit increase in CELF score ([Fig. 3]). Likewise, CNC scores are expected to increase by 2.1% for every additional hour of processor usage time.
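As a back-of-the-envelope check, the reported slopes can be turned directly into predicted score changes. The helper below is hypothetical (not the authors' code); it simply applies the two betas reported above as a linear extrapolation, ignoring the model's other terms and any ceiling effects:

```python
# Reported slopes from the CNC regression (percentage points per unit).
CELF_SLOPE_CNC = 0.11   # % change in CNC per CELF-5 point
HOURS_SLOPE_CNC = 2.1   # % change in CNC per daily hour of processor use

def predicted_cnc_change(delta_celf=0.0, delta_hours=0.0):
    """Linear change in CNC score implied by the reported betas."""
    return CELF_SLOPE_CNC * delta_celf + HOURS_SLOPE_CNC * delta_hours

# Example: a 15-point CELF-5 gain (one standard deviation on the test)
# plus one extra hour of daily processor use predicts a CNC score
# roughly 3.75 percentage points higher, all else held constant.
change = predicted_cnc_change(delta_celf=15, delta_hours=1)
```

This makes the relative weight of the two predictors concrete: on these slopes, one additional hour of daily wear time is worth about as much as a 19-point CELF-5 gain.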
Examination of the CNC scores in each language group suggests that children in the High Language group perform near ceiling on the measure ([Fig. 3]). Thus, post hoc regression analyses were performed on CNC scores separately for children in the Low Language and High Language groups with only CELF-5 score as a fixed effect in the models. Results suggest CNC scores significantly increased by 0.58% per CELF-5 unit in the Low Language group, while CELF-5 had no significant effect (0.01% change in CNC score per CELF-5 unit) in the High Language group ([Table 5]).
Table 5. Post hoc regression results by language group

Low Language group (model fits: CNC R² = 0.55; AzBio quiet R² = 0.53; AzBio +10 SNR R² = 0.55; AzBio +5 SNR R² = 0.49)

| Variable | CNC β | CNC F | AzBio quiet β | AzBio quiet F | AzBio +10 SNR β | AzBio +10 SNR F | AzBio +5 SNR β | AzBio +5 SNR F |
|---|---|---|---|---|---|---|---|---|
| Intercept | – | 2,230.7[a] | – | 342.58[a] | – | 276.8[a] | – | 146.4[a] |
| CELF-5 score | 0.58 | 14.94[a] | 1.44 | 23.2[a] | 1.5 | 26.02[a] | 1.4 | 19.9[a] |

High Language group (model fits: CNC R² = 0.05; AzBio quiet R² = 0.04; AzBio +10 SNR R² = 0.05; AzBio +5 SNR R² = 0.0002)

| Variable | CNC β | CNC F | AzBio quiet β | AzBio quiet F | AzBio +10 SNR β | AzBio +10 SNR F | AzBio +5 SNR β | AzBio +5 SNR F |
|---|---|---|---|---|---|---|---|---|
| Intercept | – | 26,376[a] | – | 15,786.4[a] | – | 4,864.7[a] | – | 1,192.4 |
| CELF-5 score | 0.001 | 1.25 | 0.04 | 1.02 | 0.08 | 1.12 | 0.01 | 0.005 |
Abbreviations: CI, cochlear implant; CNC, consonant–nucleus–consonant; HA, hearing aid; CELF-5, Clinical Evaluation of Language Fundamentals – Fifth Edition; SNR, signal-to-noise ratio.
a p < 0.05.
#
AzBio Regression Results
Linear mixed effects regression results predicting best AzBio score in quiet and noise are displayed in [Table 4]. In quiet, when controlling for age at test, age at first hearing aid, and age at first CI, CELF-5 language score was a significant predictor of AzBio sentence scores. Sentence recognition scores in quiet are expected to increase by 0.5% for every unit increase in CELF score ([Fig. 4A]). Data logging hours were not a significant predictor of AzBio scores in quiet, although the nonsignificant estimate corresponded to a 0.41% increase in quiet scores for every additional hour of wear time.
In both noise conditions, when controlling for age at test, age at first hearing aid, and age at first CI, the following variables were significant predictors of sentence recognition in noise scores: CELF-5 language score, percentage of AV therapy, and data logging hours. For the +10 ([Fig. 4B]) and +5 dB ([Fig. 4C]) SNR conditions, AzBio scores are expected to increase by 0.56 and 0.68% for every one unit increase in CELF unit, respectively. Sentence in noise scores are predicted to increase by 3% for every additional hour of wear time ([Fig. 5A]). Finally, higher percentage of therapy appointments kept is also estimated to produce higher AzBio in noise scores ([Fig. 5B]).
Similar to the CNC analysis, post hoc linear mixed effects analyses were performed on each language group separately to examine more closely the increase in AzBio sentence scores in quiet and noise as a function of CELF-5 unit. In the separate Low Language group analyses, the rise in AzBio score per CELF-5 unit was as follows: 1.44% in quiet, 1.5% at +10 dB SNR, and 1.4% at +5 dB SNR ([Table 5]). CELF-5 scores in the Low Language group ranged from 40 to 85, which is associated with an approximately 65% increase in AzBio score over this range, regardless of test condition (quiet vs. noise). In contrast, for the High Language group, CELF-5 score had practically no effect on AzBio score in quiet (+0.04% per CELF-5 unit), at +10 dB SNR (+0.08% per CELF-5 unit), or at +5 dB SNR (+0.01% per CELF-5 unit) ([Table 5]).
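The ~65% figure quoted above follows directly from the reported slopes and the observed CELF-5 range, as the short sketch below verifies (a hypothetical helper for arithmetic only, using the Low Language group slopes from Table 5):

```python
# Reported per-unit CELF-5 slopes for the Low Language group (Table 5),
# in AzBio percentage points per CELF-5 standard-score point.
SLOPES = {"quiet": 1.44, "+10 dB SNR": 1.5, "+5 dB SNR": 1.4}

# Observed CELF-5 range in the Low Language group.
CELF_MIN, CELF_MAX = 40, 85

def azbio_gain_over_range(slope, low=CELF_MIN, high=CELF_MAX):
    """Predicted AzBio score increase (in % points) across a CELF-5 range."""
    return slope * (high - low)

# A 45-unit CELF-5 range times ~1.4-1.5%/unit gives roughly 63-68%
# in every condition, consistent with the ~65% figure in the text.
gains = {cond: azbio_gain_over_range(s) for cond, s in SLOPES.items()}
```

The same arithmetic applied to the High Language slopes (0.01 to 0.08%/unit) yields gains of only a few percentage points, which is why CELF-5 score is described as having practically no effect in that group.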
Discussion
This study compared speech recognition in quiet and noise in children with higher and lower language scores and explored how demographic factors impacted performance. Overall, significant group differences were found for all speech recognition conditions ([Figs. 1] and [2]). In particular, the children in the High Language group showed resistance to the presence of competing noise, as evidenced by their high average sentence recognition at both the +10 and +5 dB SNRs. Conversely, the sentence recognition of the children in the Low Language group decreased considerably in the presence of noise. In addition to the language differences, these findings may be related to the earlier age at first hearing aid, shorter duration of deafness, and earlier age at implantation (i.e., age at first CI) in the High Language group. For the Low Language group, the average age of hearing aid fitting was 10 months later and the average age at first CI was 7 months later. This longer period of auditory deprivation during the critical period of language development may have resulted in greater speech-in-noise processing deficits. Alternatively, it is possible that children whose implants allowed them to perceive speech in the presence of noise were better able to develop language.
Regarding the language disparities between the groups, Caldwell and Nittrouer (2013)[22] and Davidson et al[5] reported higher word recognition scores for pediatric implant recipients with higher language abilities. Davidson et al (2011) also found that children with CIs experienced a subceiling plateau in their word recognition scores at a language age of 10 to 11 years. Similarly, in the present study, children with poorer word and sentence recognition performance in quiet also had poorer CELF-5 scores, even though the Low Language group had a longer average duration of implant use ([Table 2]).
Children in the High Language group attended significantly more AV therapy appointments (85 vs. 66% attendance) and audiology appointments (91 vs. 80% attendance) than the children in the Low Language group. Although audiology appointment attendance was not a significant predictor, AV therapy attendance significantly predicted speech recognition performance in all testing conditions. As the High Language group had higher AV therapy attendance, the speech recognition results could be a byproduct of group membership or of an associated variable such as family support, socioeconomic status, richness of the language environment at home, or participation in other types of early intervention. While this study cannot determine whether higher AV therapy attendance rates lead to better language outcomes, it is important not to discount the positive effects parental involvement (i.e., higher attendance at therapy and programming appointments) has on the hearing habilitation process.
Average hours of implant use per day (data logging) was a significant predictor of word recognition and of sentence recognition in noise, with longer usage predicting better outcomes. Data logging records obtained from the participants' most recent audiology appointments indicated that the children in the Low and High Language groups used their CIs for a similar number of hours per day ([Tables 2] and [3]). As the two language groups did not significantly differ in processor wear time, daily use of the CI appears to be a factor independent of language group allocation. Although no group difference was found in the present study, Busch et al[17] and Park et al[6] reported better receptive vocabulary and language abilities, respectively, for children whose data logging records indicated more hours of CI use during their early years of life. However, it should be noted that the impact of individual wear time hours ([Tables 2] and [3]) varies across patients and likely relates to other demographic factors.
Additionally, it should be noted that differences in speech recognition between the children in the Low Language group versus those in the High Language group may be due to greater use of bilateral CIs by the High Language group. Specifically, five of the children in the Low Language group were tested with only a unilateral CI, whereas all the children in the High Language group used bilateral CIs. Previous research has shown better speech recognition in quiet and in noise with bilateral CIs relative to unilateral CI use.[18] Furthermore, six of the children in the High Language group were simultaneously implanted, whereas only three of the bilateral CI users from the Low Language group were simultaneously implanted. Previous studies have found better speech recognition in noise for children who receive bilateral CIs in a simultaneous procedure relative to those who receive bilateral CIs in sequential procedures, particularly when there is a longer delay between implantation of the two ears (e.g., when more than 12 months elapse between implantation of the first and second ears).[18] [19] [20] It should, however, be noted that there was not a statistically significant difference between the Low Language and High Language groups in the mean time interval between implantation of the first and second ears.
Limitations to this study are related primarily to the ceiling effects measured in the quiet test conditions in the High Language group. As a result, group differences may be even larger than could be measured in the present study. Also, data were missing for some participants (e.g., hours of CI use, percentage of AV therapy sessions attended, etc.). Other limitations relate to small sample size and demographic differences between the two groups. We analyzed percentage of AV therapy appointments attended because it likely relates to the family's adherence to intervention recommendations, although this may not be a perfect predictor of family support. Additional research is needed to explore the relationship of intervention dosage and CI outcomes.
Clinical Implications
The results of this study are relevant to all professionals who serve children with CIs because they highlight the importance of language ability, consistent CI use, and participation in AV therapy. A team approach, including the family, helps ensure that the child receives all the necessary counseling, recommendations, and therapies to achieve optimal outcomes. The team may need to consider individualized accommodations to support success, such as a child-focused reward system for consistent CI use, transportation to and from appointments, and educational accommodations (e.g., remote microphone technology). Additionally, performance on commonly used sentence recognition-in-noise tests is influenced by language aptitude, with poorer performance observed for school-aged children with low language aptitude.
#
#
Conclusion
Children's language abilities and demographic factors explain significant variability in speech recognition outcomes in quiet and noise for children with CIs. Factors associated with speech recognition include language aptitude, attendance at AV therapy appointments, and consistent use of the CI during all waking hours.
#
#
Conflict of Interest
None declared.
Acknowledgement
This work was supported by a grant from the Oberkotter Foundation. We would like to thank Teresa Caraway and Bruce Rosenfield for their support and would also like to thank the reviewers of this manuscript for their helpful comments regarding an earlier version of this paper.
Disclaimer
Any mention of a product, service, or procedure in the Journal of the American Academy of Audiology does not constitute an endorsement of the product, service, or procedure by the American Academy of Audiology.
References
- 1 Ching TYC, Dillon H, Leigh G, Cupples L. Learning from the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study: summary of 5-year findings and implications. Int J Audiol 2018; 57 (sup2): S105-S111
- 2 Dettman SJ, Dowell RC, Choo D. et al. Long-term communication outcomes for children receiving cochlear implants younger than 12 months: a multicenter study. Otol Neurotol 2016; 37 (02) e82-e95
- 3 Eisenberg LS, Fisher LM, Johnson KC, Ganguly DH, Grace T, Niparko JK. CDaCI Investigative Team. Sentence recognition in quiet and noise by pediatric cochlear implant users: relationships to spoken language. Otol Neurotol 2016; 37 (02) e75-e81
- 4 Geers AE, Mitchell CM, Warner-Czyz A, Wang NY, Eisenberg LS. CDaCI Investigative Team. Early sign language exposure and cochlear implantation benefits. Pediatrics 2017; 140 (01) e20163489
- 5 Davidson LS, Geers AE, Blamey PJ, Tobey EA, Brenner CA. Factors contributing to speech perception scores in long-term pediatric cochlear implant users. Ear Hear 2011; 32 (1, Suppl): 19S-26S
- 6 Park LR, Gagnon EB, Thompson E, Brown KD. Age at full-time use predicts language outcomes better than age of surgery in children who use cochlear implants. Am J Audiol 2019; 28 (04) 986-992
- 7 Gagnon EB, Eskridge H, Brown KD. Pediatric cochlear implant wear time and early language development. Cochlear Implants Int 2020; 21 (02) 92-97
- 8 Easwar V, Sanfilippo J, Papsin B, Gordon K. Impact of consistency in daily device use on speech perception abilities in children with cochlear implants: datalogging evidence. J Am Acad Audiol 2018; 29 (09) 835-846
- 9 Wiig EH, Semel E, Secord WA. Clinical Evaluation of Language Fundamentals – Fifth Edition (CELF-5). Bloomington, MN: NCS Pearson; 2013
- 10 Geers AE, Moog JS, Rudge AM. Effect of frequency of early intervention on spoken language and literacy levels of children who are deaf or hard of hearing in preschool and elementary school. J Early Hear Detect Interv 2019; 4 (01) 15-27
- 11 Uhler K, Warner-Czyz A, Gifford R, Working Group P. Pediatric minimum speech test battery. J Am Acad Audiol 2017; 28 (03) 232-247
- 12 Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Disord 1962; 27: 62-70
- 13 Holder JT, Sheffield SW, Gifford RH. Speech understanding in children with normal hearing: sound field normative data for BabyBio, the BKB-SIN, and QuickSIN. Otol Neurotol 2016; 37 (02) e50-e55
- 14 Spahr AJ, Dorman MF, Litvak LM. et al. Development and validation of the pediatric AzBio sentence lists. Ear Hear 2014; 35 (04) 418-422
- 15 Spahr AJ, Dorman MF, Litvak LM. et al. Development and validation of the AzBio sentence lists. Ear Hear 2012; 33 (01) 112-117
- 16 Wolfe J, Neumann S, Schafer E, Marsh M, Wood M, Baker RS. Potential benefits of an integrated electric-acoustic (EAS) sound processor with children: a preliminary report. J Am Acad Audiol 2017; 28 (02) 127-140
- 17 Busch T, Vermeulen A, Langereis M, Vanpoucke F, van Wieringen A. Cochlear implant data logs predict children's receptive vocabulary. Ear Hear 2020; 41 (04) 733-746
- 18 Sharma SD, Cushing SL, Papsin BC, Gordon KA. Hearing and speech benefits of cochlear implantation in children: a review of the literature. Int J Pediatr Otorhinolaryngol 2020; 133: 109984
- 19 Chadha NK, Papsin BC, Jiwani S, Gordon KA. Speech detection in noise and spatial unmasking in children with simultaneous versus sequential bilateral cochlear implants. Otol Neurotol 2011; 32 (07) 1057-1064
- 20 Gordon KA, Papsin BC. Benefits of short interimplant delays in children receiving bilateral cochlear implants. Otol Neurotol 2009; 30 (03) 319-331
- 21 Davidson LS, Geers AE, Uchanski RM. et al. Effects of early acoustic hearing on speech perception and language for pediatric cochlear implant recipients. J Speech Lang Hear Res 2019; 62 (09) 3620-3637
- 22 Caldwell A, Nittrouer S. Speech perception in noise by children with cochlear implants. J Speech Lang Hear Res 2013; 56: 13-30
- 23 Geers A, Brenner C, Davidson L. Factors associated with development of speech perception skills in children implanted by age five. Ear Hear 2003; 24: 24S-35S
- 24 Tajudeen BA, Waltzman SB, Jethanamest D, Svirsky MA. Speech perception in congenitally deaf children receiving cochlear implants in the first year of life. Otol Neurotol 2010; 31: 1254-1260
- 25 Dettman S, Wall E, Constantinescu G, Dowell R. Communication outcomes for groups of children using cochlear implants enrolled in auditory-verbal therapy, aural-oral, and bilingual-bicultural early intervention programs. Otol Neurotol 2013; 34: 451-459
Publication History
Received: 08 February 2021
Accepted: 22 March 2021
Article published online:
30 November 2021
© 2021. American Academy of Audiology. This article is published by Thieme.
Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA