J Am Acad Audiol 2018; 29(09): 847-854
DOI: 10.3766/jaaa.17061
Articles
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination in Infants with and without Hearing Loss: Presentation Level

Kristin M. Uhler,* René H. Gifford,† Jeri E. Forster,‡ Melinda Anderson,§ Elyse Tierney,§ Stacy D. Claycomb,** Lynne A. Werner††

*   University of Colorado Denver School of Medicine, Children’s Hospital Colorado, Aurora, CO
†   Vanderbilt University School of Medicine, Nashville, TN
‡   Rocky Mountain Mental Illness, Research, Education and Clinical Center, Denver VA Medical Center and University of Colorado Denver School of Medicine, Aurora, CO
§   University of Colorado Denver School of Medicine, Aurora, CO
**  University of Colorado Health, Aurora, CO
††  University of Washington, Seattle, WA

Publication History

Publication date: 29 May 2020 (online)


INTRODUCTION

Universal newborn hearing screening has led to a decrease in the average age at identification and treatment of hearing loss (HL) ([Harrison and Roush, 1996]; [Moeller, 2000]; [Holte et al, 2012]; [Uhler et al, 2016]). Despite identification at earlier ages, there continue to be gaps in language outcomes between children with HL and their peers with normal hearing (NH). Studies that have followed children with HL from infancy through early school age have identified two important predictors of outcome: amount of hearing aid use ([Moeller et al, 2009]; [Walker et al, 2013]; [Walker et al, 2015]) and the quality of hearing aid fittings ([McCreery et al, 2013]; [2015]), suggesting that the quantity and quality of speech input are critical variables. Moreover, a higher aided speech intelligibility index (SII) is associated with better later language in preschool children ([Tomblin et al, 2014]; [2015]) and improved word recognition in school-aged children ([Stiles et al, 2012]).

The current clinical best practice of performing real-ear measures only verifies hearing aid output in the ear canal. This measure alone cannot ensure that amplification is providing infants and young toddlers with the information needed to discriminate between speech sounds—a prerequisite for learning spoken language ([Tsao et al, 2004]; [Tomblin et al, 2014]; [2015]). A clinically useful tool for directly assessing speech discrimination in infancy could help to determine that infants and toddlers with HL are fitted appropriately. Currently, the most commonly used tools for assessing speech perception in infants and toddlers are parent questionnaires ([Uhler and Gifford, 2014]), which are not objective measures of speech discrimination.

A clinically useful tool capable of assessing speech discrimination in infancy has been available since 1989 ([Gravel, 1989]). Visual Reinforcement Infant Speech Discrimination (VRISD) uses a conditioned head turn task similar to visual reinforcement audiometry (VRA). However, rather than being conditioned to respond to the presence of a tone or speech, as in VRA, the infant in VRISD is conditioned to turn his or her head when the stimulus changes. VRISD has been used primarily in research laboratories, despite its relative familiarity in clinical audiology as a derivative of VRA. A lack of clinical guidelines and normative data may be one reason that VRISD has not seen widespread clinical adoption.

Establishing appropriate presentation levels is one prerequisite for the clinical application of VRISD. Nozza and colleagues showed that the relationship between speech discrimination performance in VRISD and presentation level differs between infants and adults ([Nozza and Wilson, 1984]; [Nozza, 1987]; [Nozza et al, 1991]; [Nozza, 2000]). They found that NH infants between 6 and 8 months of age required a higher presentation level in quiet and a more favorable signal-to-noise ratio than NH adults to attain maximum performance. Furthermore, [Nozza (2000)] reported that the lowest sensation level (i.e., level relative to individual detection threshold) at which infants could discriminate between /ba/ and /da/ was 20–25 dB compared with 10–15 dB for adults. These findings suggest that the typical procedure of assessing speech perception in infants at the same intensity level as used for adults may underestimate infant speech perception abilities ([Eilers et al, 1977]; [1981]; [Martinez et al, 2008]; [Fredrickson, 2010]; [Uhler et al, 2011]).
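The sensation-level comparison above reduces to a simple calculation: sensation level is the presentation level minus the individual's detection threshold. The sketch below illustrates this relationship; the function name and the example values are illustrative only, not data from the studies cited:

```python
def sensation_level(presentation_db: float, threshold_db: float) -> float:
    """Return sensation level (dB SL): presentation level re: the
    listener's detection threshold. Values here are illustrative."""
    return presentation_db - threshold_db

# Speech presented at 45 dB HL to a listener with a 20 dB HL
# detection threshold arrives at 25 dB SL -- within the 20-25 dB SL
# range Nozza (2000) reported as the minimum for infant /ba/-/da/
# discrimination, but well above the 10-15 dB SL adults required.
print(sensation_level(45, 20))  # 25
```

Because sensation level is referenced to each listener's own threshold, the same presentation level in dB HL can yield very different sensation levels across infants, which is one reason a fixed adult-style presentation level may underestimate infant abilities.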

In a recent VRISD study, [Uhler et al (2015)] showed that the level at which NH infants successfully discriminated /a-i/ and /ba-da/ ranged from 35 to 70 dB SL. NH infants needed a higher presentation level to discriminate /ba-da/ than /a-i/, and, consistent with the results of [Nozza (1987)], 29% were unable to discriminate /ba-da/ even at the highest presentation level (70 dBA). NH infants who did not reach criterion on one or both contrasts did not differ significantly in age, gender, or audiometric thresholds from the infants who reached criterion. Thus, there is some inherent variability in the mastery of /ba-da/ discrimination, even for infants with NH, making it all the more important to directly evaluate infants with HL.

The goal of the current study was to extend the previous work with NH infants to infants and toddlers with HL. We addressed four primary research questions, framed with a view toward clinical utility and ecological validity. First, what is the presentation level at which most infants reach criterion for speech discrimination? Second, does the criterion presentation level differ between infants with HL and infants with NH? Third, does the criterion presentation level differ for the /a-i/ contrast compared with /ba-da/? Finally, to assess whether VRISD tells us something about the quality of sound input beyond that provided by currently available measures, we investigated the relationship between aided SII and speech discrimination for infants who use hearing aids (HAs).

This project was funded by the American Academy of Audiology/American Academy of Audiology Foundation Research Grants Program and grant funding from NIH NIDCD DC013583 to KU.


Parts of this work were presented at the ARC in Indianapolis, IN, April 2017.