Semin Hear 2002; 23(1): 057-076
DOI: 10.1055/s-2002-24976
Copyright © 2002 by Thieme Medical Publishers, Inc., 333 Seventh Avenue, New York, NY 10001, USA. Tel.: +1(212) 584-4662

New Thinking on Hearing in Noise: A Generalized Articulation Index

Mead C. Killion
  • Private practice, Victoria, Australia. www.hearingvision.com

Publication History

Publication Date:
11 April 2002 (online)


ABSTRACT

Articulation index (AI) theory predicts word recognition scores when the number of speech cues has been reduced by noise or by lack of audibility. It fails, however, to predict how poorly some subjects perform in noise even when all speech cues have been made audible with amplification. Such subjects require an unusually large signal-to-noise ratio (SNR) for a given performance level and are said to have a large SNR loss. We have found that the AI can be generalized to predict word recognition scores in the case of missing (speech cue) dots: some speech cues appear to be lost on the way to the brain even though they were audible. Corliss[1] suggested the term channel capacity to describe this phenomenon, and we adopt that term for our use. This article describes the substantial psychoacoustic and physiological evidence in favor of this generalized AI. Perhaps the strongest evidence is that (1) subjects' SNR loss is poorly predicted by the degree of their audiometric loss, and (2) their wideband word-recognition performance in noise can be predicted from the channel capacity inferred from filtered-speech experiments.
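To make the "count-the-dots" flavor of AI calculation concrete, the following is a minimal sketch (not the published chart): each speech cue is a "dot" at some presentation level, and the AI is simply the fraction of dots that remain audible above both the listener's threshold and the noise floor. The dot levels, threshold, and noise values below are illustrative assumptions, not data from this article.

```python
def articulation_index(dot_levels_db, threshold_db, noise_db):
    """Fraction of speech-cue 'dots' audible above both the
    listener's threshold and the noise floor (simplified sketch;
    a real count-the-dots chart assigns dots per frequency band)."""
    audible = sum(
        1 for level in dot_levels_db
        if level > threshold_db and level > noise_db
    )
    return audible / len(dot_levels_db)

# Illustrative example: 10 dots spanning 20-65 dB.
# With a 30 dB threshold and a 40 dB noise floor, only the
# 5 dots above 40 dB survive, giving AI = 0.5.
dots = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
print(articulation_index(dots, threshold_db=30, noise_db=40))  # 0.5
```

In this simplified picture, the generalization described in the abstract amounts to noting that some audible dots still fail to reach the brain, so the effective dot count for a subject with reduced channel capacity is smaller than the audibility calculation alone would predict.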