J Am Acad Audiol 2019; 30(07): 607-618
DOI: 10.3766/jaaa.17131
Articles

Speech Recognition in Noise in Single-Sided Deaf Cochlear Implant Recipients Using Digital Remote Wireless Microphone Technology

Thomas Wesarg*, Susan Arndt*, Konstantin Wiebe*, Frauke Schmid*†, Annika Huber*†, Hans E. Mülder‡, Roland Laszig*, Antje Aschendorff*, Iva Speck*

*   Department of Otorhinolaryngology—Head and Neck Surgery, Medical Center—University of Freiburg, Faculty of Medicine, Freiburg, Germany
†   University of Applied Sciences Offenburg, Offenburg, Germany
‡   Phonak Communications AG, Murten, Switzerland

Corresponding author

Thomas Wesarg
Department of Otorhinolaryngology—Head and Neck Surgery, Medical Center—University of Freiburg
Faculty of Medicine, Freiburg, Germany

Publication History

09 April 2018

29 May 2018

Publication Date:
25 May 2020 (online)

 

Abstract

Background:

Previous research in cochlear implant (CI) recipients with bilateral severe-to-profound sensorineural hearing loss showed improvements in speech recognition in noise using remote wireless microphone systems. However, to our knowledge, no previous studies have addressed the benefit of these systems in CI recipients with single-sided deafness.

Purpose:

The objective of this study was to evaluate the potential improvement in speech recognition in noise for distant speakers in single-sided deaf (SSD) CI recipients obtained using the digital remote wireless microphone system, Roger. In addition, we evaluated the potential benefit in normal hearing (NH) participants gained by applying this system.

Research Design:

Speech recognition in noise for a distant speaker was evaluated in different conditions with and without Roger using a two-way repeated-measures design in each group (SSD CI recipients and NH participants). Post hoc analyses were conducted using pairwise comparison t-tests with Bonferroni correction.

Study Sample:

Eleven adult SSD participants aided with CIs and eleven adult NH participants were included in this study.

Data Collection and Analysis:

All participants were assessed in 15 test conditions (5 listening conditions × 3 noise levels) each. The listening conditions for SSD CI recipients included the following: (I) only NH ear and CI turned off, (II) NH ear and CI (turned on), (III) NH ear and CI with Roger 14, (IV) NH ear with Roger Focus and CI, and (V) NH ear with Roger Focus and CI with Roger 14. For the NH participants, five corresponding listening conditions were chosen: (I) only better ear and weaker ear masked, (II) both ears, (III) better ear and weaker ear with Roger Focus, (IV) better ear with Roger Focus and weaker ear, and (V) both ears with Roger Focus. The speech level was fixed at 65 dB(A) at 1 meter from the speech-presenting loudspeaker, yielding a speech level of 56.5 dB(A) at the recipient's head. Noise levels were 55, 65, and 75 dB(A). Digitally altered noise recorded in school classrooms was used as competing noise. Speech recognition was measured in percent correct using the Oldenburg sentence test.

Results:

In SSD CI recipients, a significant improvement in speech recognition was found for all listening conditions with Roger (III, IV, and V) versus all no-Roger conditions (I and II) at the higher noise levels (65 and 75 dB[A]). NH participants also benefited significantly from the application of Roger at the higher noise levels. In both groups, no significant difference was detected between any of the listening conditions at 55 dB(A) competing noise, and there was no significant difference between any of the Roger conditions III, IV, and V at any noise level.

Conclusions:

The application of the advanced remote wireless microphone system, Roger, in SSD CI recipients provided significant benefits in speech recognition for distant speakers at higher noise levels. In NH participants, the application of Roger also produced a significant benefit in speech recognition in noise.



INTRODUCTION

Background

Many single-sided deaf (SSD) individuals report difficulties with speech recognition in competing noise and localization of sound sources ([Wie et al, 2010]). Since 2008, SSD patients have been successfully treated with cochlear implants (CIs), initially as a therapy for chronic tinnitus ([Van de Heyning et al, 2008]). In addition to the therapeutic effect on the tinnitus, implanted patients reported a subjective improvement in hearing and speech recognition abilities ([Vermeire and Van de Heyning, 2009]; [Buechner et al, 2010]; [Arndt et al, 2011]). Various studies have shown that SSD patients experience an objectively measurable improvement in speech recognition in noise and localization of sound sources after cochlear implantation ([Vermeire and Van de Heyning, 2009]; [Buechner et al, 2010]; [Arndt et al, 2011]; [Jacob et al, 2011]; [Firszt et al, 2012]; [Távora-Vieira et al, 2015]; [Friedmann et al, 2016]; [Arndt et al, 2017]). Even though cochlear implantation yields several improvements in SSD patients, there are still multiple listening situations in which their speech recognition is limited, especially during conferences, in classrooms, and in reverberating rooms ([Giolas and Wark, 1967]; [Lieu, 2004]; [Wie et al, 2010]).

Remote wireless microphone systems were developed to improve speech recognition in the challenging listening situations mentioned previously. With these systems, the physical distance between the speaker and listener is overcome by wireless audio signal transmission. These systems consist of a microphone–transmitter placed near the mouth of the speaker and a receiver connected to a hearing aid (HA) or CI. Conventional remote wireless microphone systems use analog radio frequency transmission. These are fixed-gain or adaptive-gain (dynamic) frequency modulation (FM) systems. Advanced remote wireless microphone systems use a 2.4 GHz radio transmission band. In 2013, Phonak introduced Roger, an advanced remote wireless microphone system ([Phonak, 2013]). Roger automatically adjusts the receiver volume according to ambient noise for better speech recognition in noise than is achieved with dynamic FM systems, especially at higher competing noise levels ([Mülder and Smaka, 2013]; [Thibodeau, 2014]).

Remote wireless technology was shown to improve speech recognition in previous studies. [Schafer and Thibodeau (2006)] revealed that FM systems significantly improve speech recognition in competing noise in children with bilateral severe-to-profound hearing loss using two CIs or a CI and an HA. Significant benefits of remote wireless microphone systems were also shown in adult participants with bilateral hearing loss using CI(s) or HA(s). [Wolfe et al (2013)] showed that Roger significantly improved speech recognition in noise in bilateral CI and bimodal recipients at higher noise levels (70, 75, and 80 dB[A]). In this study, Roger also outperformed fixed-gain and adaptive-gain analog FM systems. In quiet, no significant difference in speech recognition was found between no application and the use of different remote wireless microphone systems. [Thibodeau (2014)] confirmed the benefit of Roger for speech recognition in noise in adults using HAs bilaterally. Speech recognition was significantly better with Roger compared with (fixed- and adaptive-gain) FM technology. In addition, [Thibodeau (2014)] included a normal hearing (NH) control group. This group was only assessed unaided, i.e., without using remote wireless technology.

Other remote wireless microphone systems, the Mini Microphones 1 and 2+ by Cochlear Limited (Sydney, Australia), were shown to significantly improve speech recognition in noise, too. Unilateral, bimodal, and bilateral CI recipients obtained a benefit in speech recognition when using either of the two Mini Microphones, and a better speech performance with the Mini Microphone 2+ compared with the Mini Microphone 1 ([De Ceulaer et al, 2017]). [Vroegop et al (2017)] compared speech perception in bimodal adult CI recipients for different applications of the Mini Microphone 2+. Bimodal use of the Mini Microphone 2+ yielded a significant improvement compared with unilateral use with the CI only.



Study Objective

SSD CI recipients and bilaterally hearing-impaired listeners with unilateral or bilateral CIs or a bimodal CI-HA fitting have difficulties in speech recognition for distant speakers in noise. For bilaterally hearing-impaired CI recipients, the benefit of remote wireless technology, and of Roger in particular, has been shown in previous work ([Wolfe et al, 2013]; [Thibodeau, 2014]). To our knowledge, there are no studies on the application of any remote wireless microphone system in SSD CI recipients. The aim of our study is to determine whether SSD CI recipients benefit from the application of Roger for speech recognition of distant speakers in multi-source background noise. Our hypothesis is that the use of Roger in SSD CI recipients, whether on the NH side, the CI side, or bilaterally (i.e., in each unilateral and bilateral Roger condition), provides a benefit in speech recognition of distant speakers in noise compared with not using Roger.

NH participants also show difficulties in speech recognition of distant speakers in noise as seen in the large performance drop at noise levels higher than 60 dB(A) reported by [Thibodeau (2014)]. To our knowledge, there have been no studies on the application of remote wireless technology in NH participants so far. Consequently, an NH group was included in our study to investigate speech recognition of a distant speaker in background noise for different Roger applications and without Roger. It is hypothesized that NH participants also benefit from a unilateral (on either ear) and bilateral application of Roger.



MATERIALS AND METHODS

This study was conducted in accordance with the guidelines of the Declaration of Helsinki (World Medical Association, 2013) and was approved by the ethics committee of the University of Freiburg; all participants signed informed consent forms.

Participants

Two groups were included in this study: participants with acquired SSD who had been implanted with a CI (SSD CI recipients) and an NH control group (NH participants). All participants were required to be aged ≥18 years and to speak German as their native language.

For the SSD CI group, the following additional inclusion criteria were applied:

  • Nearly NH in the better hearing ear, defined as air conduction pure-tone thresholds from 125 Hz to 4 kHz of 30 dB HL or less, corresponding to the SSD definition of [Vincent et al (2015)]. In the following text, the nearly NH ear of the SSD CI recipients is referred to as the NH ear.

  • Unilateral CI from Cochlear Limited.

  • CI speech processor: Freedom SP, CP810, or CP910.

  • Listening experience with the CI of at least three months.

  • Freiburg monosyllabic word recognition at 65 dB SPL of at least 50% with the CI assessed for presentation of speech in free field with the contralateral NH ear masked by speech-masking noise of 70 dB SPL.

NH participants needed to show air conduction pure-tone thresholds of 20 dB HL or less for all frequencies with each ear. The ear with the smaller four-frequency (0.5, 1, 2, and 4 kHz) pure-tone average was considered as the better ear.
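This better-ear rule is simple arithmetic; the following is a minimal sketch in R (our illustration, not the authors' code; the thresholds and names are made up for the example):

```r
## Four-frequency pure-tone average (0.5, 1, 2, and 4 kHz) per ear; the ear
## with the smaller PTA4 is taken as the better ear.
pta4 <- function(thresholds_dB_HL) mean(thresholds_dB_HL)

left_ear  <- c(`500` = 5, `1000` = 0, `2000` = 5, `4000` = 10)  # example dB HL values
right_ear <- c(`500` = 0, `1000` = 5, `2000` = 0, `4000` = 5)

better_ear <- if (pta4(right_ear) < pta4(left_ear)) "right" else "left"
better_ear  # "right" for these example thresholds
```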

Eleven adult SSD CI recipients (all using implants from Cochlear Limited) and eleven adult NH participants were included in this study. The SSD CI recipients were 46.1 ± 14.3 years old, and the NH participants were aged 25.1 ± 5.5 years. All recipients had used their CI for at least 12 months. [Tables 1] and [2] display the information on the CI recipients and NH participants, respectively.

Table 1

SSD CI Recipients’ Characteristics

| Recipient | Age at Testing (Years) | Gender | CI Side | Implant | Speech Processor | Duration of CI Use (Months) | Etiology of Unilateral Severe-to-Profound Hearing Loss | Duration of Unilateral Severe-to-Profound Hearing Loss (Months) | AC PTA4, Normal-Hearing Ear (dB HL) | AC PTA4, CI Ear (dB HL) | Monosyllabic Word Recognition at 65 dB SPL with CI (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CI1 | 24.5 | F | R | CI422 | CP910 | 23 | Sudden hearing loss | 14 | 8.75 | 112.5 | 70 |
| CI2 | 37.4 | M | R | CI24RE (CA) | Freedom SP | 71 | Cholesteatoma | 3 | 15 | 116.25 | 100 |
| CI3 | 36.8 | F | L | CI24RE (CA) | CP810 | 33 | Acoustic neuroma | 9 | 5 | 130 | 80 |
| CI4 | 59.5 | F | R | CI512 | CP810 | 62 | Stapes surgery | 48 | 17.5 | 130 | 95 |
| CI5 | 31.6 | M | R | CI24RE (CA) | CP910 | 16 | Cholesteatoma | 120 | 5.5 | 130 | 60 |
| CI6 | 63.9 | F | R | CI422 | CP810 | 39 | Sudden hearing loss | 48 | 9 | 130 | 60 |
| CI7 | 61.0 | M | L | CI24RE (CA) | Freedom SP | 73 | Sudden hearing loss | 9 | 11.75 | 130 | 60 |
| CI8 | 63.7 | F | R | CI24RE (CA) | CP910 | 20 | M. Menière | 9 | 16 | 130 | 60 |
| CI9 | 49.7 | M | R | CI24RE (CA) | CP910 | 26 | Perilymph fistula | 4 | 13.25 | 130 | 100 |
| CI10 | 33.2 | F | L | CI512 | CP810 | 61 | Perilymph fistula | 36 | 6.25 | 130 | 80 |
| CI11 | 45.3 | F | L | CI24RE (CA) | CP910 | 33 | Perilymph fistula | 14 | 14 | 130 | 60 |
| Median | 45.3 | | | | | 33 | | 14 | 11.75 | 130 | 70 |
| Mean ± SD | 46.1 ± 14.3 | | | | | 41.5 ± 20.2 | | 28.5 ± 33.0 | 11.1 ± 4.4 | 127.2 ± 4.4 | 75.0 ± 16.1 |

Notes: AC = air conduction, F = female, M = male, R = right, L = left, SD = standard deviation, PTA4 = four-frequency pure-tone average.


Table 2

NH Participants’ Characteristics

| Subject | Age at Testing (Years) | Gender | Better Ear Side (NHbe) | AC PTA4, Better Ear (dB HL) | AC PTA4, Weaker Ear (dB HL) |
|---|---|---|---|---|---|
| NH1 | 38.3 | M | Left | 0.50 | 1.75 |
| NH2 | 30.3 | F | Right | 5.00 | 7.00 |
| NH3 | 21.7 | M | Right | 2.00 | 5.75 |
| NH4 | 21.9 | F | Left | 2.75 | 3.00 |
| NH5 | 26.6 | M | Left | 3.00 | 7.75 |
| NH6 | 21.5 | M | Right | 2.00 | 3.75 |
| NH7 | 23.9 | M | Right | 4.25 | 4.75 |
| NH8 | 18.1 | F | Right | 3.00 | 4.75 |
| NH9 | 22.5 | F | Right | 2.50 | 7.00 |
| NH10 | 27.1 | M | Left | 3.25 | 3.50 |
| NH11 | 27.3 | F | Right | 5.00 | 6.25 |
| Median | 23.9 | | | 3.00 | 4.75 |
| Mean ± SD | 25.4 ± 5.3 | | | 3.02 ± 1.29 | 5.02 ± 1.81 |

Notes: AC = air conduction, PTA4 = four-frequency pure-tone average, F = female, M = male, SD = standard deviation.




CI and Roger Adjustment

Before testing, every SSD CI recipient was provided with a loaner speech processor CP910 to be applied during testing. For all recipients, the individual favorite everyday-program settings (clinical settings) were transferred from their own processor to program 1 of the loaner CP910. Program 1 of the loaner processor was altered according to the Roger for CP910 fitting guide ([Phonak, 2014]) addressing the adjustment of the accessory mixing ratio, the sound processing algorithms, and the microphone sensitivity (research settings). During testing, the research settings were applied.

With their clinical settings, all recipients used a microphone sensitivity of 12, whereas the enabled sound processing algorithms and the volume setting differed across recipients. The algorithms were: none in CI1, CI4, and CI11; Adaptive Dynamic Range Optimization (ADRO) in CI7; Autosensitivity control (ASC) and ADRO in CI2, CI3, CI6, and CI10; ASC, ADRO, Background Noise Reduction, and Wind Noise Reduction in CI5 and CI9; and Whisper, ADRO, Background Noise Reduction, and Wind Noise Reduction in CI8. The volume setting was 5 in CI5; 6 in CI1–CI4, CI6, CI7, CI10, and CI11; and 7 in CI8 and CI9.

According to the research settings, the accessory mixing ratio was set to the default value of 1:1 in program 1 of the loaner CP910 for all recipients. This mixing ratio controls the emphasis between the input from the speech processor microphones and the input from connected audio accessories. With a mixing ratio of 1:1, both inputs have equal weight. For larger mixing ratios, e.g., 3:1, the microphone input is attenuated to a certain amount, e.g., to a third in the case of 3:1, reducing the audibility of sounds directly reaching the speech processor via the microphones and thus providing audio accessory precedence. Furthermore, the sound processing algorithms ASC and ADRO were enabled, and the microphone sensitivity was set to 12, both in line with the Roger for CP910 fitting guide. Conforming to our clinical practice, the T- and C-levels were refitted based on subjective feedback. Major modifications (>±16 CL for at least one T- or C-level) between the clinical and research settings were made in CI8, CI9, and CI11, and minor changes (<±8 CL for all T- and C-levels) in CI1, CI2, and CI4-CI6, whereas there were no T- or C-level modifications in CI3, CI7, and CI10. Following these loaner speech processor adjustments, the SSD CI recipients used this processor for approximately one hour to allow for acclimatization. During this period, they talked to the investigator and to other CI recipients and their accompanying persons in the examination room and dining room of our center. Participants were allowed to adjust the volume during the acclimatization phase. The volume setting at the end of this phase (5 in participant CI1; 6 in CI2-CI7 and CI10; 7 in CI8 and CI9; and 9 in CI11) was used during testing.
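The accessory mixing ratio described above can be read as a relative weighting of the two inputs. The following is a minimal illustration in R of that reading (our interpretation of the fitting-guide description, not a Cochlear specification):

```r
## Relative factor applied to the processor-microphone input, expressed
## relative to the accessory input, for a given accessory-to-microphone
## mixing ratio (illustration only).
mic_factor <- function(accessory, microphone = 1) microphone / accessory

mic_factor(1)  # 1.00 -> microphone and accessory inputs weighted equally (1:1)
mic_factor(3)  # 0.33 -> microphone input reduced to a third (3:1)
```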

The Roger system was used as advanced wireless technology in both groups. This system consists of a wireless microphone (e.g., Roger Pen, Roger Touchscreen Mic, or Roger Table Mic) and one or several different receivers compatible with most recent CI processors and HAs ([Phonak, 2013]). In our study, the Roger Pen was used as the wireless microphone, and the receivers Roger 14 with the CP910 and Roger Focus with the NH ear(s) were applied, both with a gain of 0 dB. The Roger Pen was set to handheld mode (lanyard mode). In this mode, an adaptive beamformer yielding a directional microphone characteristic is applied (Bernadette Fulton [Phonak Communications AG], personal communication, 2018).



Stimuli and Equipment

Speech recognition in noise was assessed in a meeting room (8.12 m × 6.11 m) with an ambient noise level of approximately 30 dB(A). For each test condition, one randomly selected list of the Oldenburg sentence test (OLSA; [Wagener et al, 1999a],[b]) with 30 sentences was administered, and speech recognition was measured in percent correct. As competing noise, the classroom noise established and applied in the study of [Schafer and Thibodeau (2006)] was used. This noise was a digitally edited first-, second-, third- and fourth-grade school classroom noise, which matches the long-term average spectrum of the speech material used in their study (Hearing in Noise Test).

[Figure 1] shows the room dimensions and experimental setup which are comparable with the settings used by [Wolfe et al (2013)] and [Thibodeau (2014)]. A Dell Optiplex 790 PC (Dell Inc., Round Rock, TX) with a Fireface UC soundcard (Audio AG, Haimhausen, Germany) was used to deliver the speech stimuli and competing noise. The OLSA sentences were presented by a Fostex 6301BX single-cone loudspeaker with a built-in amplifier (loudspeaker 5; Foster Electric Co., Ltd., Tokyo, Japan). The participants were seated 5.5 meters (m) from the front of loudspeaker 5. The speech level was 65 dB(A) at a distance of 1 m from the front of this speaker. At the participant’s head, the speech level was 56.5 dB(A), i.e., 8.5 dB lower than at the shorter distance of 1 m.

Figure 1 Room dimensions and equipment arrangement used for the assessment of speech recognition in competing noise for a distant speaker. Loudspeakers 1–4 were used for presentation of uncorrelated classroom noise and loudspeaker 5 for speech presentation. The Roger Pen is placed 20 cm away from the front edge of loudspeaker 5 and 5.3 m away from the middle of the participant’s head. The loudspeakers 1–4 were placed at 32.2° to present the noise toward the middle point of the experimental setup.

The competing noise was presented in an uncorrelated fashion from four Genelec 8030B loudspeakers (1–4) (Genelec Oy, Iisalmi, Finland) located close to the four corners of the room (experimental setup: 7.3 m × 4.6 m). These speakers were positioned to face the middle point of the experimental setting, resulting in an angle of 32.2° azimuth ([Figure 1]). Noise levels investigated were 55, 65, and 75 dB(A), set to be the same at the location of the participant’s head and at the position of the Roger Pen resulting in signal-to-noise ratios of 1.5, −8.5, and −18.5 dB at the participant’s head, respectively. All sound levels were measured with an Acoustilyzer AL1 (NTi Audio AG, Switzerland) sound level meter. The Roger Pen was horizontally positioned at a distance of 20 cm in front of loudspeaker 5 at a height of 1.15 m, mimicking the vertical position of a Roger Pen worn by a speaker around the neck.
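The signal-to-noise ratios at the participant's head follow directly from the fixed speech level and the three noise levels. A minimal check in R using the values stated above:

```r
## Speech fixed at 56.5 dB(A) at the head, noise at 55, 65, and 75 dB(A).
speech_level_dBA <- 56.5
noise_levels_dBA <- c(55, 65, 75)
snr_dB <- speech_level_dBA - noise_levels_dBA
snr_dB  #  1.5  -8.5  -18.5 dB, as reported in the text
```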



Test Conditions and Procedure

For both groups, SSD CI recipients and NH participants, speech recognition in competing noise was measured in five listening conditions, two no-Roger (I and II) and three Roger conditions (III, IV, and V), for each of the three noise levels, 55, 65, and 75 dB(A), i.e., in 15 test conditions ([Table 3]). The sequence of the test conditions was randomized across participants.

Table 3

Listening Conditions Assessed in Both Groups

| Listening Condition | SSD CI Recipients | NH Participants |
|---|---|---|
| No-Roger conditions | | |
| I | NH ear, CI turned off (NH-only) | Better ear, weaker ear masked (NHbe-only) |
| II | NH ear, CI turned on (NH+CI) | Better ear, weaker ear (NHbe+NHwe) |
| Roger conditions | | |
| III | NH ear, CI with Roger 14 (NH+CI/Rog14) | Better ear, weaker ear with Roger Focus (NHbe+NHwe/RogF) |
| IV | NH ear with Roger Focus, CI (NH/RogF+CI) | Better ear with Roger Focus, weaker ear (NHbe/RogF+NHwe) |
| V | NH ear with Roger Focus, CI with Roger 14 (NH/RogF+CI/Rog14) | Both ears with Roger Focus (NHbe/RogF+NHwe/RogF) |

Notes: In the NH participants, the ear with the smaller four-frequency pure-tone average was considered the better ear.
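Crossing the five listening conditions with the three noise levels yields the 15 test conditions per participant. A minimal sketch in R of generating and randomizing this grid (our illustration, not the authors' code):

```r
## 5 listening conditions x 3 noise levels = 15 test conditions, presented in
## a randomized order for each participant.
conditions <- expand.grid(
  listening = c("I", "II", "III", "IV", "V"),
  noise_dBA = c(55, 65, 75)
)
nrow(conditions)                         # 15 test conditions

set.seed(1)                              # arbitrary seed, for the example only
test_order <- conditions[sample(nrow(conditions)), ]
```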


Before testing, training in speech recognition in competing classroom noise was conducted in both groups. During training, speech recognition was assessed for one list of 20 OLSA sentences presented at 56.5 dB(A) in noise at levels of 55 and 75 dB(A) in the no-Roger listening condition NH+CI (SSD CI group) or NHbe+NHwe (better NH ear [NHbe] and weaker NH ear [NHwe]; NH group). The stimuli and equipment used during training were identical to those used during testing. Before each training run and test, the participants were instructed to repeat the words of the OLSA sentences presented. For communication between the participant and the investigator, another Roger Pen was placed with a lanyard around the neck of the participant and connected to a Roger MyLink with attached earphones used by the investigator.



Data Analysis

The statistical analysis was carried out in GNU R ([R Core Team, 2014]). For each group (SSD CI recipients and NH participants), a separate two-way repeated-measures analysis of variance was conducted with two within-subject factors: listening condition (I, II, III, IV, and V) and noise level (55, 65, and 75 dB[A]). To examine significant main and interaction effects, post hoc analyses were conducted with pairwise comparison t-tests with pooled standard deviation and Bonferroni correction. A significance level of 0.05 was applied.
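A minimal sketch in R of this per-group analysis, assuming a long-format data frame named scores with columns subject, condition, noise, and recognition (these names are ours, not the authors'):

```r
## Two-way repeated-measures ANOVA with the within-subject factors
## listening condition and noise level (percent-correct scores per cell).
scores$subject   <- factor(scores$subject)
scores$condition <- factor(scores$condition)
scores$noise     <- factor(scores$noise)

fit <- aov(recognition ~ condition * noise +
             Error(subject / (condition * noise)),
           data = scores)
summary(fit)

## Post hoc pairwise comparison t-tests with pooled SD and Bonferroni
## correction, here over the condition-by-noise cells (interaction effect);
## the same call with scores$condition or scores$noise alone covers the
## main effects.
pairwise.t.test(scores$recognition,
                interaction(scores$condition, scores$noise),
                p.adjust.method = "bonferroni",
                pool.sd = TRUE)
```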



RESULTS

SSD CI Recipients

[Figure 2] displays the box-and-whisker plots of speech recognition in noise scores of the SSD CI recipients at three noise levels for each of five listening conditions. The means and standard deviations of these scores are specified in [Table 4]. For speech recognition in competing noise, there was a significant main effect of the listening condition [F (1,4) = 428.6, p < 0.001] and a significant main effect of the noise level [F (1,2) = 381.7, p < 0.001]. In addition, a significant interaction effect was found between the listening condition and noise level [F (1,8) = 95.1, p < 0.001].

Figure 2 Box-and-whisker plots of speech recognition of 11 SSD CI recipients attained for OLSA sentences at 56.5 dB(A) at three noise levels of competing classroom noise for each of five listening conditions.
Table 4

Means and Standard Deviations of Speech Recognition in Noise Scores at the Three Noise Levels for Each of Five Listening Conditions

| Noise Level | Listening Condition | SSD CI Recipients: Mean ± SD (%) | NH Participants: Mean ± SD (%) |
|---|---|---|---|
| 55 dB(A) | I | 95.05 ± 5.35 | 98.00 ± 2.96 |
| | II | 94.73 ± 3.12 | 99.34 ± 1.23 |
| | III | 98.60 ± 1.70 | 99.82 ± 0.43 |
| | IV | 99.21 ± 1.56 | 99.69 ± 0.46 |
| | V | 99.20 ± 1.76 | 99.39 ± 1.08 |
| 65 dB(A) | I | 22.80 ± 15.61 | 41.09 ± 21.71 |
| | II | 25.15 ± 15.67 | 48.12 ± 17.35 |
| | III | 93.74 ± 9.81 | 99.27 ± 1.08 |
| | IV | 99.09 ± 0.91 | 99.94 ± 0.21 |
| | V | 99.63 ± 0.82 | 99.51 ± 1.20 |
| 75 dB(A) | I | 0.00 ± 0.00 | 0.92 ± 1.57 |
| | II | 0.18 ± 0.43 | 1.58 ± 1.99 |
| | III | 81.89 ± 12.29 | 88.24 ± 11.23 |
| | IV | 88.72 ± 7.59 | 91.71 ± 5.84 |
| | V | 96.85 ± 3.71 | 94.00 ± 6.08 |

Notes: Listening conditions in SSD CI recipients: I: only NH ear, CI turned off; II: NH ear and CI; III: NH ear and CI with Roger 14; IV: NH ear with Roger Focus and CI; V: NH ear with Roger Focus and CI with Roger 14. Listening conditions in NH participants: I: only better ear and weaker ear masked; II: both ears; III: better ear and weaker ear with Roger Focus; IV: better ear with Roger Focus and weaker ear; V: both ears with Roger Focus.


Post hoc analyses were conducted for both the main effects and the interaction effect. Significant differences were found between each of the no-Roger conditions (NH-only and NH+CI) and each of the Roger conditions (NH+CI/Rog14, NH/RogF+CI, and NH/RogF+CI/Rog14) across noise levels. All pairwise comparisons between Roger conditions and between no-Roger conditions revealed no significant difference.

Speech recognition at the noise level of 55 dB(A) was significantly better than speech recognition at noise levels of 65 dB(A) (p < 0.001) and 75 dB(A) (p < 0.001), whereas the performance at 65 dB(A) was not significantly different from that at 75 dB(A) (p > 0.05) across listening conditions.

In all Roger conditions, speech recognition in noise was significantly better than in all no-Roger conditions at noise levels of 65 and 75 dB(A) (p < 0.001 and p < 0.01). There was no significant difference between any of the Roger and no-Roger conditions at 55 dB(A). Speech recognition in noise showed a ceiling effect for all listening conditions at the lowest noise level (55 dB[A]) and for all Roger conditions at the higher noise levels (65 and 75 dB[A]). Further details are listed in [Table 5]. [Figure 3] summarizes the benefits in speech recognition in noise obtained at the three noise levels as the differences in speech recognition between each of the Roger conditions III–V and the no-Roger condition II and between the Roger conditions IV and III.

Table 5

Results of the Pairwise Comparisons of the Interaction Effect Between Listening Condition and Noise Level in the SSD CI Recipients

| | 55 I | 55 II | 55 III | 55 IV | 55 V | 65 I | 65 II | 65 III | 65 IV | 65 V | 75 I | 75 II | 75 III | 75 IV | 75 V |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 55 I | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 II | n.s. | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 III | n.s. | n.s. | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 IV | n.s. | n.s. | n.s. | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 V | n.s. | n.s. | n.s. | n.s. | - | - | - | - | - | - | - | - | - | - | - |
| 65 I | *** | *** | *** | *** | *** | - | - | - | - | - | - | - | - | - | - |
| 65 II | *** | *** | *** | *** | *** | n.s. | - | - | - | - | - | - | - | - | - |
| 65 III | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | - | - | - | - | - | - | - | - |
| 65 IV | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | n.s. | - | - | - | - | - | - | - |
| 65 V | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | n.s. | n.s. | - | - | - | - | - | - |
| 75 I | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** | - | - | - | - | - |
| 75 II | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** | n.s. | - | - | - | - |
| 75 III | ** | ** | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** | - | - | - |
| 75 IV | n.s. | n.s. | * | * | * | *** | *** | n.s. | * | * | *** | *** | n.s. | - | - |
| 75 V | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | n.s. | n.s. | n.s. | *** | *** | * | n.s. | - |

Notes: n.s. = not significant. *p < 0.05; **p < 0.01; ***p < 0.001. 55: 55 dB(A); 65: 65 dB(A); 75: 75 dB(A). I: only NH ear, CI turned off; II: NH ear and CI; III: NH ear and CI with Roger 14; IV: NH ear with Roger Focus and CI; V: NH ear with Roger Focus and CI with Roger 14.


Figure 3 Box-and-whisker plots of speech recognition benefit of 11 SSD CI recipients at three noise levels for each of four listening condition comparisons.


NH Participants

For NH participants, the box-and-whisker plots of speech recognition in noise scores at three noise levels for each of five listening conditions are shown in [Figure 4]. The means and standard deviations of these scores are displayed in [Table 4]. In NH participants, a significant main effect of the listening condition [F (1,4) = 361.9, p < 0.001] and a significant main effect of the noise level [F (1,2) = 408.9, p < 0.001] were found. In addition, a significant interaction effect between listening condition and noise level [F (1,8) = 102.9, p < 0.001] was detected.

Figure 4 Box-and-whisker plots of speech recognition of 11 NH participants attained for OLSA sentences at 56.5 dB(A) at three noise levels of competing classroom noise for each of five listening conditions.

Post hoc analyses were conducted for the main effects and the interaction effect. Similar to the SSD CI recipients, there was a significant difference between each of the no-Roger conditions (NHbe-only and NHbe+NHwe) and each of the Roger conditions (NHbe+NHwe/RogF, NHbe/RogF+NHwe, and NHbe/RogF+NHwe/RogF) across noise levels. As in the SSD CI recipients, there was no significant difference for any pairwise comparison between Roger conditions and between no-Roger conditions.

At the noise level of 55 dB(A), speech recognition was significantly better than at 65 and 75 dB(A) (p < 0.001 and p < 0.01) across listening conditions. In addition, NH participants showed significantly better speech recognition at 65 dB(A) compared with 75 dB(A) (p < 0.001).

As in the SSD CI recipients, speech recognition in noise was significantly better for all Roger conditions than for all no-Roger conditions at noise levels of 65 and 75 dB(A) (p < 0.001 and p < 0.01). There were no significant differences between any of the Roger and no-Roger conditions at 55 dB(A). Similar to the SSD CI recipients, speech recognition in noise showed a ceiling effect for all listening conditions at the lowest noise level (55 dB[A]) and for all Roger conditions at the higher noise levels (65 and 75 dB[A]). Further details are displayed in [Table 6]. The benefits in speech recognition in noise obtained at the three noise levels as the differences in speech recognition between each of the Roger conditions III–V and the no-Roger condition II are shown in [Figure 5].

Table 6

Results of the Pairwise Comparisons of the Interaction Effect Between Listening Condition and Noise Level in the NH Participants

| | 55 I | 55 II | 55 III | 55 IV | 55 V | 65 I | 65 II | 65 III | 65 IV | 65 V | 75 I | 75 II | 75 III | 75 IV | 75 V |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 55 I | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 II | n.s. | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 III | n.s. | n.s. | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 IV | n.s. | n.s. | n.s. | - | - | - | - | - | - | - | - | - | - | - | - |
| 55 V | n.s. | n.s. | n.s. | n.s. | - | - | - | - | - | - | - | - | - | - | - |
| 65 I | *** | *** | *** | *** | *** | - | - | - | - | - | - | - | - | - | - |
| 65 II | *** | *** | *** | *** | *** | n.s. | - | - | - | - | - | - | - | - | - |
| 65 III | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | - | - | - | - | - | - | - | - |
| 65 IV | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | n.s. | - | - | - | - | - | - | - |
| 65 V | n.s. | n.s. | n.s. | n.s. | n.s. | *** | *** | n.s. | n.s. | - | - | - | - | - | - |
| 75 I | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** | - | - | - | - | - |
| 75 II | *** | *** | *** | *** | *** | *** | *** | *** | *** | *** | n.s. | - | - | - | - |
| 75 III | ** | ** | ** | ** | ** | *** | *** | ** | ** | ** | *** | *** | - | - | - |
| 75 IV | n.s. | n.s. | n.s. | *** | *** | *** | *** | n.s. | n.s. | n.s. | *** | *** | n.s. | - | - |
| 75 V | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | n.s. | - |

Notes: n.s. = not significant. *p < 0.05; **p < 0.01; ***p < 0.001. 55: 55 dB(A); 65: 65 dB(A); 75: 75 dB(A). I: only better ear and weaker ear masked; II: both ears; III: better ear and weaker ear with Roger Focus; IV: better ear with Roger Focus and weaker ear; V: both ears with Roger Focus.


Figure 5 Box-and-whisker plots of speech recognition benefit of 11 NH participants at three noise levels for each of three different comparisons of listening conditions.


DISCUSSION

This study demonstrates the overall benefit of the remote wireless microphone system, Roger, for the first time in SSD CI recipients.

In adult SSD participants, speech perception in competing noise ([Vermeire and Van de Heyning, 2009]) and localization of sound sources ([Arndt et al, 2011]) improve after cochlear implantation. One of the greatest difficulties SSD CI recipients face is listening to a distant speaker in multi-source competing background noise. We simulated this condition in our study and found that even when the recipients’ CI was turned on, they demonstrated limited CI benefit in such listening environments. However, our analysis revealed that SSD CI recipients show a significant improvement in speech recognition for a distant speaker in multi-source noise at higher noise levels with the addition of Roger. These results thus demonstrate a clear benefit of the application of Roger for speech perception in background noise in these recipients.

Our data are in good agreement with previous studies showing a significant benefit of advanced remote wireless microphone systems for speech recognition of distant speakers in noise in unilateral and bilateral CI recipients with bilateral moderate-to-severe hearing impairment ([Wolfe et al, 2013]; [Thibodeau, 2014]; [De Ceulaer et al, 2017]). Furthermore, we were able to demonstrate for the first time a significant benefit of a remote wireless microphone system in SSD CI recipients. This result extends previous findings of the beneficial effect of remote wireless technology on speech recognition in noise to SSD CI recipients and is clinically relevant, as it encourages the application of Roger in these recipients. Based on the results of our study, remote wireless microphone technology should be recommended to, and trialed in routine clinical practice by, SSD CI recipients who report difficulties in challenging listening situations. In the case of a positive evaluation, SSD CI recipients should be provided with such a system.

Comparing different applications of Roger in SSD CI recipients, no significant difference between unilateral (either NH ear or CI) and bilateral use was found. Therefore, no recommendation for unilateral or bilateral Roger application can be made on the basis of our results. The lack of a significant difference between Roger conditions could be partly due to the saturation of speech recognition observed in all Roger conditions across all noise levels. In addition, the speech performance and performance benefits obtained by the SSD CI recipients with the Roger receiver attached only to the CI might depend on their speech recognition with the CI. Our study did not address this aspect, as only SSD CI recipients with good CI performance were included. Therefore, research including SSD CI recipients with poorer speech recognition would be interesting. Presumably, poorer CI performers will obtain better speech recognition with a Roger Focus on the NH ear than with a Roger 14 on the CI. Moreover, beyond audiometry-based performance and benefit assessment, subjective preferences and other outcome measures, such as subjective and objective listening effort, should be assessed before a definite recommendation can be given.

The potential benefits of the application of Roger in NH participants were also examined in our study. [Thibodeau (2014)] showed that NH participants without any remote wireless technology attained poorer (unaided) speech performance in background noise at high levels than CI recipients aided with an FM system or Roger. By contrast, our study addressed speech recognition for a distant speaker in multi-source noise in NH participants with unilateral and bilateral application of Roger and without Roger. Similar to the results obtained in the SSD CI recipients, unilateral (either ear) and bilateral application of Roger technology was significantly beneficial for NH participants at higher noise levels. On the basis of our data, ENT physicians and audiologists should also consider recommending and trialing remote wireless microphone systems for NH listeners in difficult acoustic situations. Especially for NH listeners with subjectively perceived impairment of speech recognition in everyday noisy listening situations, the application of remote wireless systems could be a beneficial option.

In our study, only SSD CI recipients provided with CIs from Cochlear Limited were assessed using a loaner CP910 during testing. In addition, only one type of remote microphone system, Roger, was tested. These criteria were chosen to minimize potentially confounding factors. Unlike e.g., [Wolfe et al (2013)] and [Thibodeau (2014)], we deliberately excluded the noise level of 80 dB(A). The maximum noise level assessed was limited to 75 dB(A) to protect the participants from additional noise exposure.

Besides background noise, speakers at greater distances represent another challenge in daily-life listening situations for SSD CI recipients and NH listeners alike. With greater distance between speaker and listener, the benefit of the application of remote wireless technology seems to increase ([De Ceulaer et al, 2017]). Our study, [Wolfe et al (2013)], and [Thibodeau (2014)] examined speech recognition in multi-source competing background noise with speaker-to-participant distances of 5.2–5.5 m and showed significant improvements in speech recognition. By contrast, [De Ceulaer et al (2017)] chose a distance between the speaker and participant of at most 3 m. These differing speaker-to-participant distances hamper direct comparisons between studies investigating remote wireless systems, and thus also the comparison between Roger and the Mini Microphones 1 and 2+ in the studies described previously.

In addition to the benefits of Roger in SSD CI recipients and NH participants confirmed in our study, further questions remain to be investigated in future research. To begin with, there is no study directly comparing speech-in-noise performance between NH listeners and SSD CI recipients; this should be addressed in future work. We did not perform this comparison in our study because the two groups, SSD CI recipients and NH participants, differed vastly in their characteristics, e.g., age and pure-tone thresholds of the better hearing ear, and a comparison of participants unmatched for age and hearing ability would be of limited scientific value. Furthermore, our study focused on adult SSD CI recipients and did not investigate the potential benefit of remote wireless microphone systems in SSD CI children. This is of great clinical interest, as children are confronted with challenging listening situations, e.g., classroom-like environments, on a daily basis.

In our study, we used a mixing ratio of 1:1 according to the Roger for CP910 fitting guide ([Phonak, 2014]). [Hey et al (2009)] proposed a mixing ratio of 1:1–3:1 in a listening situation with two potentially interfering speakers (teacher and fellow student in a discussion) and a mixing ratio greater than 3:1 in a listening situation with one main speaker (classical lecture format). Additional research on different mixing ratios with advanced remote wireless technology in SSD CI recipients could provide further insight into which mixing ratio to apply in which listening situation.



CONCLUSIONS

  • The results of our study show that the use of a digital adaptive remote microphone system (Roger) provides significant benefits in speech recognition for distant speakers in multi-source competing background noise at higher levels for SSD CI recipients.

  • A significant benefit of the advanced remote wireless microphone system, Roger, was also shown for NH participants.

  • In both groups, there was no significant difference between the application of Roger on the better ear (NH ear), the weaker ear (NH ear or CI), or both ears.



Abbreviations

ADRO: Adaptive Dynamic Range Optimization
ASC: Autosensitivity control
CI: cochlear implant
FM: frequency modulation
HA: hearing aid
NH: normal hearing
NHbe: better normal hearing ear
NHwe: weaker normal hearing ear
OLSA: Oldenburg sentence test
SSD: single-sided deafness



No conflict of interest has been declared by the author(s).

Acknowledgments

The authors thank the association “Taube Kinder lernen Hören e.V.” for its considerable support of the cochlear implant rehabilitation center in Freiburg. In addition, we want to thank D. Hilgert-Becker (BECKER Hörakustik, Koblenz, Germany) for the idea of this study and J. Eysell and D. Babbel for the language revision of the manuscript.

The study was supported by Phonak Communications AG (Murten, Switzerland). The funds were used for the equipment, remuneration of two research students (F. Schmid and A. Huber) and the reimbursement of participants’ travel costs. The other authors of this manuscript did not receive any monetary compensation for this study.


Parts of the paper were presented orally at the 13th EFAS Congress, Interlaken, Switzerland, June 7–10, 2017.


  • REFERENCES

  • Arndt S, Aschendorff A, Laszig R, Beck R, Schild C, Kroeger S, Ihorst G, Wesarg T. 2011; Comparison of pseudobinaural hearing to real binaural hearing rehabilitation after cochlear implantation in patients with unilateral deafness and tinnitus. Otol Neurotol 32 (01) 39-47
  • Arndt S, Laszig R, Aschendorff A, Hassepass F, Beck R, Wesarg T. 2017; Cochlear implant treatment of patients with single-sided deafness or asymmetric hearing loss. HNO 65 (07) 586-598
  • Buechner A, Brendel M, Lesinski-Schiedat A, Wenzel G, Frohne-Buechner C, Jaeger B, Lenarz T. 2010; Cochlear implantation in unilateral deaf subjects associated with ipsilateral tinnitus. Otol Neurotol 31 (09) 1381-1385
  • De Ceulaer G, Pascoal D, Vanpoucke F, Govaerts P. 2017; The use of cochlear’s SCAN and wireless microphones to improve speech understanding in noise with the Nucleus 6® CP900 processor. Int J Audiol 56 (11) 837-843
  • Firszt JB, Holden LK, Reeder RM, Waltzman SB, Arndt S. 2012; Auditory abilities after cochlear implantation in adults with unilateral deafness: a pilot study. Otol Neurotol 33 (08) 1339-1446
  • Friedmann DR, Ahmed OH, McMenomey SO, Shapiro WH, Waltzman SB, Roland Jr JT. 2016; Single-sided deafness cochlear implantation: candidacy, evaluation, and outcomes in children and adults. Otol Neurotol 37 (02) e154-160
  • Giolas TG, Wark DJ. 1967; Communication problems associated with unilateral hearing loss. J Speech Hear Disord 32 (04) 336-343
  • Hey M, Anft D, Hocke T, Scholz G, Hessel H, Begall K. 2009; [Influence of mixing ratios of a FM-system on speech understanding of CI-users]. Laryngorhinootologie 88 (05) 315-321
  • Jacob R, Stelzig Y, Nopp P, Schleich P. 2011; Audiologische Ergebnisse mit Cochlear implant bei einseitiger Taubheit. HNO 59 (05) 453-460
  • Lieu JE. 2004; Speech-language and educational consequences of unilateral hearing loss in children. Arch Otolaryngol Head Neck Surg 130 (05) 524-530
  • Mülder HE, Smaka C. 2013 Interview with Dr. Hans E. Mülder, Director Marketing and Senior Audiologist at Phonak Communications, Phonak Headquarters, Switzerland. AudiologyOnline (Online). http://www.audiologyonline.com/interviews/interview-with-drs-hans-e-11727 . Accessed June 20, 2017
  • Phonak AG. 2013 Phonak Insight|Roger Pen–Bridging the understanding gap. 028-0933-02/V1.00/2013-09/8G/ (Online). https://www.phonakpro.com/content/dam/phonakpro/gc_hq/en/resources/evidence/white_paper/documents/Insight_Roger_Pen_028-0933.pdf . Accessed June 20, 2017
  • Phonak AG. 2014 Fitting Guide Roger and Cochlear sound processors Nucleus 5 and Nucleus 6. (Online). https://www.phonakpro.com/content/dam/phonak/gc_hq/b2b/en/products/roger/receivers/_downloads/Fitting_Guide_Roger_Cochlear_Nucleus.pdf . Accessed August 4, 2015
  • R Core Team 2014. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; (Online). https://www.R-project.org/ . Accessed June 20, 2017
  • Schafer EC, Thibodeau LM. 2006; Speech recognition in noise in children with cochlear implants while listening in bilateral, bimodal, and FM system arrangements. Am J Audiol 15 (02) 114-126
  • Távora-Vieira D, De Ceulaer G, Govaerts PJ, Rajan GP. 2015; Cochlear implantation improves localization ability in patients with unilateral deafness. Ear Hear 36 (03) e93-98
  • Thibodeau L. 2014; Comparison of speech recognition with adaptive digital and FM remote microphone hearing assistance technology by listeners who use hearing aids. Am J Audiol 23 (02) 201-210
  • Van de Heyning P, Vermeire K, Diebl M, Nopp P, Anderson I, De Ridder D. 2008; Incapacitating unilateral tinnitus in single-sided deafness treated by cochlear implantation. Ann Otol Rhinol Laryngol 117 (09) 645-652
  • Vermeire K, Van de Heyning P. 2009; Binaural hearing after cochlear implantation in subjects with unilateral sensorineural deafness and tinnitus. Audiol Neurootol 14 (03) 163-171
  • Vincent C, Arndt S, Firszt JB, Fraysse B, Kitterick PT, Papsin BC, Snik A, Van de Heyning P, Deguine O, Marx M. 2015; Identification and evaluation of cochlear implant candidates with asymmetrical hearing loss. Audiol Neurootol 20 (1, Suppl) 87-89
  • Vroegop JL, Dingemanse JG, Homans NC, Goedegebure A. 2017; Evaluation of a wireless remote microphone in bimodal cochlear implant recipients. Int J Audiol 56 (09) 643-649
  • Wagener K, Kühnel V, Kollmeier B. 1999; a Entwicklung und Evaluation eines Satztests in deutscher Sprache I: Design des Oldenburger Satztests. Z Audiol 38 (01) 4-15
  • Wagener K, Brand T, Kollmeier B. 1999; b Entwicklung und Evaluation eines Satztests in deutscher Sprache III: Evaluation des Oldenburger Satztests. Z Audiol 38 (03) 86-95
  • Wie OB, Pripp AH, Tvete O. 2010; Unilateral deafness in adults: effects on communication and social interaction. Ann Otol Rhinol Laryngol 119 (11) 772-781
  • Wolfe J, Morais M, Schafer E, Mills E, Mülder HE, Goldbeck F, Marquis F, John A, Hudson M, Peters BR, Lianos L. 2013; Evaluation of speech recognition of cochlear implant recipients using a personal digital adaptive radio frequency system. J Am Acad Audiol 24 (08) 714-724

