DOI: 10.1055/a-2265-9418
Measuring Hearing Aid Satisfaction in Everyday Listening Situations: Retrospective and In Situ Assessments Complement Each Other
Abstract
Background Recently, we developed a hearing-related lifestyle questionnaire (HEARLI-Q), which asks respondents to rate their hearing aid (HA) satisfaction in 23 everyday listening situations. It is unknown how HA satisfaction on the retrospective HEARLI-Q scale compares with HA satisfaction measured on the same scale implemented in Ecological Momentary Assessment (EMA).
Purpose To learn how retrospective (HEARLI-Q) and in situ (EMA) assessments can complement each other.
Research Design An observational study.
Study Sample Twenty-one experienced HA users.
Data Collection and Analysis The participants first filled out the HEARLI-Q questionnaire, followed by a 1-week EMA trial using their own HAs. HA satisfaction ratings were compared between the two questionnaires, and the underlying drivers of discrepancies in HA satisfaction ratings were evaluated.
Results HA satisfaction ratings were significantly higher in EMA for speech communication with one or several people. Hearing difficulty in these situations was rated higher in HEARLI-Q than in EMA, but the occurrence of these difficult listening situations was also rated lower. When only the situations that occur on a daily or weekly basis were compared, the two questionnaires yielded similar HA satisfaction ratings.
Conclusions Lower occurrence of difficult listening situations seems to be the key driver of discrepancies in HA satisfaction ratings between EMA and HEARLI-Q. The advantage of EMA is that it provides insight into an individual's day-to-day life and is not prone to memory bias. HEARLI-Q, on the other hand, can capture situations that occur infrequently or are inconvenient to report in the moment. Administering HEARLI-Q and EMA in combination could give a more holistic view of HA satisfaction.
Overall hearing aid (HA) satisfaction depends on the individually weighted improvements one perceives with HA treatment. Interpretations of HA satisfaction can vary considerably across individuals, and people may use different criteria when judging whether they are satisfied with their HAs.[1] For example, one might think about comfort of fit, streaming possibilities/connectivity, speech understanding, or different aspects of sound quality. Moreover, certain situations may be given more weight than others based on frequency of occurrence, importance, severity of hearing difficulty, recency, or relationship to the present,[2] which can bias the overall HA satisfaction rating.
We developed the hearing-related lifestyle questionnaire (HEARLI-Q),[3] which asks participants to rate 23 everyday listening situations on frequency of occurrence, importance to hear well, difficulty to hear, and HA satisfaction. The HEARLI-Q situations are grouped in seven listening task categories: speech communication (two people, more than two people, and through device), focused listening (live sounds, through media device), and nonspecific (monitoring surroundings and passive listening), as defined by the Common Sound Scenarios (CoSS) framework.[4] In a study where the HEARLI-Q was administered four times (days 1, 2, 15, and 29), we found that experienced HA users' responses were reliable across short (day 1 vs. day 2) and longer (day 1 vs. day 15 or 29) time spans.[3] Nonetheless, HEARLI-Q is a retrospective questionnaire and, as such, is at risk of memory bias.[5] [6] Ecological Momentary Assessment (EMA), on the other hand, relies on participants repeatedly reporting in their momentary listening environment, reducing memory bias.[7] Because of this, EMA has recently gained traction in the audiology community as a way to evaluate HAs in real life.[8] However, to balance high-quality data against participant burden, EMA in hearing research is normally not administered for longer than four weeks, and on most occasions the period is shorter, typically one to two weeks. This means that situations that occur less often than once per EMA period may not be represented. Moreover, since EMA requires participants to answer in the moment, some situations may be unsafe (e.g., when driving) or inappropriate (e.g., in the middle of an important meeting) to report in. Consequently, certain situations are underrepresented in EMA.[9] [10] Participants may also be less likely to interact with the EMA application in a moment that is already challenging,[11] and these situations are often the most interesting when evaluating HAs.
One way to include assessments of those challenging situations in EMA is to ask participants to rate HA performance based on a short retrospective period.[12] [13] [14] [15] For example, Wu et al.[15] asked their participants to answer an EMA-prompted Glasgow Hearing Aid Benefit Profile (GHABP) questionnaire every 1.5 hours, based on the preceding 1.5 hours, for one week. The authors compared these EMA ratings to those obtained by the standard retrospective GHABP to assess outcomes of two different HAs. While both the EMA-based (in situ) GHABP and the retrospective GHABP showed a significant difference between the two HAs on the satisfaction subscale, only the EMA-based GHABP showed significant differences on the benefit and residual disability subscales.
Recently, we conducted a study in which we utilized both HEARLI-Q and EMA, based on the same questions and response alternatives, to investigate the effect of a “positive focus” intervention on HA satisfaction and benefit.[16] Similar to Wu et al.,[15] we observed comparable intervention effects on HA satisfaction between the retrospective and in situ questionnaires: participants who were asked to focus on positive listening experiences for two weeks after HA fitting had higher ratings relative to the control group. Although comparing HA satisfaction ratings between questionnaires was not the aim of that study, EMA ratings tended to be higher than HEARLI-Q ratings. In other words, although both questionnaires were sensitive enough to detect a difference due to the intervention, when evaluating HA performance, the satisfaction rating itself may be over- or underestimated depending on the type of questionnaire used.
In the current study, we set out to investigate how HA satisfaction ratings on the HEARLI-Q scale compare with HA satisfaction measured on the same scale implemented in an EMA trial. The goal was to compare both overall HA satisfaction and CoSS listening task-specific HA satisfaction. Where significant discrepancies were observed, we investigated what drove these inconsistencies in ratings. The overarching aim was to learn how the two types of assessments can complement each other to give a holistic view of HA experience.
Methods
Ethical clearance for conducting the study was obtained from the Research Ethics Committee of the Capital Region of Denmark (case no. H-18056647). The data in the current work were collected during August to November 2021.
Participants
Twenty-one HA users with mild-to-moderate hearing loss were enrolled in the study (6 females, 15 males; average age 66 years, standard deviation [SD]: 7 years). All the participants were experienced HA users (>1 year), smartphone users, and fluent in Danish. Twenty participants reported using their HAs all day, whereas one reported using them sporadically. Ten participants were retired at the time of the study. The exclusion criterion was severe cognitive impairment that would preclude the ability to perform the necessary tasks, as judged by the audiologist who did the recruiting. The participants were recruited through an internal database of participants via phone or e-mail. They were informed about the study orally and in writing. Before the trial commenced, the participants gave their informed consent in writing.
Study Design
The participants were asked to download and install the MyHearingExperience app (Lenox UG, Herrsching, Germany), which is available for iOS and Android, on their own smartphones. Each participant was provided with a unique study log-in code for the app and instructed to fill out the HEARLI-Q, available in the app, on the day of the initial log-in. Starting the day after, the participants were prompted to answer an EMA questionnaire every two hours between 9:00 a.m. and 9:00 p.m. over seven days; that is, the total number of prompts was 49. The EMA questionnaire remained available from the initial prompt until the participant completed it or the next questionnaire was prompted (2 hours later, or the next morning for the 9:00 p.m. questionnaire). The EMA questionnaire asked the participants to indicate their location, listening task (same categories as in the HEARLI-Q), and the noisiness of the situation. Further, the participants were asked to rate the frequency of occurrence, importance to hear well, hearing difficulty, and HA satisfaction in the situation on the same scale as HEARLI-Q. The specific EMA questions and response alternatives are outlined in [Supplementary Table S1] (available in online version only). The full HEARLI-Q questionnaire can be found in the [Supplementary Materials] of Lelic et al.[3]
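For concreteness, the fixed-interval sampling scheme can be sketched as follows. The snippet below simply enumerates the prompt times; the start date is arbitrary, and the code is an illustration, not part of the study software.

```python
from datetime import datetime, timedelta

# Enumerate the fixed-interval EMA prompts: every 2 hours between
# 9:00 a.m. and 9:00 p.m. over 7 days. The start date is arbitrary.
first_prompt = datetime(2021, 8, 2, 9, 0)
prompts = [
    first_prompt + timedelta(days=day, hours=2 * slot)
    for day in range(7)   # 7 trial days
    for slot in range(7)  # 9:00, 11:00, ..., 21:00
]
assert len(prompts) == 49  # matches the 49 prompts stated above
```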
The participants wore their own HAs throughout the trial period, and all the ratings were based on their experiences with their own devices.
Data Analysis
Based on 1,000 simulations of HEARLI-Q HA satisfaction data with mean = 3.4 and SD = 0.6, 21 participants would enable detection of a 0.5-scale-point difference with approximately 80% power using mixed-effects linear regression. The mean and SD were taken from existing internal HEARLI-Q data.
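As an illustration, a simulation-based power check of this kind could look like the following Python sketch (the study's analyses were run in Stata; see below). The split of the total SD of 0.6 into between- and within-participant components is our assumption, as it is not reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, delta, n_sims = 21, 0.5, 1000

# Assumed split of the total SD of 0.6 into between- and within-participant
# components (not reported in the paper): sqrt(0.25**2 + 0.545**2) ~= 0.6.
sd_between, sd_within = 0.25, 0.545

hits = 0
for _ in range(n_sims):
    intercepts = rng.normal(3.4, sd_between, n_subj)
    df = pd.DataFrame({
        "subj": np.repeat(np.arange(n_subj), 2),
        "method": np.tile([0, 1], n_subj),  # two paired assessments
    })
    df["satisfaction"] = (
        intercepts[df["subj"]] + delta * df["method"]
        + rng.normal(0, sd_within, len(df))
    )
    # Random-intercept model mirroring the analysis named above; with two
    # paired observations per subject this is equivalent to a paired t-test.
    fit = smf.mixedlm("satisfaction ~ method", df, groups=df["subj"]).fit()
    hits += fit.pvalues["method"] < 0.05

print(f"Estimated power: {hits / n_sims:.2f}")  # ~0.80 under these assumptions
```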
Mixed-effects linear regression with a random intercept for participant was conducted to analyze predictors of HA satisfaction ratings within each of the two methods. The dependent variable was HA satisfaction, and the covariates were frequency of occurrence, importance to hear well, and difficulty to hear.
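A minimal sketch of this model, using statsmodels with dummy data and hypothetical column names (again, the study's analyses were run in Stata):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Dummy data standing in for the real ratings; column names are hypothetical.
rng = np.random.default_rng(0)
n = 21 * 23  # 21 participants x 23 HEARLI-Q situations
ratings = pd.DataFrame({
    "participant": np.repeat(np.arange(21), 23),
    "occurrence": rng.integers(1, 6, n),
    "importance": rng.integers(1, 6, n),
    "difficulty": rng.integers(1, 6, n),
})
# A built-in negative difficulty effect so the toy fit has something to find.
ratings["satisfaction"] = 5.0 - 0.5 * ratings["difficulty"] + rng.normal(0, 0.5, n)

# HA satisfaction regressed on occurrence, importance, and difficulty,
# with a random intercept per participant.
fit = smf.mixedlm(
    "satisfaction ~ occurrence + importance + difficulty",
    ratings, groups=ratings["participant"],
).fit()
print(fit.summary())
```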
For comparison between methods, the overall HEARLI-Q HA satisfaction rating was calculated by averaging the satisfaction ratings of individual situations, as described in Lelic et al.[3] HEARLI-Q HA satisfaction ratings were further averaged for each of the seven CoSS task categories. To ensure that the results of EMA and HEARLI-Q could be directly compared and analyzed in the same statistical model, EMA overall and CoSS task-specific HA satisfaction ratings were also calculated by averaging the ratings within each individual.
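The aggregation amounts to within-participant averaging; a toy sketch with hypothetical column names:

```python
import pandas as pd

# Toy EMA reports; column names are hypothetical.
ema = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "coss_task": ["one_person", "media", "one_person",
                  "several_people", "media", "one_person"],
    "satisfaction": [4, 3, 5, 4, 5, 3],
})

# Average within participant (overall) and within participant x CoSS task,
# so EMA ratings are aggregated the same way as the HEARLI-Q ratings.
overall = ema.groupby("participant")["satisfaction"].mean()
by_task = ema.groupby(["participant", "coss_task"])["satisfaction"].mean()
print(overall, by_task, sep="\n\n")
```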
Repeated measures analysis of variance (ANOVA) was conducted to compare the overall and CoSS task-specific HA satisfaction ratings between HEARLI-Q and EMA. For those CoSS task categories where there were significant differences between HEARLI-Q and EMA, further analyses were done to understand what these differences can be attributed to. Specific details of these analyses are reported in results/[Supplementary Material] (available in online version only). Correlation between the HEARLI-Q and EMA HA satisfaction ratings was analyzed using Pearson's correlation.
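A sketch of both tests on paired per-participant means, with toy values and hypothetical column names (the study's analyses were run in Stata):

```python
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

# Paired per-participant satisfaction means; the values are toy numbers.
long = pd.DataFrame({
    "participant": list(range(6)) * 2,
    "method": ["HEARLI-Q"] * 6 + ["EMA"] * 6,
    "satisfaction": [3.1, 3.5, 3.9, 2.8, 3.6, 3.3,
                     3.4, 3.6, 3.8, 3.2, 3.5, 3.6],
})

# Repeated measures ANOVA: HEARLI-Q vs. EMA as the within-subject factor.
print(AnovaRM(long, depvar="satisfaction", subject="participant",
              within=["method"]).fit())

# Pearson correlation between the paired per-participant ratings.
wide = long.pivot(index="participant", columns="method", values="satisfaction")
r, p = pearsonr(wide["HEARLI-Q"], wide["EMA"])
print(f"r = {r:.2f}, p = {p:.3f}")
```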
The dependent variables subjected to ANOVA were visually inspected to ensure that they approximately followed a normal distribution. If not, the ladder of powers was applied to transform the data. The residuals from all the mixed-effects linear regression analyses were visually inspected for normality.
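Stata's ladder command evaluates the standard ladder-of-powers transformations against a normality test; a rough Python analogue, here scoring candidates with the Shapiro-Wilk W statistic instead of Stata's sktest-based chi-square, might look like this:

```python
import numpy as np
from scipy import stats

def ladder_of_powers(x):
    """Pick the ladder-of-powers transform of x that maximizes the
    Shapiro-Wilk W statistic. Assumes strictly positive data."""
    candidates = {
        "1/x^2": lambda v: 1.0 / v**2,
        "1/x": lambda v: 1.0 / v,
        "1/sqrt(x)": lambda v: 1.0 / np.sqrt(v),
        "log(x)": np.log,
        "sqrt(x)": np.sqrt,
        "x": lambda v: v,
        "x^2": lambda v: v**2,
        "x^3": lambda v: v**3,
    }
    scores = {name: stats.shapiro(f(x)).statistic for name, f in candidates.items()}
    best = max(scores, key=scores.get)
    return best, candidates[best](x)

# Example: a right-skewed toy sample is pulled toward normality.
rng = np.random.default_rng(0)
best_name, transformed = ladder_of_powers(rng.lognormal(1.2, 0.4, size=200))
print(best_name)  # typically "log(x)" for lognormal data
```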
All the statistical analyses were done in Stata (v. 15, StataCorp, College Station, TX).
Results
Ecological Momentary Assessment Compliance and Location/Listening Task Distribution Across Reports
The total number of collected reports was 845. EMA compliance was on average 82% (SD: 15%; submitted reports/prompted reports). Participants responded on average within 21 minutes of the prompt ([Supplementary Fig. S1], available in online version only). Most of the reports were filled out in the home environment without a conversation/focused listening activity, followed by one-on-one conversation and focused listening (media; [Supplementary Fig. S2], available in online version only).
Hearing Aid Satisfaction, Hearing Aid Use, and Predictors of Hearing Aid Satisfaction
The entire range of the HA satisfaction scale was used in both EMA and HEARLI-Q, although most responses were in the moderately-to-very-satisfied range (87% of responses in EMA and 72% of responses in HEARLI-Q). In HEARLI-Q, two participants indicated that they do not wear their HAs when “Passive listening—Hearing sounds of nature.” In the EMA questionnaire, 82 reports from 16 participants were submitted while they were not wearing their HAs: 63 of those reports were in a “situation without a conversation or focused listening,” eight in “conversation with one person,” two in “conversation with several people,” one in “conversation over phone or another technical device,” and two in a “focused listening (media)” situation.
Hearing difficulty was an independent predictor of HA satisfaction in both questionnaires, where satisfaction decreased with increased hearing difficulty ([Table 1]).
[Table 1: Predictors of HA satisfaction (mixed-effects linear regression). Abbreviations: CI, confidence interval; EMA, Ecological Momentary Assessment; HA, hearing aid; HEARLI-Q, hearing-related lifestyle questionnaire. Note: Significant p-values are shown in bold.]
Comparison of Hearing Aid Satisfaction Ratings
The overall HA satisfaction was rated 3.4 ± 0.6 in HEARLI-Q and 3.5 ± 0.4 in EMA (F(1,20) = 2.06, p = 0.17). The overall HA satisfaction ratings were positively correlated between the two questionnaires (r = 0.58, p < 0.01). Comparison of HA satisfaction ratings for individual CoSS task categories is presented in [Fig. 1]. Not all the participants contributed ratings for “Speech Communication—Through Device” and “Focused Listening—Live Sounds” in either questionnaire, and the number of contributing participants was lower in EMA than in HEARLI-Q for both CoSS task categories. Eighteen participants contributed HA satisfaction ratings for “Focused Listening—Through Media” in EMA. HA satisfaction was significantly higher in EMA for “Speech Communication—One Person” (F(1,20) = 7.88, p = 0.01) and “Speech Communication—Several People” (F(1,20) = 5.38, p = 0.03). There was no significant difference between the two questionnaires for “Speech Communication—Through Device” (F(1,9) = 0.04, p = 0.84), “Focused Listening—Live Sounds” (F(1,9) = 0.31, p = 0.59), “Focused Listening—Through Media” (F(1,17) = 3.85, p = 0.07), or “No Conversation/No Focused Listening” (F(1,20) = 3.93, p = 0.06).
Reasons for Higher Hearing Aid Satisfaction Ratings in Ecological Momentary Assessment
How do Occurrence and Hearing Difficulty Compare between Hearing-Related Lifestyle Questionnaire and Ecological Momentary Assessment?
[Fig. 2] shows occurrence, hearing difficulty, and HA satisfaction ratings for the two CoSS task categories where HA satisfaction was rated higher in EMA than in HEARLI-Q. For both categories, speech communication in quiet was rated similarly in HEARLI-Q and EMA on occurrence, hearing difficulty, and HA satisfaction. On the other hand, for situations where there was some background noise, occurrence was lower, hearing difficulty was higher, and HA satisfaction was lower in HEARLI-Q than in EMA (all ps < 0.001; see [Supplementary Table S2], available in online version only, for detailed statistics). The situations in the “noise” category for HEARLI-Q were combined based on visual inspection of the data, which showed similar contrasts to EMA for the individual situations.
Are Difficult Listening Situations with Disturbing Background Noise Captured in Ecological Momentary Assessment?
To assess whether difficult listening situations with disturbing background noise are captured in EMA, EMA ratings for situations without noise were compared with those where nondisturbing and disturbing background noise was present ([Fig. 3]). Hearing difficulty was rated significantly higher in situations where disturbing background noise was present (p < 0.001 for communication with one person and p = 0.03 for communication with several people), and HA satisfaction was lower in these situations (all ps < 0.001; see [Supplementary Table S3], available in online version only, for detailed statistics). Neither hearing difficulty nor HA satisfaction ratings in situations with disturbing background noise were significantly different from the ratings in “HEARLI-Q (noise)” presented in [Fig. 2] (detailed statistics can be seen in [Supplementary Table S4], available in online version only). However, although represented in EMA, these difficult and less satisfactory situations with disturbing background noise were reported by only 10 participants for “Speech Communication—One Person” and 12 participants for “Speech Communication—Several People.”
How do Hearing Aid Satisfaction Ratings Compare between Hearing-Related Lifestyle Questionnaire and Ecological Momentary Assessment when Only Frequently Occurring Situations are Analyzed?
When comparing only those situations where participants indicated the occurrence to be daily or weekly, there was not a significant difference in HA satisfaction ratings between the two questionnaires (β = −0.06 [95% confidence interval, CI: −0.24, 0.12], p = 0.53, mixed-effects linear regression). Hearing difficulty was, however, still lower in EMA (β = −0.26 [95% CI: −0.44, −0.07], p < 0.01, mixed-effects linear regression). These effects were mainly attributed to “Speech Communication—One Person,” “Speech Communication—Several People,” and “No Conversation/No Focused Listening.” Detailed statistics for individual CoSS task categories are presented in [Supplementary Table S5] (available in online version only). In [Fig. 4], it can be seen that hearing difficulty ratings were well below moderate.
How do Hearing Difficulty Ratings in Hearing-Related Lifestyle Questionnaire Compare between Frequently and Infrequently Occurring Situations?
HEARLI-Q hearing difficulty ratings for daily/weekly situations were significantly lower than HEARLI-Q hearing difficulty ratings when including all the listening situations or when comparing to listening situations that occur more seldom than on a weekly basis (frequent vs. all situations: β = 0.24 [95% CI: 0.06, 0.43], p = 0.01; frequent vs. infrequent situations: β = 0.54 [95% CI: 0.34, 0.74], p < 0.001, mixed-effects linear regression). See [Fig. 5] and [Supplementary Table S6] (available in online version only) for detailed statistics within individual CoSS task categories.
Discussion
The results of the current study indicate that the overall HA satisfaction ratings are similar between the retrospective HEARLI-Q and in situ EMA. On the other hand, when comparing the HA satisfaction ratings within individual CoSS task categories, it is evident that ratings are higher in EMA for in-person conversations.
Although EMA has clear advantages over retrospective questionnaires, certain situations are known to be underrepresented in it, specifically those pertaining to social interactions and noisy environments.[9] [10] In fact, the distribution of listening environments in which participants filled out EMA questionnaires points to easier/more familiar listening situations, such as home environments without a conversation or focused listening, followed by one-on-one conversations and media listening. This is in line with auditory reality patterns shown in previous EMA studies.[9] [17] [18] [19] Moreover, the EMA data in the current work show that difficult situations with disturbing background noise, while represented, were seen less frequently. From the EMA data alone, it is unclear whether those more challenging listening situations occurred less frequently or participants did not fill out the questionnaire in those moments. However, when comparing only the situations that occur on a daily or weekly basis, the HA satisfaction ratings between the two questionnaires were comparable. Additional analysis revealed that hearing difficulty ratings increased with decreased occurrence in HEARLI-Q. Hence, lower occurrence of difficult listening situations seems to be driving the discrepancy in HA satisfaction ratings between the HEARLI-Q and EMA.
If more difficult listening situations occur less frequently, then the likelihood of capturing them in time-limited EMA is lower. And if it is true that people spend most of their time in easier/more familiar listening situations, and this is what is reflected in EMA, then EMA alone may well be sufficient to accurately model the dynamics of one's daily life. It has been shown that when a question pertains to a frequent behavior, participants are less likely to have a detailed representation of each event in memory; rather, events are grouped into one global representation without traits specific to the individual events.[20] In this case, EMA has a clear advantage over HEARLI-Q because nuances of those frequent episodes would be detected. However, a better understanding of the less frequent and more difficult listening situations could give relevant insights into where improvements in hearing care can be made. This in turn could lead to people spending time in situations that they otherwise avoid.
Two previous studies compared retrospective questionnaire responses about HA satisfaction and benefit to EMA. In a study investigating real-life benefit from “noise management” processing, Andersson et al.[21] compared SSQ-12 ratings with EMA. The authors found that EMA data were able to provide insights into the more specific listening environments where participants experienced benefit. Wu et al.,[15] on the other hand, compared retrospective GHABP scores with those captured by EMA for two HAs. They found that GHABP-EMA was significantly different between the two HAs on all the subscales, whereas the retrospective questionnaire showed a significant difference only on the satisfaction subscale. The authors of both studies discuss their findings in light of EMA being more sensitive in detecting differences in HA performance. We have shown that both HEARLI-Q and EMA are sensitive enough to detect the effect of an applied intervention on HA satisfaction,[16] and as such, we do not consider one method superior to the other in this respect but rather want to discuss their potential to complement each other. For example, “Focused Listening—Live Sounds” was rated by 13 participants in EMA, whereas only one participant in HEARLI-Q indicated such situations to occur on a daily or weekly basis. This is potentially a consequence of the trial period: the country was just starting to open up again during the coronavirus disease 2019 (COVID-19) pandemic, and the participants presumably started to attend more live events, which is reflected in EMA. When rating in HEARLI-Q, on the other hand, they reflected on the past year, which did not have many such events. Similarly, all 21 participants indicated in HEARLI-Q that they experience “Speech Communication—Through Device” on a daily/weekly basis, but this situation was reported by only nine participants in EMA. This is a situation that typically does not last very long and hence is less likely to be captured by a 2-hour sampling scheme; even if a prompt does align with the event, it may be considered inappropriate to fill out an EMA questionnaire during it. As such, it is not surprising that speech communication through a device is underrepresented in EMA. In these two examples of live sounds and communication through a device, we captured complementary information about participants' everyday lives that we would not have obtained with either of the two questionnaires alone.
It is promising that HEARLI-Q and EMA paint a similar picture: HA satisfaction ratings are highly correlated between the two questionnaires, and hearing difficulty is the key predictor of HA satisfaction in both. It is also noteworthy that the two questionnaires offer complementary information. When rating HA satisfaction in HEARLI-Q, two participants indicated that they do not wear their HAs when “Passive listening—Hearing sounds of nature.” In EMA, conversely, there were many more situations where participants indicated they were not wearing their HAs. In this respect, EMA offers a more refined insight into day-to-day dynamics and how people use their devices. While it is likely that people do not always take their HAs off in those situations, with EMA we can see how often people tend to take their HAs off and when. It is also true that participants are likely to give a more accurate rating of the current situation in EMA, as they are basing it on their experience right now rather than relying on their memory. In contrast, when filling out the HEARLI-Q, one gets a description of the situation to assess but still must remember a real-life situation that matches the questionnaire item. That process might overemphasize negative experiences, as those are more memorable and more salient than positive experiences in retrospective reports.[22] For example, we observed a borderline significant contrast in HA satisfaction ratings for the “No Conversation/No Focused Listening” category, where HA satisfaction tended to be higher in EMA. These situations without conversation/focused listening could include vacuum cleaning, washing dishes, operating machinery, etc.: noisy situations where one would need to stop the activity to answer an EMA questionnaire. As such, these situations might be underrepresented. On the other hand, when filling out the HEARLI-Q, the participant is likely to think back to situations that stand out (e.g., of higher hearing difficulty). That is, we potentially have a slight overestimation of hearing difficulty in HEARLI-Q and a slight underrepresentation of hearing difficulty in EMA, with the truth lying somewhere in between.
It may not always be possible to administer both in situ and retrospective questionnaires to get insights from both angles. The limitation of EMA not capturing infrequently occurring or especially difficult situations can be overcome by asking participants to seek out and report in the types of situations that are relevant for the research questions, as done in Lelic et al.,[11] or by asking for EMA ratings based on a short retrospective period.[12] [13] [14] [15] HEARLI-Q can also be administered more frequently in a longitudinal fashion, so that participants do not need to think too far back; this can, to an extent, reduce the amount of guessing and estimation related to long reference periods.[23] [24]
While providing meaningful insights into how retrospective and in situ satisfaction ratings compare, the results reported here most likely depend on the EMA sampling scheme. In the current study, we allowed participants to answer the EMA questionnaire anytime between two consecutive prompts, which potentially allowed for more selection bias than in studies that keep the prompt open for, e.g., up to 15 minutes. That is, some situations may appear to “not occur” because they were inconvenient to report in at the moment of the prompt, and by the time the participant was able to answer, they were in a completely different situation. The sampling scheme employed in this study is also a likely reason for the high compliance rate. Further, the results of this study are based on experienced HA users. Although it has been previously shown that HEARLI-Q and aggregated EMA responses are stable over time in this population,[3] [25] HA satisfaction ratings can be expected to vary over time in new HA users as they get used to their devices.[26] Thus, when considering new HA users, we may see a different contrast in satisfaction ratings within and between the two questionnaires depending on the time of administration relative to HA fitting.
Conclusion
The overall satisfaction ratings are similar between HEARLI-Q and EMA. The satisfaction ratings for one-on-one and group conversations are, on the other hand, higher in EMA. Lower occurrence of difficult listening situations seems to be the key driver of discrepancies in HA satisfaction ratings between the two questionnaires. The advantage of EMA is that it provides insight into an individual's day-to-day life and is less prone to memory bias than HEARLI-Q. HEARLI-Q, on the other hand, can capture those difficult situations that occur infrequently or are inconvenient to report in the moment. The high correlation between the two questionnaires is promising because it may not always be feasible to administer both types of questionnaires. In this case, HEARLI-Q can provide a reasonable assessment of HA satisfaction. Otherwise, when possible, administering HEARLI-Q and EMA in combination has the potential to give a complementary and more holistic view of HA satisfaction.
Disclaimer
Any mention of a product, service, or procedure in the Journal of the American Academy of Audiology does not constitute an endorsement of the product, service, or procedure by the American Academy of Audiology.
Conflict of Interest
All authors are employees of WS Audiology.
References
- 1 Wong LL, Hickson L, McPherson B. Hearing aid satisfaction: what does research from the past 20 years say? Trends Amplif 2003; 7 (04) 117-161
- 2 Hipp L, Bünning M, Munnes S, Sauermann A. Problems and pitfalls of retrospective survey questions in COVID-19 studies. Surv Res Methods 2020; 14 (02) 109-114
- 3 Lelic D, Wolters F, Herrlin P, Smeds K. Assessment of hearing-related lifestyle based on the common sound scenarios framework. Am J Audiol 2022; 31 (04) 1299-1311
- 4 Wolters F, Smeds K, Schmidt E, Christensen EK, Norup C. Common sound scenarios: a context-driven categorization of everyday sound environments for application in hearing-device research. J Am Acad Audiol 2016; 27 (07) 527-540
- 5 Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc 2016; 9: 211-217
- 6 Bradburn NM, Rips LJ, Shevell SK. Answering autobiographical questions: the impact of memory and inference on surveys. Science 1987; 236 (4798) 157-161
- 7 Stone AA, Shiffman S. Capturing momentary, self-report data: a proposal for reporting guidelines. Ann Behav Med 2002; 24 (03) 236-243
- 8 Holube I, von Gablenz P, Bitzer J. Ecological momentary assessment in hearing research: current state, challenges, and future directions. Ear Hear 2020; 41 (Suppl. 01) 79S-90S
- 9 Schinkel-Bielefeld N, Kunz P, Zutz A, Buder B. Evaluation of hearing aids in everyday life using ecological momentary assessment: what situations are we missing? Am J Audiol 2020; 29 (3S): 591-609
- 10 Wu Y-H, Xu J, Stangl E, et al. Why ecological momentary assessment surveys go incomplete: when it happens and how it impacts data. J Am Acad Audiol 2021; 32 (01) 16-26
- 11 Lelic D, Nielsen J, Parker D, Marchman Rønne F. Critical hearing experiences manifest differently across individuals: insights from hearing aid data captured in real-life moments. Int J Audiol 2022; 61 (05) 428-436
- 12 Janssens KAM, Bos EH, Rosmalen JGM, Wichers MC, Riese H. A qualitative approach to guide choices for designing a diary study. BMC Med Res Methodol 2018; 18 (01) 140
- 13 Beal DJ, Weiss HM. Methods of ecological momentary assessment in organizational research. Organ Res Methods 2003; 6 (04) 440-464
- 14 Galvez G, Turbin MB, Thielman EJ, Istvan JA, Andrews JA, Henry JA. Feasibility of ecological momentary assessment of hearing difficulties encountered by hearing aid users. Ear Hear 2012; 33 (04) 497-507
- 15 Wu YH, Stangl E, Chipara O, Gudjonsdottir A, Oleson J, Bentler R. Comparison of in-situ and retrospective self-reports on assessing hearing aid outcomes. J Am Acad Audiol 2020; 31 (10) 746-762
- 16 Lelic D, Parker D, Herrlin P, Wolters F, Smeds K. Focusing on positive listening experiences improves hearing aid outcomes in experienced hearing aid users. Int J Audiol 2023; 63 (06) 420-430
- 17 von Gablenz P, Kowalk U, Bitzer J, Meis M, Holube I. Individual hearing aid benefit in real life evaluated using ecological momentary assessment. Trends Hear 2021; 25: 2331216521990288
- 18 Jensen NS, Hau O, Lelic D, Herrlin P, Wolters F, Smeds K. Evaluation of auditory reality and hearing aids using an ecological momentary assessment (EMA) approach. Proceedings of the 23rd International Congress on Acoustics. 2019
- 19 Smeds K, Gotowiec S, Wolters F, Herrlin P, Larsson J, Dahlquist M. Selecting scenarios for hearing-related laboratory testing. Ear Hear 2020; 41 (Suppl. 01) 20S-30S
- 20 Schwarz N, Oyserman D. Asking questions about behavior: cognition, communication, and questionnaire construction. Am J Eval 2001; 22 (02) 127-160
- 21 Andersson KE, Andersen LS, Christensen JH, Neher T. Assessing real-life benefit from hearing-aid noise management: SSQ12 questionnaire versus ecological momentary assessment with acoustic data-logging. Am J Audiol 2021; 30 (01) 93-104
- 22 Ganzach Y, Yaor E. The retrospective evaluation of positive and negative affect. Pers Soc Psychol Bull 2019; 45 (01) 93-104
- 23 Brown NR. Encoding, representing, and estimating event frequencies: a multiple strategy perspective. In: Sedlmeier P, Betsch T, eds. Frequency Processing and Cognition. New York: Oxford University Press; 2002
- 24 Blair E, Burton S. Cognitive processes used by survey respondents to answer behavioral frequency questions. J Consum Res 1987; 14 (02) 280-288
- 25 Wu YH, Stangl E, Chipara O, Zhang X. Test-retest reliability of ecological momentary assessment in audiology research. J Am Acad Audiol 2020; 31 (08) 599-612
- 26 Vestergaard MD. Self-report outcome in new hearing-aid users: longitudinal trends and relationships between subjective measures of benefit and satisfaction. Int J Audiol 2006; 45 (07) 382-392
Publication History
Received: 12 April 2023
Accepted: 19 November 2023
Accepted Manuscript online: 09 February 2024
Article published online: 28 November 2024
© 2024. American Academy of Audiology. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA