Appl Clin Inform 2024; 15(02): 204-211
DOI: 10.1055/a-2247-9355
Research Article

Randomized Comparison of Electronic Health Record Alert Types in Eliciting Responses about Prognosis in Gynecologic Oncology Patients

1   Department of Medicine, Duke University Health System, Durham, North Carolina, United States
2   Duke Health Technology Solutions, Durham, North Carolina, United States

Rashaud Senior
2   Duke Health Technology Solutions, Durham, North Carolina, United States
3   Duke Primary Care, Duke University Health System, Durham, North Carolina, United States

Laura J. Havrilesky
4   Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Health System, Durham, North Carolina, United States

Jordan Buuck
2   Duke Health Technology Solutions, Durham, North Carolina, United States

David J. Casarett
5   Section of Palliative Care, Department of Medicine, Duke University Health System, Durham, North Carolina, United States

Salam Ibrahim
6   Duke Health Performance Services, Duke University Health System, Durham, North Carolina, United States

Brittany A. Davidson
4   Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, Duke University Health System, Durham, North Carolina, United States
Funding None.


Abstract

Objectives To compare the ability of different electronic health record (EHR) alert types to elicit responses from users caring for cancer patients who may benefit from goals of care (GOC) conversations.

Methods A validated question asking whether the user would be surprised by the patient's death within 6 months was built as an Epic BestPractice Advisory (BPA) alert in three versions—(1) Required on Open chart (pop-up BPA), (2) Required on Close chart (navigator BPA), and (3) Optional Persistent (Storyboard BPA)—with patients randomized to a version by medical record number. Meaningful responses were defined as "Yes" or "No," rather than deferral. Data were extracted over 6 months.
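The abstract does not specify the exact randomization scheme, but an assignment keyed to the medical record number (MRN) can be sketched as a deterministic hash into the three alert arms. This is a hypothetical illustration, not the authors' implementation; the arm names come from the Methods above.

```python
# Hypothetical sketch of MRN-based randomization into three alert arms.
# The paper states only that randomization used the patient MRN; the
# hashing scheme below is an assumption for illustration.
import hashlib

ALERT_TYPES = [
    "Required on Open",     # interruptive pop-up BPA
    "Required on Close",    # navigator BPA
    "Optional Persistent",  # Storyboard BPA
]

def assign_alert_type(mrn: str) -> str:
    """Deterministically map an MRN to one of the three alert arms.

    Hashing the MRN (rather than, say, taking the raw number modulo 3)
    avoids bias from any structure in how MRNs are issued, and keeps a
    patient in the same arm across all of their encounters.
    """
    digest = hashlib.sha256(mrn.encode("utf-8")).hexdigest()
    return ALERT_TYPES[int(digest, 16) % len(ALERT_TYPES)]
```

Because the mapping is a pure function of the MRN, every encounter for a given patient presents the same alert version, which is what allows response rates to be compared at both the encounter and the alert level.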

Results Alerts appeared for 685 patients during 1,786 outpatient encounters. The proportion of encounters eliciting a meaningful response was highest for Required on Open (94.8%), compared with Required on Close (90.1%) and Optional Persistent (19.7%) (p < 0.001). Among individual alerts that received a response, the response was most likely to be meaningful with Optional Persistent (98.3% of responses) and least likely with Required on Open (68.0%) (p < 0.001). Responses of "No," suggesting poor prognosis and prompting GOC conversations, were more likely with Optional Persistent (13.6%) and Required on Open (10.3%) than with Required on Close (7.0%) (p = 0.028).
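The p-values above are consistent with chi-square tests of independence across the three arms. As a sketch, the statistic can be computed by hand on a 3 × 2 contingency table; the counts below are hypothetical (chosen to roughly match the reported percentages under approximately equal arm sizes), since per-arm encounter totals are not given in the abstract.

```python
# Chi-square test of independence for three alert arms, computed from
# scratch. The contingency table below uses HYPOTHETICAL counts that
# approximate the reported encounter-level rates (94.8%, 90.1%, 19.7%).
def chi_square_statistic(table):
    """table: rows = alert arms, columns = [meaningful, not meaningful]."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 3x2 table: [meaningful, deferral/no response] per arm.
table = [
    [565, 31],   # Required on Open  (~94.8% meaningful)
    [537, 59],   # Required on Close (~90.1% meaningful)
    [117, 477],  # Optional Persistent (~19.7% meaningful)
]
stat = chi_square_statistic(table)
# df = (3 - 1) * (2 - 1) = 2; the critical value at alpha = 0.001 is ~13.82,
# so a statistic far above that threshold corresponds to p < 0.001.
print(stat > 13.82)
```

With rates this far apart, the statistic is orders of magnitude above the critical value, matching the reported p < 0.001.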

Conclusion Required alerts had response rates almost five times higher than optional alerts. The timing of an alert affects the rate of meaningful responses and possibly the responses themselves. The alert version eliciting the most meaningful responses was also the most interruptive and drew the most deferral responses. Weighing these tradeoffs is important when designing clinical decision support to maximize its success.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects, and it was reviewed by the Duke Institutional Review Board.


Author Contributions

B.A.D., L.J.H., and D.J.C. conceived the study. J.B. performed the technical build within the electronic health record. R.C.M., R.S., and S.I. supported the data extraction and analyses. R.C.M. and B.A.D. drafted the manuscript, with edits by the other authors. All authors approved the manuscript.




Publication History

Received: 06 August 2023

Accepted: 16 January 2024

Accepted Manuscript online:
17 January 2024

Article published online:
13 March 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
