DOI: 10.1055/a-2068-6940
Pseudorandomized Testing of a Discharge Medication Alert to Reduce Free-Text Prescribing
Abstract
Background Pseudorandomized testing can be applied to perform rigorous yet practical evaluations of clinical decision support tools. We apply this methodology to an interruptive alert aimed at reducing free-text prescriptions. Using free-text instead of structured computerized provider order entry elements can cause medication errors and inequity in care by bypassing medication-based clinical decision support tools and hindering automated translation of prescription instructions.
Objective The objective of this study is to evaluate the effectiveness of an interruptive alert at reducing free-text prescriptions via pseudorandomized testing using native electronic health records (EHR) functionality.
Methods Two versions of an EHR alert triggered when a provider attempted to sign a discharge free-text prescription. The visible version displayed an interruptive alert to the user, and a silent version triggered in the background, serving as a control. Providers were assigned to the visible and silent arms based on even/odd EHR provider IDs. The proportion of encounters with a free-text prescription was calculated across the groups. Alert trigger rates were compared in process control charts. Free-text prescriptions were analyzed to identify prescribing patterns.
Results Over the 28-week study period, 143 providers triggered 695 alerts (345 visible and 350 silent). The proportions of encounters with free-text prescriptions were 83% (266/320) and 90% (273/303) in the intervention and control groups, respectively (p = 0.01). For the active alert, median time to action was 31 seconds. Alert trigger rates between groups were similar over time. Ibuprofen, oxycodone, steroid tapers, and oncology-related prescriptions accounted for most free-text prescriptions. A majority of these prescriptions originated from user preference lists.
Conclusion An interruptive alert was associated with a modest reduction in free-text prescriptions. Furthermore, the majority of these prescriptions could have been reproduced using structured order entry fields. Targeting user preference lists shows promise for future intervention.
Keywords
clinical decision support - medication safety - randomized controlled trials - pediatrics - quality improvement
Background and Significance
Pseudorandomized testing offers a rigorous yet practical method for evaluating electronic health record (EHR)-based interventions.[1] [2] This is particularly useful in quality improvement, where there may be multiple simultaneous interventions addressing a variety of key drivers. In such cases, this framework can evaluate the effectiveness of specific interventions in the face of secular trends and other confounders. The ability to evaluate interventions in this way supports clinical decision support governance by allowing practitioners to identify ineffective tools and either improve upon or retire them. This is critical in optimizing EHRs and reducing alert fatigue.[3] Furthermore, such experiments can be used to realize a learning health system and contribute to generalizable knowledge about addressing specific operational issues.[4]
In the following study, we apply pseudorandomized testing methodology to an interruptive alert aimed at reducing free-text signature electronic prescriptions (free-text prescriptions henceforth). Free-text prescriptions are generated when ordering providers do not use discrete computerized provider order entry fields for dose, route, and frequency and instead compose the signature using free-text. This method of prescribing circumvents clinical decision support tools such as dose range checking, weight-based dosing, dispense/duration match checking, and infeasible pill splitting warnings. Absence of discretely documented dose, route, and frequency can also impact the ability of pharmacy software to automate prescription signature translation for patients who prefer a non-English language. As such, the free-text prescribing method carries an increased risk for medication error when used for medications that could be prescribed using discrete fields.
Studies indicate that this form of prescribing is prevalent, with 3 to 10% of discharge medications being prescribed using this method.[5] [6] Current research into free-text prescribing points to perceived efficiency gains and lack of native EHR functionality as key drivers.[7] While some prescriptions such as medical supplies, equipment, and short-acting insulins cannot be easily prescribed using discrete order entry fields, modern EHR systems now incorporate the ability to prescribe combination and taper dosing, which was a main limitation to discrete ordering in the past. At our institution, many free-text prescriptions also originate from user preference lists, which were ported over in free-text format during a transition from a prior EHR. As part of a system-wide effort to reduce medication errors and improve translation rates of discharge prescriptions, we sought to determine if an interruptive alert would effectively reduce this suboptimal method of prescribing. Furthermore, we aimed to test the feasibility of pseudorandomized testing methodology to evaluate EHR-based quality improvement interventions at our institution.
Methods
Setting and Context
This study was performed at a standalone academic children's hospital employing the Epic electronic medical records software (Epic Systems, Verona, Wisconsin, United States) between December 15, 2021, and June 30, 2022. Our goal was to apply pseudorandomized testing methodology to evaluate the efficacy of an interruptive alert to reduce free-text prescribing. Simultaneous to this experiment, there were other ongoing quality improvement interventions with the same aim of reducing free-text prescription rates including updates to system- and user-level settings and updates to the computerized order entry system (e.g., introducing new structured ordering options). However, updating user-level settings in our system, particularly user preferences lists, is costly and invasive because updates must be applied one user at a time and can only be done in the production environment. We hypothesized that an interruptive alert may be a more scalable solution by increasing provider awareness of new EHR functionality and prompting providers to update user-level preference lists on their own.
Intervention
Two versions of an interruptive EHR alert—one “visible” and one “silent”—were created that triggered when a provider attempted to sign a discharge medication using the free-text method ([Fig. 1]). We define a provider as any clinician capable of signing a medication order, which in this instance includes pharmacists, advanced practice providers, and physicians. Supplies, insulins, unlisted medications, and investigational drug prescription records were excluded from triggering the alert as these were deemed appropriate for use of the free-text method.[6] Providers were pseudorandomly assigned to either the visible alert (intervention) or silent alert (control) group based on their EHR user ID. Providers whose user ID ended in an even number received the visible alert, which included educational information on the safety limitations of free-text prescriptions and instructions detailing preferred prescribing methods. Two options are provided to the user: (1) acknowledge and override to submit the original prescription or (2) cancel and return to the prescription ordering activity. Providers whose user ID ended in an odd number and attempted to sign a free-text prescription received no visual feedback. Instead, a silent alert was logged in the background without an interruption to the user's workflow.
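The assignment rule described above is simple enough to express in a few lines. Below is a minimal sketch, assuming user IDs are numeric strings; the function name and sample IDs are illustrative, not taken from the study's actual configuration:

```python
def assign_arm(user_id: str) -> str:
    """Pseudorandom arm assignment by the parity of the final digit of the
    EHR user ID: even -> visible (intervention), odd -> silent (control)."""
    last_digit = int(user_id.strip()[-1])  # assumes the ID ends in a digit
    return "visible" if last_digit % 2 == 0 else "silent"

print(assign_arm("104382"))  # -> visible
print(assign_arm("104377"))  # -> silent
```

Because IDs are assigned sequentially at account creation, the final digit is expected to behave approximately randomly across providers.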
Measures and Analysis
In order to evaluate the effectiveness of the interruptive alert, EHR alert logs for both the visible and silent alerts were analyzed. These logs include the time of the alert, the action taken by the user, and information about the provider and the patient encounter. For the visible alert, the logs also include an estimate of the time the user spent on the alert. EHR reporting software was also used to generate a log of all free-text discharge medication prescriptions written during the observation period matching the same triggering logic with which the alert was programmed.
Summary statistics were calculated for the alert logs in two ways: at the level of the alert and at the level of the provider. This is because the same alert may trigger multiple times for a single provider, so it is useful to analyze at both levels to get a more holistic view of practice patterns and alert activity. For the visible alert, alert dwell time, defined as the time elapsed between when an alert was presented to the user and when it was dismissed, and user action taken were noted as well.[8]
Because the alert triggers during the discharge medication reconciliation process, we wished to study its effect on free-text prescribing at the encounter level. In order to account for multiple providers prescribing a discharge medication during the same patient encounter, a unique identifier for each combination of patient encounter and provider (“Encounter–Provider ID”) was created. The proportion of these provider–patient interactions during which an alert was triggered that ultimately resulted in a free-text prescription was calculated for both the intervention and the control arms. Statistical significance was tested using a two-tailed chi-squared test of independence with a p-value threshold of 0.05.
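The encounter-level comparison reduces to a 2x2 chi-squared test of independence. As a minimal sketch using only the Python standard library and the counts reported in the Results section (266 of 320 intervention encounters and 273 of 303 control encounters with a free-text prescription):

```python
import math

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-squared statistic and two-tailed p-value (df = 1, no
    continuity correction) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-square with 1 degree of freedom
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Intervention: 266 of 320 encounters resulted in a free-text prescription;
# control: 273 of 303 (counts from the Results section).
stat, p = chi2_2x2(266, 320 - 266, 273, 303 - 273)
print(round(p, 2))  # -> 0.01
```

The resulting p-value matches the 0.01 reported in the paper; in practice `scipy.stats.chi2_contingency` would give the same result.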
To study whether the interruptive alert had a persistent effect on prescribing behaviors, we analyzed the number of free-text prescription attempts (i.e., alert triggers) over the observation period between the two groups. Weekly alert triggering rates were calculated from the alert logs for both groups, plotted in c-charts using statistical process control rules from the “QI Macros” software toolkit,[9] and compared. This analysis was used in lieu of explicitly calculating whether users updated their preference lists, which was a desired intermediate outcome of the alert. Unfortunately, due to limitations in our EHR analytics, we did not have the ability to explicitly track user preference list changes over time. Free-text prescription logs were further analyzed to identify the top provider and medication characteristics that were associated with this method of prescribing.
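For reference, the c-chart center line and 3-sigma control limits follow directly from the Poisson assumption (variance equals the mean). A minimal sketch with illustrative weekly counts, not the study's raw data:

```python
import math

def c_chart_limits(counts):
    """Center line and 3-sigma control limits for a c-chart of event counts
    per equal-sized subgroup (here, alerts per week)."""
    cl = sum(counts) / len(counts)
    sigma = math.sqrt(cl)           # Poisson assumption: variance = mean
    ucl = cl + 3 * sigma
    lcl = max(0.0, cl - 3 * sigma)  # counts cannot fall below zero
    return cl, lcl, ucl

# Hypothetical weekly visible-alert counts; the paper reports an average of
# 12.3 visible alerts per week over the 28-week period.
weeks = [14, 11, 13, 10, 15, 12, 11]
cl, lcl, ucl = c_chart_limits(weeks)
```

Special cause variation would be flagged, for example, by a weekly count outside (lcl, ucl) or a sustained run below the center line.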
Results
Over the 28 weeks of observation, 143 providers (75 in the intervention group, 68 in the control group) triggered 695 alerts (345 interruptive alerts and 350 silent alerts) across 623 encounters (320 in the intervention group, 303 in the control group). [Table 1] summarizes provider demographic information between the two groups. [Table 2] summarizes aggregate information for the alerts.
Abbreviation: OB/Gyn, obstetrician-gynecologist.
Provider-Level Analyses
Providers were evenly distributed between the two groups as demonstrated in [Table 1]; however, alerts were not triggered equally among those providers. For example, there were 8 pharmacists in the intervention group and 7 in the control group, yet the 8 intervention-group pharmacists triggered 81 visible alerts while the 7 control-group pharmacists triggered only 17 silent alerts. Similarly, the 9 obstetricians in the intervention group triggered 134 visible alerts, whereas the 9 obstetricians in the control group triggered 224 silent alerts.
Alert-Level Analyses
During the study period, the alert caused 345 workflow interruptions. Of these, 309 (90%) were overridden. The total dwell time caused by these workflow interruptions was 12,409 seconds (3.4 h), with a median dwell time of 31 seconds.
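As a quick arithmetic check, the burden figures can be recomputed from the totals in this section and the 28-week study period:

```python
# Totals reported above: 345 workflow interruptions and 12,409 seconds of
# cumulative alert dwell time over the 28-week study period.
interruptions, dwell_s, weeks = 345, 12_409, 28

hours_total = dwell_s / 3600             # ~3.4 h of cumulative dwell time
per_week = interruptions / weeks         # ~12.3 interruptions per week
minutes_per_week = dwell_s / weeks / 60  # ~7.4 min of dwell time per week
```

These per-week figures are the ones cited later in the Discussion (about 12 interruptions and 7 minutes of dwell time per week).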
Among the 320 provider–patient encounters that triggered an alert in the visible alert (intervention) group, 266 (83%) resulted in one or more free-text prescriptions, whereas among the 303 provider–patient encounters that triggered an alert in the silent alert (control) group, 273 (90%) did ([Fig. 2]). This difference was statistically significant (p = 0.01).
Statistical process control charts depicting alert trigger rates for the two groups demonstrated an average alert firing rate of 12.3 (visible) and 11.9 (silent) alerts per week ([Fig. 3]). Neither process demonstrated special cause variation that resulted in adjustment to the center line.[10]
Analyses of Free-Text Prescriptions
A further look into the free-text prescriptions themselves is shown in [Table 3], which summarizes the top medications prescribed using the free-text method. Of the 180 ibuprofen prescriptions, 165 originated from the user preference list, and all of the prescription instructions could have been created using discrete elements. There were also 102 oxycodone prescriptions written using the free-text method, of which 91 originated from the user preference list and 31 employed a dose range in the prescription signature. Additionally, we observed several oncology-related prescriptions, which utilize more specialized instructions, such as 52 prescriptions for lidocaine–prilocaine topical gel, which is used as an anesthetic for percutaneous vascular access, and 43 tacrolimus prescriptions. Finally, there were 22 prednisone or prednisolone prescriptions written during the study period, of which 20 included a steroid taper in the prescription instructions.
Discussion
Targeting Free-Text Prescriptions and Medication Errors
This work adds to a body of literature around medication safety, particularly the discharge medication reconciliation process. Transitions of care carry a high risk for medication and other medical errors.[11] [12] [13] [14] [15] [16] [17] While the medication reconciliation process has been shown to reduce these errors,[18] there is still room for improvement.[19] Our pseudorandomized experiment demonstrated that an interruptive alert was associated with a slight, but statistically significant, decrease in free-text discharge medication prescriptions. The use of free-text prescriptions is one of many sources for potential medication errors, because free-text prescribing bypasses clinical decision support tools that require use of structured prescribing fields, such as weight-based dose calculators and dosage checkers. This is especially important since in addition to the medication reconciliation process, the use of clinical decision support in computerized order entry has been consistently shown to reduce the rate of medication errors.[20] [21]
We observed that 90% of alerts in the intervention group were overridden. This matches prior experiences with medication safety alerts, where override rates were upwards of 90%.[22] [23] [24] Furthermore, the interruptive alert came with the cost of additional workflow interruptions and provider time (in the intervention group, about 12 workflow interruptions and 7 min of alert dwell time per week). These are important considerations given that workflow interruptions have been shown to spawn their own errors and failures to complete tasks.[25] [26] [27] Additionally, the increased alert burden also contributes to alert fatigue and other external costs.[3] [28]
Though the visible alert was associated with a reduction in signed free-text prescriptions compared with the control, the number of free-text prescription attempts (as represented by alert triggering rates in statistical process control charts) did not seem to differ between the groups. While weekly alert triggering rates did show a decreasing trend toward the end of the observation period for both groups, this did not result in a change in the center line. This pattern could not be temporally tied to any specific interventions made at our institution and may reflect random variation related to the fact that many alerts were generated by a small group of providers. Overall, these findings suggest that the interruptive alert on its own was not associated with the desired persistent impact on prescribing behavior (i.e., reducing free-text sig prescription attempts), since the visible alert triggering rates did not show a significant decrease compared with the silent alert.
Finally, this study adds to a nascent body of literature around free-text prescribing.[5] [6] We found that the majority of medications prescribed using this method were common medications that could easily be prescribed using the structured fields in the EHR and likely are perpetuated by poorly constructed user preference lists. Even in the case of dose ranges or medication tapers, there is native functionality to support these functions in the EHR without resorting to free-text prescribing. On the other hand, the use of free-text in prescribing more specialty medications, such as those relating to short-acting insulin, was expected and represents an area where structured computerized order entry fields fall short. Unfortunately, the use of a system-wide alert was not associated with the change in prescribing behavior to the extent that was desired. This suggests that addressing user-level settings such as user preference lists may be a more appropriate method for reducing this prescription behavior at our institution than an interruptive alert. Such an approach would have its own limitations, as it is a time-consuming and complex process.
The Use of Pseudorandomized Testing in Quality Improvement
This study illustrates the use of pseudorandomized experimentation in quality improvement and care delivery, a growing area of focus.[1] [2] [4] Prior examples have been used to compare two versions of clinical decision support tools, for example, interruptive alerts with different wording or interface choices. Along these lines, one can imagine that this process can also be used to iteratively improve clinical decision support tools.
In this study, pseudorandomized testing is used to compare an experimental alert against the control of no alert. We see great potential for this methodology to support clinical decision support governance.[29] [30] Such trials can be used in future quality improvement projects to ensure that ineffective alerts are not perpetuated in the EHR by testing proposed interventions against a control,[31] particularly when multiple interventions are being trialed simultaneously.
Limitations of the EHR software make it difficult to conduct a truly randomized study without customization or use of a third-party application. In quality improvement or more operationally focused projects, it may be infeasible to dedicate the resources required to set up such an evaluation. However, performing pseudorandomized experiments using native EHR functionality is a practical way to add a layer of rigor to alert evaluation and is something we intend to incorporate into our clinical decision support governance. Following this experience, we recently began a second pseudorandomized trial of a medication alert around nephrotoxic medications, stratified by the even/odd status of the patient medical record number. While at the moment these experiments are being done ad hoc, we are in the process of developing a streamlined workflow to incorporate this method into our interruptive alert intake when feasible.
There are important lessons learned with regard to the pseudorandomization process from this study. Recent studies have demonstrated stratification by clinic site and at the patient level.[2] [31] In the case of a provider-facing alert, such as ours, patient-level stratification may lead to contamination between the intervention and control groups if providers care for patients randomized to both arms. This is alleviated by stratifying on provider ID. At our institution, provider IDs are assigned sequentially based on account creation time and are expected to closely approximate a random process. However, there are other challenges with randomizing at the provider level. For example, multiple providers may care for a single patient, which can also lead to contamination if patient- or encounter-level outcomes are being evaluated. The combination of the target of the clinical decision support tool in question and the process measurements being evaluated should dictate the method of randomization.
Limitations
An important limitation in this study is that while types of providers were evenly distributed between the groups (e.g., pharmacists, physicians, etc.), there were still niche workflows that were indivisible. For example, many oncology prescriptions fell under one provider in the intervention group. These represented a significant proportion of triggers for free-text prescriptions and thus cloud the ability to cleanly compare the two groups. Additionally, another limitation of this study design is that providers from both the intervention and control group may take care of the same patient. This causes contamination when measuring patient- or encounter-level metrics. In order to overcome this, we studied unique provider–patient encounters. The single-site nature of this study to some extent limits the generalizability of our findings. Along those lines, the specific methods employed here may not be directly implementable in other EHR software systems without customization.
The pseudorandomization method itself has inherent limitations that are important to note. For example, in this experiment we stratified users based on the even/odd status of their user ID. However, as we look to incorporate this methodology into future experiments, this could result in groups of users that persist across multiple studies. As such, future experiments that stratify at the level of the provider may need to employ different stratification schemes to overcome this clustering, for example by stratifying providers whose user ID ends in 0–4 versus 5–9, or based on a digit other than the final one. Additionally, stratification at other levels when appropriate (e.g., at the level of the patient or clinical site) will also help overcome this effect. We are hopeful that in the future this limitation will be alleviated by additional functionality in EHR software.
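The alternative stratification schemes discussed above can be sketched as simple functions. The digit positions, arm labels, and sample ID below are hypothetical illustrations, not a documented institutional standard:

```python
def arm_by_last_digit_parity(user_id: str) -> str:
    # Scheme used in this study: parity of the final digit
    return "A" if int(user_id[-1]) % 2 == 0 else "B"

def arm_by_last_digit_range(user_id: str) -> str:
    # Alternative: 0-4 versus 5-9 split on the final digit
    return "A" if int(user_id[-1]) <= 4 else "B"

def arm_by_digit(user_id: str, position: int) -> str:
    # Alternative: parity of a digit other than the final one
    # (position counted from the right; position 0 is the final digit)
    return "A" if int(user_id[-1 - position]) % 2 == 0 else "B"

# The same provider can land in different arms under different schemes,
# which helps avoid cohorts that persist across successive studies.
uid = "104352"
print(arm_by_last_digit_parity(uid), arm_by_last_digit_range(uid), arm_by_digit(uid, 1))
```

Rotating among such schemes across experiments reduces the risk that the same fixed provider cohort is repeatedly assigned to the intervention arm.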
Finally, our ideal future state is to incorporate this methodology into our clinical decision support governance, perhaps at the stage of alert request intake. However, as of now, these experiments are being performed ad hoc and require a case-by-case consideration of the pseudorandomization process, duration of the experiment, and analysis plan, which limits its scalability. We expect to mature and standardize this process as we gain experience through these early iterations.
Conclusion
Pseudorandomized testing can be used to rigorously yet practically evaluate clinical decision support tools and EHR-based quality improvement interventions, which has implications for EHR maintenance. Furthermore, randomizing by even and odd provider IDs is a practical method for evaluating provider-facing interventions such as an interruptive alert. In the case of free-text prescriptions, an interruptive alert was associated with a modest reduction in this method of prescribing, but at the cost of increased alert burden and interruptions to workflow. The majority of free-text prescriptions originated from user preference lists and could have been reproduced using structured EHR elements.
Clinical Relevance Statement
This study evaluates the effectiveness of a clinical decision support tool to reduce medication errors. Generalizable learnings from the employed pseudorandomized testing methodology can also be used to directly support EHR maintenance and governance.
Multiple-Choice Questions
1. The use of free-text in the place of structured computerized order entry fields can lead to medication prescription errors through which of the following mechanisms?
- a. Ordering the wrong kind of medication
- b. Bypassing clinical decision support tools such as dosage checking
- c. Prescribing for the wrong patient
- d. Disrupting electronic prescription routing
Correct Answer: The correct answer is option b. Many clinical decision support tools around medication safety rely on structured data fields in computerized order entry. Examples of this include dosage checking and weight-based dosage calculators. Using free-text instead of the structured order entry fields bypasses these decision support tools and increases the risk for medication errors.
2. Which of the following is an unintended negative consequence of using an interruptive alert?
- a. Failure to return to original task
- b. Providing just-in-time teaching to end users
- c. Preventing a high-risk error
- d. Forcing the user to acknowledge the alert
Correct Answer: The correct answer is option a. Studies have shown that one negative consequence of workflow interruptions, such as those caused by an interruptive alert, is failure of the user to return to their original task. Other negative consequences include user frustration from increased alert burden and alert fatigue. Interruptive alerts are most often used to prevent high-risk medical errors, such as prescribing a medication to which a patient has a known allergy. Interruptive alerts, and other types of clinical decision support, can be used to provide just-in-time education to users.
Conflict of Interest
None declared.
Acknowledgment
None
Protection of Human and Animal Subjects
The preceding work was performed as part of a quality improvement effort at our institution and does not qualify as human subjects research.
References
- 1 Horwitz LI, Kuznetsova M, Jones SA. Creating a learning health system through rapid-cycle, randomized testing. N Engl J Med 2019; 381 (12) 1175-1179
- 2 Austrian J, Mendoza F, Szerencsy A. et al. Applying A/B testing to clinical decision support: rapid randomized controlled trials. J Med Internet Res 2021; 23 (04) e16651
- 3 Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R. with the HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17 (01) 36
- 4 Finkelstein A. A strategy for improving U.S. health care delivery - conducting more randomized, controlled trials. N Engl J Med 2020; 382 (16) 1485-1488
- 5 Zhou L, Mahoney LM, Shakurova A. et al. How many medication orders are entered through free-text in EHRs? A study on hypoglycemic agents. AMIA Annu Symp Proc 2012; 2012: 1079-1088
- 6 Morse KE, Chadwick WA, Paul W, Haaland W, Pageler NM, Tarrago R. Quantifying discharge medication reconciliation errors at 2 pediatric hospitals. Pediatr Qual Saf 2021; 6 (04) e436
- 7 Kandaswamy S, Pruitt Z, Kazi S. et al. Clinician perceptions on the use of free-text communication orders. Appl Clin Inform 2021; 12 (03) 484-494
- 8 McDaniel RB, Burlison JD, Baker DK. et al. Alert dwell time: introduction of a measure to evaluate interruptive clinical decision support alerts. J Am Med Inform Assoc 2016; 23 (e1): e138-e141
- 9 Arthur J. Control Chart White Paper [Internet]. 2021 Accessed September 1, 2022 at: https://www.qimacros.com/pdf/control-chart-whitepaper.pdf
- 10 Provost LP, Murray SK. The Health Care Data Guide: Learning from Data for Improvement. Hoboken, New Jersey: John Wiley & Sons; 2022: 656
- 11 Kwan JL, Lo L, Sampson M, Shojania KG. Medication reconciliation during transitions of care as a patient safety strategy: a systematic review. Ann Intern Med 2013; 158 (5 Pt 2): 397-403
- 12 Coleman EA, Berenson RA. Lost in transition: challenges and opportunities for improving the quality of transitional care. Ann Intern Med 2004; 141 (07) 533-536
- 13 Cornish PL, Knowles SR, Marchesano R. et al. Unintended medication discrepancies at the time of hospital admission. Arch Intern Med 2005; 165 (04) 424-429
- 14 Coleman EA, Smith JD, Raha D, Min SJ. Posthospital medication discrepancies: prevalence and contributing factors. Arch Intern Med 2005; 165 (16) 1842-1847
- 15 Bell CM, Brener SS, Gunraj N. et al. Association of ICU or hospital admission with unintentional discontinuation of medications for chronic diseases. JAMA 2011; 306 (08) 840-847
- 16 Huynh C, Wong ICK, Tomlin S. et al. Medication discrepancies at transitions in pediatrics: a review of the literature. Paediatr Drugs 2013; 15 (03) 203-215
- 17 Gattari TB, Krieger LN, Hu HM, Mychaliska KP. Medication discrepancies at pediatric hospital discharge. Hosp Pediatr 2015; 5 (08) 439-445
- 18 Hron JD, Manzi S, Dionne R. et al. Electronic medication reconciliation and medication errors. Int J Qual Health Care 2015; 27 (04) 314-319
- 19 Stockton KR, Wickham ME, Lai S. et al. Incidence of clinically relevant medication errors in the era of electronically prepopulated medication reconciliation forms: a retrospective chart review. CMAJ Open 2017; 5 (02) E345-E353
- 20 Rinke ML, Bundy DG, Velasquez CA. et al. Interventions to reduce pediatric medication errors: a systematic review. Pediatrics 2014; 134 (02) 338-360
- 21 Marien S, Krug B, Spinewine A. Electronic tools to support medication reconciliation: a systematic review. J Am Med Inform Assoc 2017; 24 (01) 227-240
- 22 van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13 (02) 138-147
- 23 Nanji KC, Seger DL, Slight SP. et al. Medication-related clinical decision support alert overrides in inpatients. J Am Med Inform Assoc 2018; 25 (05) 476-481
- 24 Tolley CL, Slight SP, Husband AK, Watson N, Bates DW. Improving medication-related clinical decision support. Am J Health Syst Pharm 2018; 75 (04) 239-246
- 25 Westbrook JI, Coiera E, Dunsmuir WTM. et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010; 19 (04) 284-289
- 26 Westbrook JI, Woods A, Rob MI, Dunsmuir WTM, Day RO. Association of interruptions with an increased risk and severity of medication administration errors. Arch Intern Med 2010; 170 (08) 683-690
- 27 Bonafide CP, Miller JM, Localio AR. et al. Association between mobile telephone interruptions and medication administration errors in a pediatric intensive care unit. JAMA Pediatr 2020; 174 (02) 162-169
- 28 Orenstein EW, Kandaswamy S, Muthu N. et al. Alert burden in pediatric hospitals: a cross-sectional analysis of six academic pediatric health systems using novel metrics. J Am Med Inform Assoc 2021; 28 (12) 2654-2660
- 29 Chaparro JD, Hussain C, Lee JA, Hehmeyer J, Nguyen M, Hoffman J. Reducing interruptive alert burden using quality improvement methodology. Appl Clin Inform 2020; 11 (01) 46-58
- 30 Chaparro JD, Beus JM, Dziorny AC. et al. Clinical decision support stewardship: best practices and techniques to monitor and improve interruptive alerts. Appl Clin Inform 2022; 13 (03) 560-568
- 31 Downing NL, Rolnick J, Poole SF. et al. Electronic health record-based clinical decision support alert for severe sepsis: a randomised evaluation. BMJ Qual Saf 2019; 28 (09) 762-768
Publication History
Received: 11 November 2022
Accepted: 03 April 2023
Accepted Manuscript online:
04 April 2023
Article published online:
14 June 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
-
References
- 1 Horwitz LI, Kuznetsova M, Jones SA. Creating a learning health system through rapid-cycle, randomized testing. N Engl J Med 2019; 381 (12) 1175-1179
- 2 Austrian J, Mendoza F, Szerencsy A. et al. Applying A/B testing to clinical decision support: rapid randomized controlled trials. J Med Internet Res 2021; 23 (04) e16651
- 3 Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R. with the HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17 (01) 36
- 4 Finkelstein A. A strategy for improving U.S. health care delivery - conducting more randomized, controlled trials. N Engl J Med 2020; 382 (16) 1485-1488
- 5 Zhou L, Mahoney LM, Shakurova A. et al. How many medication orders are entered through free-text in EHRs? A study on hypoglycemic agents. AMIA Annu Symp Proc 2012; 2012: 1079-1088
- 6 Morse KE, Chadwick WA, Paul W, Haaland W, Pageler NM, Tarrago R. Quantifying discharge medication reconciliation errors at 2 pediatric hospitals. Pediatr Qual Saf 2021; 6 (04) e436
- 7 Kandaswamy S, Pruitt Z, Kazi S. et al. Clinician perceptions on the use of free-text communication orders. Appl Clin Inform 2021; 12 (03) 484-494
- 8 McDaniel RB, Burlison JD, Baker DK. et al. Alert dwell time: introduction of a measure to evaluate interruptive clinical decision support alerts. J Am Med Inform Assoc 2016; 23 (e1): e138-e141
- 9 Arthur J. Control Chart White Paper [Internet]. 2021. Accessed September 1, 2022 at: https://www.qimacros.com/pdf/control-chart-whitepaper.pdf
- 10 Provost LP, Murray SK. The Health Care Data Guide: Learning from Data for Improvement. Hoboken, New Jersey: John Wiley & Sons; 2022: 656
- 11 Kwan JL, Lo L, Sampson M, Shojania KG. Medication reconciliation during transitions of care as a patient safety strategy: a systematic review. Ann Intern Med 2013; 158 (5 Pt 2): 397-403
- 12 Coleman EA, Berenson RA. Lost in transition: challenges and opportunities for improving the quality of transitional care. Ann Intern Med 2004; 141 (07) 533-536
- 13 Cornish PL, Knowles SR, Marchesano R. et al. Unintended medication discrepancies at the time of hospital admission. Arch Intern Med 2005; 165 (04) 424-429
- 14 Coleman EA, Smith JD, Raha D, Min SJ. Posthospital medication discrepancies: prevalence and contributing factors. Arch Intern Med 2005; 165 (16) 1842-1847
- 15 Bell CM, Brener SS, Gunraj N. et al. Association of ICU or hospital admission with unintentional discontinuation of medications for chronic diseases. JAMA 2011; 306 (08) 840-847
- 16 Huynh C, Wong ICK, Tomlin S. et al. Medication discrepancies at transitions in pediatrics: a review of the literature. Paediatr Drugs 2013; 15 (03) 203-215
- 17 Gattari TB, Krieger LN, Hu HM, Mychaliska KP. Medication discrepancies at pediatric hospital discharge. Hosp Pediatr 2015; 5 (08) 439-445
- 18 Hron JD, Manzi S, Dionne R. et al. Electronic medication reconciliation and medication errors. Int J Qual Health Care 2015; 27 (04) 314-319
- 19 Stockton KR, Wickham ME, Lai S. et al. Incidence of clinically relevant medication errors in the era of electronically prepopulated medication reconciliation forms: a retrospective chart review. CMAJ Open 2017; 5 (02) E345-E353
- 20 Rinke ML, Bundy DG, Velasquez CA. et al. Interventions to reduce pediatric medication errors: a systematic review. Pediatrics 2014; 134 (02) 338-360
- 21 Marien S, Krug B, Spinewine A. Electronic tools to support medication reconciliation: a systematic review. J Am Med Inform Assoc 2017; 24 (01) 227-240
- 22 van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13 (02) 138-147
- 23 Nanji KC, Seger DL, Slight SP. et al. Medication-related clinical decision support alert overrides in inpatients. J Am Med Inform Assoc 2018; 25 (05) 476-481
- 24 Tolley CL, Slight SP, Husband AK, Watson N, Bates DW. Improving medication-related clinical decision support. Am J Health Syst Pharm 2018; 75 (04) 239-246
- 25 Westbrook JI, Coiera E, Dunsmuir WTM. et al. The impact of interruptions on clinical task completion. Qual Saf Health Care 2010; 19 (04) 284-289
- 26 Westbrook JI, Woods A, Rob MI, Dunsmuir WTM, Day RO. Association of interruptions with an increased risk and severity of medication administration errors. Arch Intern Med 2010; 170 (08) 683-690
- 27 Bonafide CP, Miller JM, Localio AR. et al. Association between mobile telephone interruptions and medication administration errors in a pediatric intensive care unit. JAMA Pediatr 2020; 174 (02) 162-169
- 28 Orenstein EW, Kandaswamy S, Muthu N. et al. Alert burden in pediatric hospitals: a cross-sectional analysis of six academic pediatric health systems using novel metrics. J Am Med Inform Assoc 2021; 28 (12) 2654-2660
- 29 Chaparro JD, Hussain C, Lee JA, Hehmeyer J, Nguyen M, Hoffman J. Reducing interruptive alert burden using quality improvement methodology. Appl Clin Inform 2020; 11 (01) 46-58
- 30 Chaparro JD, Beus JM, Dziorny AC. et al. Clinical decision support stewardship: best practices and techniques to monitor and improve interruptive alerts. Appl Clin Inform 2022; 13 (03) 560-568
- 31 Downing NL, Rolnick J, Poole SF. et al. Electronic health record-based clinical decision support alert for severe sepsis: a randomised evaluation. BMJ Qual Saf 2019; 28 (09) 762-768