Appl Clin Inform 2022; 13(02): 468-485
DOI: 10.1055/s-0042-1748146
Review Article

Modulators Influencing Medication Alert Acceptance: An Explorative Review

Janina A. Bittmann
1   Cooperation Unit Clinical Pharmacy, Heidelberg University, Heidelberg, Germany
2   Department of Clinical Pharmacology and Pharmacoepidemiology, Heidelberg University Hospital, Heidelberg, Germany
,
Walter E. Haefeli
1   Cooperation Unit Clinical Pharmacy, Heidelberg University, Heidelberg, Germany
2   Department of Clinical Pharmacology and Pharmacoepidemiology, Heidelberg University Hospital, Heidelberg, Germany
,
Hanna M. Seidling
1   Cooperation Unit Clinical Pharmacy, Heidelberg University, Heidelberg, Germany
2   Department of Clinical Pharmacology and Pharmacoepidemiology, Heidelberg University Hospital, Heidelberg, Germany
Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
 

Abstract

Objectives Clinical decision support systems (CDSSs) use alerts to enhance medication safety and reduce medication error rates. A major challenge of medication alerts is their low acceptance rate, which limits their potential benefit. A structured overview of the modulators influencing alert acceptance is lacking. We therefore aimed to review and compile qualitative and quantitative modulators of alert acceptance and organize them in a comprehensive model.

Methods In accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guideline, a literature search in PubMed was started in February 2018 and continued until October 2021. From all included articles, qualitative and quantitative parameters and their impact on alert acceptance were extracted. Related parameters were grouped into factors, allocated to superordinate determinants, and finally assigned to five categories already known to influence alert acceptance.

Results Out of 539 articles, 60 were included. A total of 391 single parameters were extracted (e.g., patients' comorbidity), grouped into 75 factors (e.g., comorbidity) and 25 determinants (e.g., complexity), and assigned to the predefined five categories, i.e., CDSS, care provider, patient, setting, and involved drug. More than half of all factors were assessed only qualitatively (n = 21) or were quantitatively inconclusive (n = 19). A further 33 quantitative factors clearly influenced alert acceptance (positive correlation: e.g., alert type, patients' comorbidity; negative correlation: e.g., number of alerts per care provider, moment of alert display in the workflow). Two factors (alert frequency, laboratory value) showed contradictory effects, meaning that acceptance was significantly influenced both positively and negatively, depending on the study. Interventional studies were performed for only 12 factors, while all other factors were evaluated descriptively.

Conclusion This review compiles modulators of alert acceptance, distinguishes those studied quantitatively from those studied qualitatively, and indicates their effect magnitude whenever possible. Additionally, it describes how further research should be designed to comprehensively quantify the effect of alert modulators.



Background and Significance

Medication alerts issued by clinical decision support systems (CDSSs) to health care professionals can reduce medication error rates and enhance medication safety.[1] [2] [3] [4] There are two major prerequisites for the success of a CDSS. One is the appropriateness of the alert,[5] i.e., the adequate identification of potentially harmful situations. The second is the subsequent acceptance of the alert by the recipient.[2] [6] [7] [8] [9] [10] When accepting a medication alert, the health care professional modifies or cancels the initial order in such a way that the alert no longer applies. In contrast, overriding an alert is defined as continuing with the unchanged order despite the alert.[1]

It has been shown that in routine clinical practice 49 to 100 % of medication alerts are overridden.[11] [12] [13] [14] Particularly high override rates have been found for drug–drug interaction (DDI) alerts (two studies identified override rates of 88 and 89 %, respectively) and drug–allergy interaction (DAI) alerts (two studies identified override rates of 69 and 91 %, respectively).[15] [16] Overriding an alert frequently goes hand in hand with low quality of the presented warnings. Hence, it has often been argued that an increase in specificity might tackle both deficits of CDSSs, namely their low acceptance and their low impact on patients' medication therapy.[8] [10] [11] [12] [14] [17] [18]

Various generic recommendations and guidance documents on CDSS implementation and maintenance already exist, such as Campbell's framework of “The Five Rights of Clinical Decision Support.”[19] [20] [21] [22] [23] [24] On closer inspection, the reasons for accepting or overriding medication alerts appear diverse and complex. However, an overview of the evidence on how medication alert acceptance might be increased overall is still lacking. While numerous studies anecdotally discuss general strategies to enhance alert acceptance, there is only scattered evidence about which modulators dependably have a sizeable impact on the user's interaction with an alert.



Objectives

The aim of this review is to compile an overview of dependable quantitative and qualitative modulators potentially influencing medication alert acceptance. Additional aims are to relate these modulators to each other by organizing them into a comprehensive model and to elaborate their quantitative impact on alert acceptance whenever it was actually measured.



Methods

We searched the literature for modulators of medication alert acceptance and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline.[25]

Search Strategy

PubMed was searched using the following Medical Subject Headings in combination with associated free-text fields: (“Decision Support Systems, Clinical” [MeSH Terms] OR “Decision Support Systems, Management” [MeSH Terms] OR “Medical Order Entry Systems” [MeSH Terms] OR “alert*” [Text Word] OR “trigger*” [Text Word] OR “clinical decision support system” [Text Word]) AND (“alert fatigue, health personnel” [MeSH Terms] OR “alert fatigue” [Text Word] OR “alert acceptance” [Text Word] OR “alert rate” [Text Word] OR “health care professional” [Text Word]). This search strategy was pursued from February 2018 until October 2021, inclusively.
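
For reproducibility, the query above can also be run programmatically against PubMed. The following is a minimal sketch, not part of the original methodology, assuming Biopython's Entrez wrapper around the NCBI E-utilities; the contact e-mail is a placeholder and the restriction by publication date merely approximates the review's search window.

```python
from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

# Search string as reported in the Methods section.
QUERY = (
    '("Decision Support Systems, Clinical"[MeSH Terms] OR '
    '"Decision Support Systems, Management"[MeSH Terms] OR '
    '"Medical Order Entry Systems"[MeSH Terms] OR '
    '"alert*"[Text Word] OR "trigger*"[Text Word] OR '
    '"clinical decision support system"[Text Word]) AND '
    '("alert fatigue, health personnel"[MeSH Terms] OR '
    '"alert fatigue"[Text Word] OR "alert acceptance"[Text Word] OR '
    '"alert rate"[Text Word] OR "health care professional"[Text Word])'
)

# Approximate the search window (February 2018 to October 2021) by publication date.
handle = Entrez.esearch(db="pubmed", term=QUERY, retmax=1000,
                        datetype="pdat", mindate="2018/02", maxdate="2021/10")
result = Entrez.read(handle)
print(result["Count"], "records; first PMIDs:", result["IdList"][:5])
```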



Eligibility Criteria

Available English- or German-language articles were considered without restrictions on date, publication status, or study design. Studies were included if they evaluated, described, or modified alerts displayed in electronic prescribing systems relating to risks in drug treatment across all steps of the medication process.

Excluded were studies that (1) focused on the impact of eHealth technologies,[26] [27] (2) did not consider the medication process in general (i.e., prescribing, dispensing, administration, education, and monitoring[28]) but addressed, for example, the detection of septic patients,[29] (3) discussed alerting from external systems (such as monitoring of vital signs [e.g., oxygen saturation[30]] or smart pump handling), or (4) did not focus on the assessment of alert acceptance but instead described, for example, the design of contextualized DDI algorithms.[31]



Study Selection

According to the inclusion and exclusion criteria, two reviewers (J.A.B., H.M.S.) independently screened all resulting titles and subsequently the abstracts and full texts of included articles. If no abstract was accessible, the full text was read immediately after a positive title screening. Discrepancies regarding inclusion or exclusion were discussed until consensus was reached. Following the principles of living systematic reviews, we included articles retrieved by the ongoing search strategy until October 2021, inclusive. Pertinent articles were grouped into articles assessing alert acceptance in a quantitative way (i.e., descriptive or interventional assessment of alerts) and articles exclusively reporting qualitative information about alert acceptance (e.g., focus group discussions or papers evaluating mail surveys that theoretically highlighted factors that might influence or improve alert acceptance) ([Fig. 1]).

Fig. 1 PRISMA flowchart describing the results of the literature search conducted to identify articles discussing modulators influencing alert acceptance (according to Moher and coworkers[25]).


Bias Assessment

Applying a previously published methodology for bias assessment in the context of CDSSs by Olakotan and coworkers,[32] the risk of bias, i.e., a critical appraisal, was independently assessed by two authors (J.A.B., H.M.S.) for each included article assessing acceptance in a quantitative way. Discrepancies were discussed until consensus was reached, and articles were judged as “high-quality” studies when more than two-thirds of the questions were fulfilled, as “acceptable” studies when between one- and two-thirds of the questions were affirmed, or as “low-quality” studies when up to one-third of the questions were fulfilled.
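
The quality thresholds described above amount to a simple decision rule over the share of affirmed appraisal questions. The function below is an illustrative sketch only (its name and the handling of exact boundary values are our own assumptions); it is not part of the checklist by Olakotan and coworkers.[32]

```python
def grade_study_quality(questions_affirmed: int, questions_total: int) -> str:
    """Map the share of affirmed appraisal questions to a quality grade."""
    if questions_total <= 0:
        raise ValueError("questions_total must be positive")
    share = questions_affirmed / questions_total
    if share > 2 / 3:        # more than two-thirds fulfilled
        return "high-quality"
    if share > 1 / 3:        # between one- and two-thirds affirmed
        return "acceptable"
    return "low-quality"     # up to one-third fulfilled

# Example: 8 of 11 appraisal questions affirmed -> "high-quality"
print(grade_study_quality(8, 11))
```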



Data Extraction and Analysis

From all full-text-screened articles we extracted bibliographic data, purpose, design (e.g., interventional qualitative or quantitative study vs. systematic review, retrospective, prospective, or observational study), the study setting (e.g., in-patient or primary care), methods, and variables measuring alert acceptance as well as the parameters themselves and their impact on alert acceptance when quantitatively assessed. For the studies describing quantitative modulators of alert acceptance, we also listed alert technique (e.g., interruptive vs. noninterruptive, active vs. passive), considered alert type (e.g., DDI alerts, DAI alerts), CDSS software characteristics, and the number of alerts measured as well as the alert acceptance rate, if mentioned. If univariate and multivariate analyses were performed, only variables assessed by multivariate analysis were included.
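
To make the extraction sheet concrete, one possible record structure for the items listed above is sketched below; the field names are our own and do not reproduce the authors' actual extraction form.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One extracted article (field names are illustrative only)."""
    reference: str                      # bibliographic data
    purpose: str
    design: str                         # e.g., "retrospective", "pre-post intervention"
    setting: str                        # e.g., "in-patient", "primary care"
    acceptance_measure: str             # variable used to measure alert acceptance
    parameters: List[str] = field(default_factory=list)  # parameters and their impact
    # Additional fields recorded only for quantitative studies:
    alert_technique: Optional[str] = None    # e.g., "interruptive", "active"
    alert_types: Optional[List[str]] = None  # e.g., ["DDI", "DAI"]
    cdss_software: Optional[str] = None
    n_alerts: Optional[int] = None
    acceptance_rate: Optional[float] = None  # in percent, if reported
```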



Categorization of Modulators of Alert Acceptance

Inspired by Campbell's framework of “The Five Rights of Clinical Decision Support”[19] and based on additional previously reported general topics influencing alert acceptance,[24] [33] [34] [35] [36] [37] [38] we initially assumed five main topics influencing alert acceptance. We used these selected topics as a starting point for an inductive composition of a self-developed theoretical model of modulators of alert acceptance which we enriched by quantitatively and qualitatively assessed modulators. These five main topics consisted of (1) the electronic system firing the alerts (summarizing Campbell's three rights “right information” in the “right intervention format” through the “right channel”),[19] [24] [33] [34] [37] [38] (2) the care provider (i.e., addressee of the alert, referring to Campbell's “right person”),[19] [34] [35] [38] (3) the patient whose prescription triggers the alert,[33] (4) the setting where the alert is fired (based among others on the “right channel” and the “right time in workflow”),[19] [33] [36] [37] [38] and (5) the concerned drug.[33] To this end, we allocated the modulators of alert acceptance identified by the literature search to these general topics. We introduced a content-based comprehensive structure by combining similar modulators extracted from the included articles (i.e., “parameters”) into “factors.” Related factors were then grouped into superordinate “determinants,” and subsequently matched to the predefined five “categories.” The allocation was conducted by two authors (J.A.B., H.M.S.); differences were discussed until congruence was reached.
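
As an illustration of the resulting four-level hierarchy (parameter, factor, determinant, category), a minimal nested mapping is sketched below, populated only with the “comedication” example given in the Results section; the dictionary layout itself is our own construction.

```python
# category -> determinant -> factor -> list of extracted parameters
model = {
    "patient": {                          # one of the five predefined categories
        "complexity": {                   # determinant
            "comedication": [             # factor
                "patients' comedication (study [39])",
                "patients' comedication (study [48])",
                "patients' comedication (study [54])",
                "patients' comedication (study [57])",
            ],
        },
    },
}

# Counting parameters per category, as used to order Fig. 2:
n_parameters = sum(len(params)
                   for determinant in model["patient"].values()
                   for params in determinant.values())
print(n_parameters)  # -> 4
```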

To display which areas are well researched and have an established relationship between a parameter and alert acceptance, each extracted parameter was classified either as a “qualitative parameter,” i.e., a parameter that was not quantitatively evaluated; as a “quantitative, inconclusive parameter,” i.e., a quantitatively evaluated parameter without a statistically significant impact on alert acceptance (including beneficial or detrimental trends); or as a “quantitative parameter” with a documented quantitative impact on alert acceptance. Grading of significance followed the significance levels reported by the authors; when no significance level was mentioned, significance was assumed for p < 0.05. Subsequently, this assessment was repeated at the “factor” level, i.e., we evaluated whether the same or similar parameters of alert acceptance yielded consistent results in different studies. Hence, each factor was classified as follows: a “qualitative factor” when only qualitative parameters were allocated to this factor, an “inconclusive factor” when quantitative, inconclusive parameters were allocated, a “quantitative, inconsistent factor” when a factor yielded significant but ambiguous results in single parameters, or a “quantitative factor” when all single parameters showed an increasing or decreasing significant effect on alert acceptance.
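
The factor-level classification described above is a deterministic rule over the classifications of the underlying parameters. A possible encoding is sketched below; the labels follow the text, whereas the function itself and the handling of mixed significant/inconclusive cases (the factor follows its significant parameters, as suggested by the legend of Fig. 2) are our own assumptions.

```python
def classify_factor(parameter_results):
    """Aggregate parameter-level classifications into a factor-level label.

    Each entry of `parameter_results` is one of:
      "qualitative"  - not quantitatively evaluated
      "inconclusive" - quantitatively evaluated, no statistically significant effect
      "increase"     - significant positive effect on alert acceptance
      "decrease"     - significant negative effect on alert acceptance
    """
    significant = {r for r in parameter_results if r in ("increase", "decrease")}
    if significant == {"increase", "decrease"}:
        return "quantitative, inconsistent factor"   # ambiguous significant results
    if significant:
        return "quantitative factor"                 # consistent increase or decrease
    if "inconclusive" in parameter_results:
        return "inconclusive factor"
    return "qualitative factor"

# Example: "alert frequency" combined one significant increase and one
# significant decrease across studies -> inconsistent.
print(classify_factor(["increase", "decrease"]))
```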



Results

Literature Search

The search strategy revealed 539 articles. After the removal of 110 duplicates and the exclusion of 276 articles following title and abstract screening, a total of 153 full texts were read. In compliance with the inclusion and exclusion criteria, 31 articles reporting quantitative and qualitative parameters[13] [33] [39] [40] [41] [42] [43] [44] [45] [46] [47] [48] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] [60] [61] [62] [63] [64] [65] [66] [67] and 29 articles reporting exclusively qualitative parameters of alert acceptance[1] [6] [12] [17] [32] [68] [69] [70] [71] [72] [73] [74] [75] [76] [77] [78] [79] [80] [81] [82] [83] [84] [85] [86] [87] [88] [89] [90] [91] were included in the analysis.

A total of 29 of the 31 included articles assessing acceptance in a quantitative way were of “high-quality” (94 %) and two of “acceptable” quality (6 %) when considering each study individually, and thus, they were all included in the final analysis. However, it is important to add that despite the high internal validity, the methodology between studies was often not comparable. Moreover, studies often focused on single institutions (20 articles assessed one institution, compared with 11 studies merging data from several institutions) or only singular alert types (18 articles assessed one alert type and 13 articles assessed more than one alert type, with only one article assessing nine different alert types). Hence, each study stands on its own and illuminates the topic with a very specific focus, making it difficult to derive an overarching picture.



Modulators of Alert Acceptance

From all included articles, 391 single parameters were extracted and grouped into 75 factors. These factors were then grouped into 25 superordinate determinants, and all determinants could be assigned to the initial five categories, confirming that the predefined model was comprehensive. For example, the parameters considering “patients' comedication” as a modulator of alert acceptance were extracted from four different studies[39] [48] [54] [57] and allocated to the factor “comedication,” which was assigned to the determinant “complexity” belonging to the category “patient.”

Overall, 334 parameters (n = 268 “qualitative parameters” and n = 66 “quantitative, inconclusive parameters”) were grouped into 21 “qualitative factors” without any quantitative assessment and into 19 quantitatively assessed but “inconclusive factors.” The remaining 57 “quantitative parameters” were aggregated into two “quantitative, inconsistent factors” (i.e., alert frequency, laboratory value) showing contradicting effects on alert acceptance and into 33 “quantitative, consistent factors” with a clear impact on acceptance. Twenty-six of these latter factors fostered alert acceptance and the remaining seven reduced it. In the category “clinical decision support system,” which comprised the most extracted factors (n = 32), only 40 % (n = 10) were quantitatively investigated and showed a significant impact on alert acceptance. More than half of all factors in this category (n = 18) were only qualitatively mentioned in the literature without any approach toward quantitative assessment. In the category “care provider,” 50 % of the factors (12 out of 24) altered alert acceptance significantly, compared with approximately 56 % in the category “setting” (5 out of 9 factors), approximately 71 % in the category “patient” (5 out of 7 factors), and 100 % (3 out of 3 factors) in the category “involved drug.”

Each factor was typically mentioned in about three studies. The comprehensive overview of all modulators of alert acceptance is shown in [Fig. 2] and in [Table S1] in the [Supplementary Material] (available in the online version).

Fig. 2 Overview of all modulators of alert acceptance classified by categories, determinants, and factors. Categories and determinants are ordered by the total number of parameters in parentheses, quantitative factors are shown on the left, and qualitative factors on the right (green filled squares: quantitative, consistent factor showing positive correlation with alert acceptance; red filled squares: quantitative, consistent factor showing negative correlation with alert acceptance; yellow filled squares: quantitative, inconsistent factor showing positive and negative correlation with alert acceptance; gray filled squares: quantitative, inconclusive factors without significant positive or negative assessment of alert acceptance; white squares: qualitative factors without any quantitative assessment of alert acceptance; number of parameters in parentheses: number labeled with “*” presents the number of parameters with statistically significant effect on alert acceptance; ↑: positive correlation with alert acceptance; ↓: negative correlation with alert acceptance; ↔: no significant correlation with alert acceptance; numbers without “*” describe the number of quantitative, inconclusive (↔), and qualitative parameters within this factor); #several modulators were grouped to one single intervention;[64] lab: laboratory.

The large majority of the 35 quantitative factors (n = 22) were studied once, whereas nine factors were investigated twice (i.e., alert display,[33] [64] filtering, clustering or deactivation of alerts,[46] [61] interruptive alerts,[45] [56] alert frequency,[33] [54] inclusion of patient-specific context factors,[60] [64] care provider's professional status,[62] [63] laboratory value,[49] [57] weekday,[39] [41] and drug triggering the alert[57] [62]). Two factors were analyzed three times (i.e., tiering of alerts according to severity[33] [45] [64] and care provider's department[39] [57] [62]), one factor four times in two different articles (i.e., assessment of alert relevance by care provider[50] [52]), and one factor was studied seven times in seven independent articles (i.e., alert type[39] [40] [41] [48] [54] [62] [65]).

The majority of the included studies were retrospective, descriptive assessments and only nine studies (reflecting 12 factors) reported on the effects of prospective interventions, i.e., whether the alert acceptance improved after specific changes were implemented ([Table 1]).

Table 1

Quantitative parameters reporting effects on alert acceptance

Columns: Category | Factor | Quantitative effect on alert acceptance | Effect size (OR [95 % CI], IRR [95 % CI], RR [95 % CI], r, p-value) | Study characteristics | Setting characteristics, study period | CDSS software characteristics | Alert type | Total number of alerts, acceptance rate | Ref.

CLINICAL DECISION SUPPORT SYSTEM

Tiering of alerts according to severity

↑[a]

Alert acceptance increased the higher the severity level of the alert.

OR [1.74 (1.63–1.86)]

Retrospective study

Primary care and in-patient setting

12 months

Not specified

DDI alerts

50,788 alerts

18.3–46.7 % according to site

[33]

↑[a]

Alert acceptance rate increased after stratifying alerts by severity level.

p < 0.001

Pre–post intervention study

In-patient setting

14 months

“Clinical Workstation” (in-house)

DDI alerts

Between 90 and 200

Between 2.0 and 52.4 %

[45]

↑[a]

Acceptance rate was higher at prescription and administration level for a complex intervention including adjustments in tiering of alerts according to severity.[b]

RR [4.02 (3.17–5.10)]

RR [1.16 (1.08–1.25)]

Pre–post intervention study

In-patient setting

pre: 8 months

post: 8 months

Primuz

DDI alerts

3,717 alerts

(pre: 1,087 alerts, post: 2,630 alerts)

25.5 % at prescription level, 54.4 % at administration level

[64] [b]

Alert display

↑[a]

Alert acceptance was higher the better alerts are displayed, e.g., according to alert visibility like color or shape.

OR [4.75 (3.87–5.84)]

Retrospective study

Primary care and in-patient setting

12 months

Not specified

DDI alerts

50,788 alerts

18.3–46.7 % according to site

[33]

↑[a]

Acceptance rate was higher at prescription and administration level for a complex intervention including adjustments in the alert display.[b]

RR [4.02 (3.17–5.10)]

RR [1.16 (1.08–1.25)]

Pre–post intervention study

In-patient setting

pre: 8 months

post: 8 months

Primuz

DDI alerts

3,717 alerts

(pre: 1,087 alerts, post: 2,630 alerts)

25.5 % at prescription level, 54.4 % at administration level

[64] [b]

Integration of laboratory data

↓[a]

Alert acceptance was lower when alerts displayed potassium levels associated with hyperkalemia.

p < 0.01

Randomized controlled trial

Primary care

6 months

Gopher order entry system (CPOE)

DDI alerts

2,140 alerts

16.4 % (intervention and control group)

[49]

Filtering, clustering, or deactivation of alerts

↑[a]

Deactivation of clinically relevant alerts increased acceptance rate for pharmacists.

p < 0.001

Cross-sectional intervention study

In-patient setting

36 months

Medi-Span

DDI alerts

2,391,880 alerts

4.9 % (baseline), 15.6 % (postinterventional)

[46]

↑[a]

Filtering and suppressing of “intermediate” DDI alerts increased acceptance of DDI alerts.

+ 2.0 % (adjusted) (1.4–2.4)

Retrospective pre–post study

In-patient setting

10 months

First DataBank

DDI alerts

19,217 alerts, 4,461 alerts

2.1 % (baseline)

3.9 % (postinterventional)

[61]

Tailoring of alerts

↑[a]

Acceptance rate was higher at prescription and administration level for a complex intervention including adjustments in tailoring of alerts.[b]

RR [4.02 (3.17–5.10)]

RR [1.16 (1.08–1.25)]

Pre–post intervention study

In-patient setting

pre: 8 months

post: 8 months

Primuz

DDI alerts

3,717 alerts

(pre: 1,087 alerts, post: 2,630 alerts)

25.5% at prescription level, 54.4 % at administration level

[64] [b]

Creation of individual screening intervals for drugs in the checking

↑[a]

Acceptance rate was higher at prescription and administration level for a complex intervention including adjustments in the creation of individual screening intervals for drugs in the checking.[b]

RR [4.02 (3.17–5.10)]

RR [1.16 (1.08–1.25)]

Pre–post intervention study

In-patient setting

pre: 8 months

post: 8 months

Primuz

DDI alerts

3,717 alerts

(pre: 1,087 alerts, post: 2,630 alerts)

25.5% at prescription level, 54.4 % at administration level

[64] [b]

Interruptive alerts

↑[a]

Alert acceptance rate increased for interruptive alerts.

p < 0.001

Pre–post intervention study

In-patient setting

14 months

“Clinical Workstation” (in-house)

DDI alerts

Between 90 and 200

Between 2.0 and 52.4 %

[45]

↑[a]

Alert acceptance rate increased when a hard stop is implemented for “chart closure.”

p = 0.013

Pre–post intervention study

Primary care

16 months

Epic®, MYMEDS, CareConnect

Best practice advisory alerts

179 alerts

9.5 %

[56]

Alert type

↑[a]

Alert acceptance varied by alert type. The highest acceptance was seen for dose alerts and lowest for duplicate therapy alerts and major DDI alerts.

p < 0.001

Retrospective study

In-patient setting

12 months

SafeRx® CDSS

Dosing alerts, inadequate dose for reduced renal function alerts, DDI alerts, duplicate therapy alerts

145,103 alerts

5.3 %

[39]

↑[a]

Alert acceptance varied by alert type. Duplicate medication alerts were more often accepted than DDI alerts and DAI alerts were most often overridden.

p < 0.0001

Cross-sectional study

In-patient setting

36 months

Brigham Integrated Clinical Information System (in-house)

DDI and DAI alerts, duplicate drug alerts

213,253 alerts

73.3 %

[40]

↑[a]

Alert acceptance varied according to alert type.

Acceptance of dose and DDI alerts: OR [2.09 (2.03–2.15)], acceptance of DAI alerts: OR [2.36 (2.29–2.43)]

Retrospective study

Primary care and in-patient setting

24 months

Epic® Care

Dose alerts, DDI alerts, DAI alerts

517,286 alerts

12.8 %

[41]

↑[a]

Alert acceptance varied according to alert type. DDI alerts were less often accepted than DAI alerts.

p < 0.001

▴ Retrospective study

In-patient setting

4 days

Cerner

DDI alerts, DAI alerts

2,455 alerts

7.1 %

[48]

↑[a]

Alert acceptance varied according to alert type. DDI alerts were accepted less often than DAI alerts.

p < 0.001

Retrospective study

Primary care

9 months

Cerner

DDI alerts

229,663 alerts

9.2 %

[54]

↑[a]

Acceptance rate varied by the alert type. Alert acceptance was higher for age alerts, allergy alerts, gender alerts, and pregnancy alerts.

OR [0.8 (0.71–0.90)]

OR [0.54 (0.46–0.62)]

OR [0.43 (0.33–0.56)]

OR [0.72 (0.64–0.81)]

Retrospective study

In-patient setting

18 months

DARWIN's CDSS

Age, DAI, disease, duplication, gender, lactation, pregnancy, route, DDI, dosage alerts

102,887 alerts

36.23 %

[62]

↑[a]

Acceptance rate of interruptive alerts differed significantly depending on the alert type, reaching 85.7 % for DDI alerts, 65.3 % for contraindicated drugs in hyperkalemia, and 25.1 % for potentially inappropriate medication for patients >65 years.

p < 0.0001

Retrospective study

In-patient setting

53 months

AiDKlinik ®

Contraindicated DDI with simvastatin, potentially inappropriate medication for patients >65 years, contraindicated drugs in hyperkalemia

468 prescribing sessions with at least one interruptive alert

57.5 %

[65]

Alert frequency

↑[a]

Alert acceptance was higher for repeated alerts.

OR [1.30 (1.23–1.38)]

Retrospective study

Primary care and In-patient setting

12 months

Not specified

DDI alerts

50,788 alerts

18.3–46.7% according to site

[33]

↓[a]

Alert acceptance decreased for repeated alerts of the same medication and patient.

OR [0.03 (0.03–0.03)]

Retrospective study

Primary care

9 months

Cerner

DDI alerts

229,663 alerts

9.2%

[54]

Inclusion of patient-specific context factors

↑[a]

Alert acceptance increased when recent potassium laboratory values determined alert severity level of the DDI alert by filtering informative alerts which reduced alert burden.

p < 0.001

Pre–post intervention study

In-patient setting

24 months

Primuz

Potassium-increasing DDI alerts

1,461 alerts,

89 alerts

24.4 % (baseline), 87.6 % (postintervention)

[60]

↑[a]

Acceptance rate was higher at prescription and administration level for a complex intervention including the inclusion of patient-specific context factors.[b]

RR [4.02 (3.17–5.10)]

RR [1.16 (1.08–1.25)]

Pre–post intervention study

In-patient setting

pre: 8 months

post: 8 months

Primuz

DDI alerts

3,717 alerts

(pre: 1,087 alerts, post: 2,630 alerts)

25.5 % at prescription level, 54.4 % at administration level

[64] [b]

CARE PROVIDER

Assessment of alert relevance by care providers

↑[a]

Care providers' opinion of the helpfulness of CDSS was positively correlated with alert acceptance.

r = 0.304,

p = 0.003

Email survey

Primary care

12 months

Not specified

18,044 alerts

38.1 %

[50]

↑[a]

Care providers' opinion of the accuracy of CDSS was positively correlated with alert acceptance.

r = 0.338,

p = 0.001

Email survey

Primary care

12 months

Not specified

18,044 alerts

38.1 %

[50]

↑[a]

Care providers' self-reported subjective opinion about their acceptance rate was positively correlated with their real acceptance rate.

r = 0.270,

p = 0.008

Email survey

Primary care

12 months

Not specified

18,044 alerts

38.1%

[50]

↑[a]

Alert acceptance increased when the alert was considered valuable.

OR [3.18 (2.16–4.20)]

Expert panel review

Primary care

10 months

Cerner

DDI alerts

229,663 alerts

8.8 % (baseline)

[52]

(Subjective) assessment of scientific evidence by care providers

↑[a]

Alert acceptance increased when care providers assessed stronger scientific evidence for the interaction.

OR [2.34 (1.08–3.60)]

Expert panel review

Primary care

10 months

Cerner

DDI alerts

229,663 alerts

8.8 % (baseline)

[52]

Number of alerts per care provider

↓[a]

Overall acceptance rate was lower with an increasing number of alerts.

p < 0.001

Before-after study

Primary care

12 months

Longitudinal Medical Record and Epic® Care

DDI alerts

Not specified

5.0–100 % according to tier and CDSS

[55]

Number of alerts per order

↓[a]

Alert acceptance decreased for a higher number of interruptive alerts per order.

p < 0.001

Before–after study

Primary care

12 months

Longitudinal Medical Record and Epic® Care

DDI alerts

Not specified

5.0–100 % according to tier and CDSS

[55]

Number of prescriptions per care provider

↓[a]

Alert acceptance decreased with an increasing number of written electronic prescriptions by the clinician.

OR from [0.65 (0.56–0.77)] to [0.83 (0.74–0.93)]

Retrospective study

Primary care

9 months

Cerner

DDI alerts

229,663 alerts

9.2 %

[54]

Professional status

↑[a]

Alert acceptance was higher for the fellow and faculty group than for residents.

OR [0.9 (0.86–0.94)] and OR [0.73 (0.66–0.81)]

Retrospective study

In-patient setting

18 months

DARWIN's CDSS

Age, DAI, disease, duplication, gender, lactation, pregnancy, route, DDI, dosage alerts

102,887 alerts

36.23 %

[62]

↑[a]

Alert acceptance was higher for residents than for other health professional categories like assistant consultants, consultants, fellows or pharmacists.

p < 0.001

Retrospective study

In-patient setting

9 days

Cerner

Dose range alerts

3,000 alerts

4 %

[63]

Profession

↑[a]

Alert acceptance was higher for nurses than for physicians.

IRR [4.56 (1.72–12.06)]

Retrospective cohort study

Primary care

42 months

Epic® Care

DDI and DAI alerts

326,203 alerts

Less than 1 %

[13]

Physicians' year of residency

↓[a]

Alert salience decreased for postgraduate year 3 residents compared with postgraduate year 1 and 2 residents.

p < 0.001

Cross-institutional retrospective study

In-patient setting

3 months

Epic® and First DataBank, Epic® and Medi-Span

Duplicate medications, drug interaction and compatibility issues, allergies, misadministration in terms of dosage and frequency

52,624 alerts

10.6 %

[66]

Department

↑[a]

Alert acceptance was higher for one of two Internal Medicine departments.

IRR [1.91 (1.24–2.90)]

Prospective study

In-patient setting

1.5 months

SafeRx® CDSS

Dosing alerts, inadequate dose for reduced renal function alerts, DDI alerts, duplicate therapy alerts

3,064 alerts

4.2 %

[39]

↑[a]

Alert acceptance was higher in the intensive care unit than in the medical care unit.

p < 0.0001

Retrospective study

In-patient setting

3 months

Not specified

Renal dose adjustment alerts

2,341 alerts

Not specified

[57]

↑[a]

Alert acceptance was higher in the surgical department than in the emergency department.

OR [0.90 (0.85–0.94)]

Retrospective study

In-patient setting

18 months

DARWIN's CDSS

Age, DAI, disease, duplication, gender, lactation, pregnancy, route, DDI, dosage alerts

102,887 alerts

36.23 %

[62]

Experience in using EHR or electronic prescribing

↑[a]

Alert acceptance increased for clinicians with longer experience in electronic prescribing.

OR from [1.15 (1.06–1.24)] to [1.38 (1.24–1.53)]

Retrospective study

Primary care

9 months

Cerner

DDI alerts

229,663 alerts

9.2 %

[54]

Time to resolve the alert

↑[a]

Think time was longer when alerts were accepted.

p < 0.001

Interventional study

In-patient setting

12 months

Cerner

DDI alerts

Not specified

[43]

Quality of medical school

↑[a]

Alert acceptance was higher for care providers graduating from a Top 25 medical school.

r = 0.198,

p = 0.009

Email survey

Primary care

12 months

Not specified

18,044 alerts

38.1 %

[50]

PATIENT

Other patient characteristics

↑[a]

Alert acceptance was higher in the in-patient setting than in the outpatient setting.

OR [2.63 (2.32–2.97)]

Retrospective study

Primary care and in-patient setting

12 months

Not specified

DDI alerts

50,788 alerts

18.3–46.7 % according to site

[33]

Sex

↑[a]

Alert acceptance was higher for male than for female patients.

p = 0.002, OR [0.758 (0.638–0.900)]

Retrospective study

In-patient setting

3 months

Not specified

Renal dose adjustment alerts

2,341 alerts

Not specified

[57]

Comorbidity

↑[a]

Acceptance rate varied by patients' comorbidity. Acceptance rate was higher in patients with noncardiogenic chest pain, dyspnea, and nausea or vomiting.

OR [0.84 (0.725–0.98)]

OR [0.93 (0.88–0.98)]

OR [0.7 (0.61–0.80)]

Retrospective study

In-patient setting

18 months

DARWIN's CDSS

Age, DAI, disease, duplication, gender, lactation, pregnancy, route, DDI, dosage alerts

102,887 alerts

36.23 %

[62]

Risk factors

↑[a]

Alert acceptance increased with the increase of patients' severity score.

OR [0.82 (0.74–0.91)]

OR [0.89 (0.85–0.94)]

Retrospective study

In-patient setting

18 months

DARWIN's CDSS

Age, DAI, disease, duplication, gender, lactation, pregnancy, route, DDI, dosage alerts

102,887 alerts

36.23 %

[62]

Laboratory value

↓[a]

Alert acceptance decreased when low potassium levels (< 3.9 mEq/L) of patients were displayed.

p < 0.01

Randomized controlled trial

Primary care

6 months

Gopher order entry system (CPOE)

DDI alerts

2,140 alerts

16.4 % (intervention and control group)

[49]

↑[a]

Alert acceptance was higher for lower eGFR.

p < 0.0001

Retrospective study

In-patient setting

3 months

Not specified

Renal dose adjustment alerts

2,341 alerts

Not specified

[57]

SETTING

Weekday

↑[a]

Acceptance rate was slightly higher for prescriptions written on weekends.

OR [1.49 (1.01–2.19)]

Prospective study

In-patient setting

1.5 months

SafeRx® CDSS

Dosing alerts, inadequate dose for reduced renal function alerts, DDI alerts, duplicate therapy alerts

3,064 alerts

4.2 %

[39]

↑[a]

Alert acceptance was influenced by the weekday: it was highest on Fridays, decreased on all other workdays except for Wednesdays and Sundays, and was lowest on Mondays.

OR Mondays (reference Fridays) [0.79 (0.75–0.83)]

Retrospective study

Primary care and in-patient setting

24 months

Epic® Care

Dose alerts, DDI alerts, DAI alerts

517,286 alerts

12.8 %

[41]

Moment of alert display in the workflow

↓[a]

Alerts were accepted less often when alerts interrupted prescribers in their workflow.

p = 0.026

Systematic review

Primary care and in-patient setting

Not specified

Diverse

Not specified

Not specified

[59]

Season

↑[a]

Alert acceptance varied according to the season of the year and the alert type. Dose alerts were more frequently accepted in fall, and DDI and DAI alerts in winter.

OR dose alerts fall (reference spring) OR [1.11 (1.07–1.15)], OR DAI alerts winter (reference summer) OR [1.15 (1.07–1.24)]

Retrospective study

Primary care and in-patient setting

24 months

Epic® Care

Dose alerts, DDI alerts, DAI alerts

517,286 alerts

12.8 %

[41]

Night shift

↓[a]

Alert acceptance was influenced by shift time. Alerts triggered by prescriptions during night shifts were accepted less frequently.

OR [0.47 (0.24–0.91)]

Prospective study

In-patient setting

1.5 months

SafeRx® CDSS

Dosing alerts, inadequate dose for reduced renal function alerts, DDI alerts, duplicate therapy alerts

3,064 alerts

4.2 %

[39]

Pharmacist involvement and guidance

↑[a]

Acceptance rate was higher when pharmacists were involved and guidance was given.

p = 0.027

Retrospective study

In-patient setting

3 months

Not specified

Renal dose adjustment alerts

2,341 alerts

Not specified

[57]

INVOLVED DRUG

Drug triggering the alert

↑[a]

Alert acceptance was influenced by medication category.

p < 0.0001

Retrospective study

In-patient setting

3 months

Not specified

Renal dose adjustment alerts

2,341 alerts

Not specified

[57]

↑[a]

Acceptance rates were higher for central nervous system drugs, endocrine and metabolic drugs, gastrointestinal agents, and respiratory agents.

OR [0.67 (0.59–0.76)]

OR [0.84 (0.74–0.96)]

OR [0.6 (0.53–0.67)]

OR [0.85 (0.75–0.96)]

Retrospective study

In-patient setting

18 months

DARWIN's CDSS

Age, DAI, disease, duplication, gender, lactation, pregnancy, route, DDI, dosage alerts

102,887 alerts

36.23 %

[62]

Critical dose drugs

↑[a]

Alert acceptance was higher for critical dose drugs.

OR [1.13 (1.07–1.21)]

Retrospective study

Primary care and in-patient setting

12 months

Not specified

DDI alerts

50,788 alerts

18.3–46.7 % according to site

[33]

Severity of resulting adverse drug event

↑[a]

Alert acceptance increased according to the severity of the typical ADE.

+ 3.3 %

OR [3.30 (2.14–4.47)]

Expert panel review

Primary care

10 months

Cerner

DDI alerts

229,663 alerts

8.8 % (baseline)

[52]

Abbreviations: ▴, descriptive evaluation of influence on acceptance; ♦, interventional study design; ↑, positive correlation with alert acceptance; ↓, negative correlation with alert acceptance; ADE, adverse drug event; CI, confidence interval; DAI, drug–allergy interaction; DDI, drug–drug interaction; EHR, electronic health record; IRR, incident rate ratio; mEq/L, milliequivalent per liter; OR, odds ratio; p, p-value; r, Spearman's rank correlation coefficient; Ref., reference; RR, relative risk.


a Statistically significant effect on alert acceptance.


b Several modulators were grouped to one single intervention.[64]




Discussion

In this review, 391 published parameters potentially modulating the acceptance of medication alerts were compiled into a comprehensive model consisting of 75 distinct factors, summarized as 25 determinants belonging to five categories. The five categories were investigated to varying degrees: most of the quantitative parameters were extracted in the category “clinical decision support system” and the fewest in the category “involved drug,” clearly showing in which sectors it seems easier to adjust alerts to increase acceptance rates (e.g., interruptive vs. noninterruptive alerts[45] [56]) than in others (e.g., alerts on neuromuscular drugs or topical products[57] [62] were least accepted, but the respective drugs still need to be prescribed when indicated). More than a quarter of all factors were described only qualitatively, and another 25 % of the factors were inconclusive, meaning that these factors did not significantly influence acceptance for various reasons.

Clinical decision support system: The largest number of factors (n = 32) was assigned to the category “clinical decision support system,” but quantitative effects were reported for only 10 of them, eight of which increased alert acceptance.

In contrast, the integration of laboratory data such as potassium levels lowered alert acceptance in normal-risk patients when levels associated with hyperkalemia were displayed in the alert.[49] This effect was unexpected because more patient-specific alerts have otherwise increased acceptance rates.[12] Duke and coworkers discussed potential reasons for this finding, suggesting that overall alert acceptance was poor or that patients with hyperkalemia often were patients with renal failure already on hemodialysis[49] and hence under close monitoring.

Alert frequency was one of two factors in the model for which different articles reported different effects on alert acceptance: in one study, alert acceptance increased for repeated alerts whereas it decreased in another study for repeated alerts of the same medication and patient.[33] [54]

In general, parameters concerning the CDSS are difficult to transfer from one setting to another because even small differences in alert display,[33] [64] in allocation and filtering of particular severity levels,[43] [46] [61] or in the inclusion of context factors (and thus integration of the system in the hospital framework[92]) could have different effects.

However, alert type as a single factor was analyzed in seven different settings with six different CDSS software vendors (SafeRx® CDSS, the in-house Brigham Integrated Clinical Information System, Epic®, Cerner, DARWIN's CDSS, and the stand-alone system AiDKlinik®) and consistently influenced alert acceptance.[39] [40] [41] [48] [54] [62] [65] Because DAI alerts were accepted more often than DDI alerts in four settings[41] [48] [54] [62] but less often in only one setting,[40] it is debatable whether study and implementation settings varied too much to merge these parameters into one factor. Moreover, both studies that used Cerner's CDSS software and compared DDI and DAI alerts achieved higher acceptance rates for DAI alerts.[48] [54] Hence, it seems advisable to compare only study settings with the same CDSS software vendor and the same alert types.

Upon closer investigation, not only the settings in which CDSSs were implemented but also the methods used to measure an influencing factor led to variable effects. Three independent articles analyzed the time needed to resolve an alert by calculating a “think time” or a “dwell time,” respectively.[43] [93] [94] The measured time interval started in all cases with the appearance of the alert and ended when the selected actions were completed, i.e., when the alert was closed or resolved.[43] [93] [94] Elias and coworkers reported that most alerts were closed in less than 3 seconds.[93] In the emergency department described by Todd and coworkers, physicians needed a mean of 7.06 seconds,[94] whereas Schreiber and coworkers combined the adaptation of alert severity levels with the time measurement and its influence on alert acceptance, so that the results are not directly comparable.[43]

Our findings concerning the category “clinical decision support system” are partially in agreement with the previously published literature on alert appropriateness, which confirms that technology factors are the most frequently reported and have the greatest influence on alert acceptance.[38]

Care provider: Considering the provider-related quantitative and consistent factors, most (8 out of 12) were positively correlated with alert acceptance. Conversely, the remaining four factors consistently reduced alert acceptance and concerned either the care providers' workload or their work experience.[54] [55] [66] Increasing exposure to digital solutions appears to increase digital literacy and thus might explain why alert acceptance increased for clinicians with more experience in electronic prescribing.[54] In addition, the professional background was proposed to modulate alert acceptance because nurse practitioners were four times more likely to accept an alert than physicians[13]; however, physicians are usually responsible for accepting or overriding alerts. Furthermore, Gadhiya and coworkers described that alert acceptance decreased as the experience of postgraduate residents increased, and discussed this finding in the context of alert desensitization and care providers' exposure to a large number of alerts.[66] As this finding might contradict the observation that digital literacy increases acceptance, it must be considered that in this case first-, second-, and third-year residents were compared, so that other variables such as an increasing workload and a higher number of patients under their care may have played a role. The results of these two similar factors (longer experience in using the electronic health record [EHR] or electronic prescribing increased acceptance, whereas physicians' years of residency decreased it) again show very well that parameters extracted from different studies need to be compared with caution.

Patient: Regarding the seven factors in the category “patient,” four fostered acceptance and one factor showed contradictory effects. Patient characteristics such as the surroundings in which the patient was treated can influence alert acceptance: alerts in the in-patient setting were more frequently accepted than in the outpatient setting.[33] [67] Furthermore, one study reported that alerts were more often accepted if they concerned male patients.[57] However, this result remained unconfirmed in other studies[54] and might be influenced by other factors not assessed in this study. Likewise, various articles showed that patients' age did not affect alert acceptance by care providers,[54] [57] and care providers' sex and age also did not affect acceptance.[13] [50] In contrast, patient complexity in the presence of risk factors such as an elevated severity score or comorbidities fostered alert acceptance.[62] Another patient-related factor, laboratory values, affected alert acceptance in different ways depending on the value analyzed. For instance, displaying a patient's potassium levels below 3.9 mEq/L decreased alert acceptance,[49] whereas displaying laboratory values indicating renal insufficiency increased it.[57]

Setting: Setting parameters were also reported as important variables influencing alert acceptance; three out of nine factors increased and two decreased acceptance. Alerts triggered by prescriptions during night shifts were accepted less frequently than during day shifts,[39] and the season and weekday were also found to affect alert acceptance,[39] [41] suggesting that specific measures must be taken to increase alert acceptance in periods when it seems to be reduced. Concerning the working environment, acceptance rates increased with pharmacist involvement and guidance,[57] whereas alerts were accepted less often when they interrupted prescribers in their workflow,[59] although other literature did not confirm the latter finding.[37]

Involved drug: Regarding the involved drug as such, only three factors were reported to modulate alert acceptance. Alert acceptance differed significantly according to the drug triggering the alert (anticonvulsants > miscellaneous drugs > antimicrobials > cardiovascular drugs > H2 antagonists > antihistamines > hypoglycemic drugs > antihypertensive drugs > analgesics vs. neuromuscular drugs,[57] or gastrointestinal agents > central nervous system drugs, respiratory agents > endocrine and metabolic drugs > antineoplastic drugs > miscellaneous products > genitourinary agents > cardiovascular agents > neuromuscular drugs > hematological agents > nutritional products > analgesics and anesthetics > anti-infective agents > biologicals > topical products[62]). Acceptance was higher for critical dose drugs[33] and increased by 3.3 % according to the severity of the typical adverse drug event provoked by the drug itself.[52]

Further Implications on Alert Acceptance

In general, various factors potentially modulating alert acceptance have already been identified, although the true impact of numerous factors is still unknown, because more than half of all factors were only qualitatively assessed and/or showed inconclusive results when analyzed in different studies. As effect sizes of alert acceptance metrics and study designs differ widely ([Table 1]), rules for the ideal alert and its measurement (i.e., the CREATOR rules,[95] measurement of acceptance rates using event analysis[59]) as well as general alert metrics assessing alert acceptance in a quantitative way[95] have already been defined to increase comparability in future studies.
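
As a simple illustration of such a quantitative metric, the sketch below computes an overall acceptance rate from an alert-level event log; the log format is hypothetical and does not reproduce the event analysis or the CREATOR metrics cited above.

```python
from collections import Counter

def acceptance_rate(events):
    """Share of fired alerts that were accepted rather than overridden.

    `events` is a hypothetical alert-level log with one entry per fired alert,
    either "accepted" (order changed or cancelled so that the alert no longer
    applies) or "overridden" (order continued unchanged despite the alert).
    """
    counts = Counter(events)
    total = counts["accepted"] + counts["overridden"]
    return counts["accepted"] / total if total else float("nan")

# Example: 3 accepted out of 10 fired alerts -> 0.3
log = ["overridden"] * 7 + ["accepted"] * 3
print(acceptance_rate(log))
```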

Considering the currently gathered evidence, it can be assumed that the greatest effects for future CDSS implementation and development can be achieved by adapting factors of the category “clinical decision support system.” These mainly technical factors seem customizable: vendors can adapt a user-centered design, alerts can be addressed to the appropriate users (whereby alert appropriateness in general can differ between professions) and placed in the right position, and the handling of alerts can be optimized with fewer mouse clicks or by providing shortcuts. The inclusion of stakeholders' perspectives and the continuous quality assurance and improvement of alerts together with interdisciplinary expert panels showed positive signals for alert optimization, thus contributing to better acceptance.[23] Factors such as alert content or alert specificity are mentioned frequently, but because a “one-size-fits-all” approach is impossible, specific alerts are still rare.[12]

Regarding the two human categories “care provider” and “patient,” only a few factors can be optimized without major procedural changes, for example in the workload (e.g., user training to foster substantial knowledge and thus the assessment of alert relevance, as it has been shown that care providers value relevant alerts[50] [52]); it also seems conceivable to adjust rigid factors like education, specialization, or work experience with longer-term training interventions. Consequently, it is important to emphasize that well-educated care providers who are experienced in using the EHR or electronic prescribing and who understand the basic functionalities of the systems and the way they work are more capable of assessing the relevance of alerts and of knowing the underlying scientific evidence. It does not seem to matter whether care providers have these skills from the beginning of their careers or acquire them later on; according to the model, all of these skills positively influenced alert acceptance, and it is known that the more alerts are accepted, the safer pharmacotherapy seems to be.[4] [96]

When optimization of a factor seems impossible (e.g., complex patients, alerting during night shifts, in different seasons, in the in-patient or outpatient setting, for specific necessary drugs, or at specific clock times), technical improvements in the CDSS could again take effect, for example by switching from noninterruptive to interruptive alerting at night when only few medications are prescribed. However, it should also be noted that our model did not distinguish between more and less meaningful or modifiable factors (i.e., alert display, tailoring of alerts, or moment of alert display in the workflow vs. season, in-patient/outpatient setting, or sociodemographic data).

Going further, this taxonomic model hierarchically classifying modulators of alert acceptance has to be understood as a starting point for gathering more summarized evidence and for understanding the context and relationships of the individual modulators influencing alert acceptance. The complex intervention reported by Muylle and coworkers, for example, consists of parameters that can be allocated to several factors of this model (i.e., inclusion of patient-specific context factors, tiering of alerts according to severity, and filtering, clustering, or deactivation of alerts).[60] [64] Because several factors were evaluated at once, the intervention had a significant impact overall, but the contribution of each single factor was only partly sufficient for significance.

Future studies should therefore establish an ontology that also encompasses complex modulators. Such modulators consist of more than one adapted component and should be fragmented into single, interrelated components. If these single components contain as few study-specific dependencies as possible when the ontological construct is built, study-specific characteristics are reduced, and both the transferability and the set of acceptance-enhancing interventions are extended.



Limitations

Several limitations are worth mentioning. First, we conducted a review including most but not all applicable elements of the PRISMA guideline[25] and searched for literature in only one database (PubMed). Furthermore, gray literature was not considered and only studies published in English or German were included, suggesting that not all available evidence was captured and that a risk of publication bias cannot be excluded. However, the aim of this work was to identify as many modulators of alert acceptance as possible, preferably assessed in a quantitative way, and owing to the narrative approach combined with the ongoing search strategy after the initial search period, we nevertheless expect to have covered the majority of available factors. Second, each factor influencing alert acceptance was assigned to one single determinant and each determinant to one category so that an unambiguous classification of all modulators in the model was ensured. Yet, several parameters could have been assigned to various factors, meaning that the classification of the modulators and the naming of the variables were subjective processes to a certain extent. Hence, bias and a potential risk of inconsistency cannot be discounted, even though two reviewers assigned the modulators independently and discussed differences until congruency was reached. It is equally important to mention that parameters extracted from complex interventions composed of different components were allocated to several factors[64] or to the most appropriate factor[60] according to the description in the original article. Each quantitative parameter is explained in [Table 1] so that complex interventions are presented as transparently as the original article allows. Third, the included studies differed in study design (the majority of the included modulators were assessed in observational studies) and in the interventions, covering a broad range of assessed alerts (90–2,391,880 alerts), various CDSS software characteristics, and different alert types, so that comparability could not be assured for each single parameter. Furthermore, alert acceptance was not calculated in a consistent way in all underlying articles. In particular, qualitatively reported modulators of alert acceptance are subject to the authors' subjective views of their project, and extraction depended on how an intervention or the assessment of the alert acceptance rate was described. Hence, our review reports alert rates, alert acceptance rates, and significance levels as mentioned in the original articles. Intra-study consistency is thereby maintained and, in addition, strict inclusion and exclusion criteria were applied, especially for articles assessing modulators of alert acceptance in a quantitative way.



Conclusion

In this review, we report modulators affecting alert acceptance identified from an extensive literature search and arrange them in a comprehensive model, separately presenting effect sizes of quantitative modulators and reporting qualitative modulators. Given that of 75 factors only 54 were quantitatively analyzed, and only 33 of these with a significant and unambiguous result, this model helps to identify topics where further research is required. As many factors depended on the type of alert and the setting, and because individual study conditions differed, the comparability and transferability of the presented effects on alert acceptance are difficult to judge. Future studies should assess alert acceptance in prospective, interventional designs, ideally using multivariate regression models to detect comparative effect sizes of multiple modulators.
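
To illustrate the recommended analytical approach, a minimal sketch of such a multivariable model is given below, assuming a hypothetical alert-level dataset with one row per fired alert; the variable names are placeholders and the simulated data serve only to make the example runnable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical alert-level dataset: outcome (accepted yes/no) plus a few
# candidate modulators taken from the model (values are simulated).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "accepted": rng.integers(0, 2, n),
    "alert_type": rng.choice(["DDI", "DAI", "dose"], n),
    "night_shift": rng.integers(0, 2, n),
    "provider_experience_years": rng.integers(0, 30, n),
})

# Multivariable logistic regression yielding adjusted odds ratios per modulator.
fit = smf.logit(
    "accepted ~ C(alert_type) + night_shift + provider_experience_years",
    data=df,
).fit(disp=False)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95 % confidence intervals
```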



Clinical Relevance Statement

Medication alerts can enhance medication safety and reduce medication error rates, yet a major challenge is their low acceptance rate, often due to the low specificity and sensitivity of the alerts. As general strategies to tailor alert quality as well as a compilation of modulators potentially influencing medication alert acceptance are lacking, we compiled a comprehensive overview of successful, inconclusive, and failing modulators of alert acceptance together with their effect sizes (when investigated). This overview identifies studied domains with equivocal or insufficient information on their impact on alert acceptance and shows that the comparability and transferability of modulators of alert acceptance remain difficult to analyze.



Multiple Choice Questions

  1. To which category in the model could the most factors be assigned?

    • a. Clinical decision support system

    • b. Care provider

    • c. Patient

    • d. Setting

    Correct Answer: The correct answer is option a. Most (n = 32) of the 75 factors could be assigned to the category “clinical decision support system.”

  2. How many quantitative factors showed contradicting effects on alert acceptance?

    • a. 1

    • b. 2

    • c. 3

    • d. 4

    Correct Answer: The correct answer is option b. The quantitative factors “alert frequency” and “laboratory value” showed contradicting effects on alert acceptance.



Conflict of Interest

J.A.B., W.E.H., and H.M.S. are involved in the development of databases that can be used for clinical decision support systems. At the time of the study, W.E.H. was a shareholder of Dosing GmbH, a spin-off company distributing AiDKlinik®. For any further conflicts of interest, all authors filled in the ICMJE form.

Acknowledgments

We thank Viktoria Wurmbach, Viktoria Jungreithmayr, Sophia Klasing, and Robert Moecker for carefully reviewing the allocation of all modulators into the comprehensive model, and Sophie Glockner for carefully proofreading this manuscript.

Protection of Human and Animal Subjects

No human and/or animal subjects were involved in this study.



References

  • 1 Zachariah M, Phansalkar S, Seidling HM. et al. Development and preliminary evidence for the validity of an instrument assessing implementation of human-factors principles in medication-related decision-support systems–I-MeDeSA. J Am Med Inform Assoc 2011; 18 (Suppl. 01) i62-i72
  • 2 Varghese J, Kleine M, Gessner SI, Sandmann S, Dugas M. Effects of computerized decision support system implementations on patient outcomes in inpatient care: a systematic review. J Am Med Inform Assoc 2018; 25 (05) 593-602
  • 3 Olufisayo O, Mohd Yusof M, Ezat Wan Puteh S. Enhancing CDSS alert appropriateness in clinical workflow using the Lean method. Stud Health Technol Inform 2018; 255: 112-116
  • 4 Carroll AE. Averting alert fatigue to prevent adverse drug reactions. JAMA 2019; 322 (07) 601
  • 5 Lee JD, See KA. Trust in automation: designing for appropriate reliance. Hum Factors 2004; 46 (01) 50-80
  • 6 Ranji SR, Rennke S, Wachter RM. Computerised provider order entry combined with clinical decision support systems to improve medication safety: a narrative review. BMJ Qual Saf 2014; 23 (09) 773-780
  • 7 Nuckols TK, Smith-Spangler C, Morton SC. et al. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis. Syst Rev 2014; 3: 56
  • 8 Jaspers MW, Smeulers M, Vermeulen H, Peute LW. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. J Am Med Inform Assoc 2011; 18 (03) 327-334
  • 9 Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003; 163 (12) 1409-1416
  • 10 Salili AR, Hammann F, Taegtmeyer AB. Preventing adverse drug events using clinical decision support systems [in German]. Ther Umsch 2015; 72 (11–12): 693-700
  • 11 van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 2006; 13 (02) 138-147
  • 12 Seidling HM, Klein U, Schaier M. et al. What, if all alerts were specific - estimating the potential impact on drug interaction alert burden. Int J Med Inform 2014; 83 (04) 285-291
  • 13 Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R. with the HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17 (01) 36
  • 14 Shah SN, Amato MG, Garlo KG, Seger DL, Bates DW. Renal medication-related clinical decision support (CDS) alerts and overrides in the inpatient setting following implementation of a commercial electronic health record: implications for designing more effective alerts. J Am Med Inform Assoc 2021; 28 (06) 1081-1087
  • 15 Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med 2003; 163 (21) 2625-2631
  • 16 Payne TH, Nichol WP, Hoey P, Savarino J. Characteristics and override rates of order checks in a practitioner order entry system. Proc AMIA Symp 2002; 602-606
  • 17 Coleman JJ, van der Sijs H, Haefeli WE. et al. On the alert: future priorities for alerts in clinical decision support for computerized physician order entry identified from a European workshop. BMC Med Inform Decis Mak 2013; 13: 111
  • 18 Czock D, Konias M, Seidling HM. et al. Tailoring of alerts substantially reduces the alert burden in computerized clinical decision support for drugs that should be avoided in patients with renal disease. J Am Med Inform Assoc 2015; 22 (04) 881-887
  • 19 Campbell R. The five “rights” of clinical decision support. J AHIMA 2013; 84 (10) 42-47 , quiz 48
  • 20 Berner ES. Clinical decision support systems: State of the Art. AHRQ Publication No 09–0069-EF. 2009. Rockville, MD: Agency for Healthcare Research and Quality;
  • 21 Haefeli WE, Seidling HM. Electronic decision support to promote medication safety [in German]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2018; 61 (03) 271-277
  • 22 Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 2020; 3: 17
  • 23 Van Dort BA, Zheng WY, Sundar V, Baysari MT. Optimizing clinical decision support alerts in electronic medical records: a systematic review of reported strategies adopted by hospitals. J Am Med Inform Assoc 2021; 28 (01) 177-183
  • 24 Miller K, Mosby D, Capan M. et al. Interface, information, interaction: a narrative review of design and functional requirements for clinical decision support. J Am Med Inform Assoc 2018; 25 (05) 585-592
  • 25 Moher D, Liberati A, Tetzlaff J, Altman DG. PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6 (07) e1000097
  • 26 Keasberry J, Scott IA, Sullivan C, Staib A, Ashby R. Going digital: a narrative overview of the clinical and organisational impacts of eHealth technologies in hospital practice. Aust Health Rev 2017; 41 (06) 646-664
  • 27 Villaseñor S, Piscotty Jr RJ. The current state of e-prescribing: implications for advanced practice registered nurses. J Am Assoc Nurse Pract 2016; 28 (01) 54-61
  • 28 Seidling HM, Bates DW. Evaluating the impact of health IT on medication safety. Stud Health Technol Inform 2016; 222: 195-205
  • 29 Rolnick J, Downing NL, Shepard J. et al. Validation of test performance and clinical time zero for an electronic health record embedded severe sepsis alert. Appl Clin Inform 2016; 7 (02) 560-572
  • 30 McClure C, Jang SY, Fairchild K. Alarms, oxygen saturations, and SpO2 averaging time in the NICU. J Neonatal Perinatal Med 2016; 9 (04) 357-362
  • 31 Chou E, Boyce RD, Balkan B. et al. Designing and evaluating contextualized drug-drug interaction algorithms. JAMIA Open 2021; 4 (01) b023
  • 32 Olakotan OO, Mohd Yusof M. The appropriateness of clinical decision support systems alerts in supporting clinical workflows: a systematic review. Health Informatics J 2021; 27 (02): 14604582211007536
  • 33 Seidling HM, Phansalkar S, Seger DL. et al. Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc 2011; 18 (04) 479-484
  • 34 Payne TH, Hines LE, Chan RC. et al. Recommendations to improve the usability of drug-drug interaction clinical decision support alerts. J Am Med Inform Assoc 2015; 22 (06) 1243-1250
  • 35 Khairat S, Marc D, Crosby W, Al Sanousi A. Reasons for physicians not adopting clinical decision support systems: critical analysis. JMIR Med Inform 2018; 6 (02) e24
  • 36 Olakotan OO, Yusof MM. Evaluating the alert appropriateness of clinical decision support systems in supporting clinical workflow. J Biomed Inform 2020; 106: 103453
  • 37 Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330 (7494): 765
  • 38 Olakotan O, Mohd Yusof M, Ezat Wan Puteh S. A systematic review on CDSS alert appropriateness. Stud Health Technol Inform 2020; 270: 906-910
  • 39 Zenziper Straichman Y, Kurnik D, Matok I. et al. Prescriber response to computerized drug alerts for electronic prescriptions among hospitalized patients. Int J Med Inform 2017; 107: 70-75
  • 40 Nanji KC, Seger DL, Slight SP. et al. Medication-related clinical decision support alert overrides in inpatients. J Am Med Inform Assoc 2018; 25 (05) 476-481
  • 41 Dexheimer JW, Kirkendall ES, Kouril M. et al. The effects of medication alerts on prescriber response in a pediatric hospital. Appl Clin Inform 2017; 8 (02) 491-501
  • 42 Cuéllar Monreal MJ, Reig Aguado J, Font Noguera I, Poveda Andrés JL. Reduction in alert fatigue in an assisted electronic prescribing system, through the Lean Six Sigma methodology. Farm Hosp 2017; 41 (01) 14-30
  • 43 Schreiber R, Gregoire JA, Shaha JE, Shaha SH. Think time: a novel approach to analysis of clinicians' behavior after reduction of drug-drug interaction alerts. Int J Med Inform 2017; 97: 59-67
  • 44 Brodowy B, Nguyen D. Optimization of clinical decision support through minimization of excessive drug allergy alerts. Am J Health Syst Pharm 2016; 73 (08) 526-528
  • 45 Cornu P, Steurbaut S, Gentens K, Van de Velde R, Dupont AG. Pilot evaluation of an optimized context-specific drug-drug interaction alerting system: a controlled pre-post study. Int J Med Inform 2015; 84 (09) 617-629
  • 46 Simpao AF, Ahumada LM, Desai BR. et al. Optimization of drug-drug interaction alert rules in a pediatric hospital's electronic health record system using a visual analytics dashboard. J Am Med Inform Assoc 2015; 22 (02) 361-369
  • 47 Parke C, Santiago E, Zussy B, Klipa D. Reduction of clinical support warnings through recategorization of severity levels. Am J Health Syst Pharm 2015; 72 (02) 144-148
  • 48 Bryant AD, Fletcher GS, Payne TH. Drug interaction alert override rates in the Meaningful Use era: no evidence of progress. Appl Clin Inform 2014; 5 (03) 802-813
  • 49 Duke JD, Li X, Dexter P. Adherence to drug-drug interaction alerts in high-risk patients: a trial of context-enhanced alerting. J Am Med Inform Assoc 2013; 20 (03) 494-498
  • 50 Feblowitz J, Henkin S, Pang J. et al. Provider use of and attitudes towards an active clinical alert: a case study in decision support. Appl Clin Inform 2013; 4 (01) 144-152
  • 51 Miller AM, Boro MS, Korman NE, Davoren JB. Provider and pharmacist responses to warfarin drug-drug interaction alerts: a study of healthcare downstream of CPOE alerts. J Am Med Inform Assoc 2011; 18 (Suppl. 01) i45-i50
  • 52 Weingart SN, Seger AC, Feola N, Heffernan J, Schiff G, Isaac T. Electronic drug interaction alerts in ambulatory care: the value and acceptance of high-value alerts in US medical practices as assessed by an expert clinical panel. Drug Saf 2011; 34 (07) 587-593
  • 53 van der Sijs H, van Gelder T, Vulto A, Berg M, Aarts J. Understanding handling of drug safety alerts: a simulation study. Int J Med Inform 2010; 79 (05) 361-369
  • 54 Isaac T, Weissman JS, Davis RB. et al. Overrides of medication alerts in ambulatory care. Arch Intern Med 2009; 169 (03) 305-311
  • 55 Wright A, Aaron S, Seger DL, Samal L, Schiff GD, Bates DW. Reduced effectiveness of interruptive drug-drug interaction alerts after conversion to a commercial electronic health record. J Gen Intern Med 2018; 33 (11) 1868-1876
  • 56 Ramirez M, Maranon R, Fu J. et al. Primary care provider adherence to an alert for intensification of diabetes blood pressure medications before and after the addition of a “chart closure” hard stop. J Am Med Inform Assoc 2018; 25 (09) 1167-1174
  • 57 Choi KS, Lee E, Rhie SJ. Impact of pharmacists' interventions on physicians' decision of a knowledge-based renal dosage adjustment system. Int J Clin Pharm 2019; 41 (02) 424-433
  • 58 Bubp JL, Park MA, Kapusnik-Uner J. et al. Successful deployment of drug-disease interaction clinical decision support across multiple Kaiser Permanente regions. J Am Med Inform Assoc 2019; 26 (10) 905-910
  • 59 Hussain MI, Reynolds TL, Zheng K. Medication safety alert fatigue may be reduced via interaction design and clinical role tailoring: a systematic review. J Am Med Inform Assoc 2019; 26 (10) 1141-1149
  • 60 Muylle KM, Gentens K, Dupont AG, Cornu P. Evaluation of context-specific alerts for potassium-increasing drug-drug interactions: a pre-post study. Int J Med Inform 2020; 133: 104013
  • 61 Knight AM, Maygers J, Foltz KA, John IS, Yeh HC, Brotman DJ. The effect of eliminating intermediate severity drug-drug interaction alerts on overall medication alert burden and acceptance rate. Appl Clin Inform 2019; 10 (05) 927-934
  • 62 Yoo J, Lee J, Rhee PL. et al. Alert override patterns with a medication clinical decision support system in an academic emergency department: retrospective descriptive study. JMIR Med Inform 2020; 8 (11) e23351
  • 63 Al-Jazairi AS, AlQadheeb EK, AlShammari LK. et al. Clinical validity assessment of integrated dose range checking tool in a tertiary care hospital using an electronic health information system. Hosp Pharm 2021; 56 (02) 95-101
  • 64 Muylle KM, Gentens K, Dupont AG, Cornu P. Evaluation of an optimized context-aware clinical decision support system for drug-drug interaction screening. Int J Med Inform 2021; 148: 104393
  • 65 Bittmann JA, Rein EK, Metzner M, Haefeli WE, Seidling HM. The acceptance of interruptive medication alerts in an electronic decision support system differs between different alert types. Methods Inf Med 2021; 60 (05–06): 180-184
  • 66 Gadhiya K, Zamora E, Saiyed SM, Friedlander D, Kaelber DC. Drug alert experience and salience during medical residency at two healthcare institutions. Appl Clin Inform 2021; 12 (02) 355-361
  • 67 Joglekar NN, Patel Y, Keller MS. Evaluation of clinical decision support to reduce sedative-hypnotic prescribing in older adults. Appl Clin Inform 2021; 12 (03) 436-444
  • 68 Park HA. Health informatics in developing countries: a review of unintended consequences of IT implementations, as they affect patient safety and recommendations on how to address them. Yearb Med Inform 2016; (01) 1-2
  • 69 Phansalkar S, Zachariah M, Seidling HM, Mendes C, Volk L, Bates DW. Evaluation of medication alerts in electronic health records for compliance with human factors principles. J Am Med Inform Assoc 2014; 21 (e): e332-e340
  • 70 Strasberg HR, Chan A, Sklar SJ. Inter-rater agreement among physicians on the clinical significance of drug-drug interactions. AMIA Annu Symp Proc 2013; 2013: 1325-1328
  • 71 Eschmann E, Beeler PE, Zünd G, Blaser J. Evaluation of alerts for potassium-increasing drug-drug-interactions. Stud Health Technol Inform 2013; 192: 1056
  • 72 van der Sijs H, Baboe I, Phansalkar S. Human factors considerations for contraindication alerts. Stud Health Technol Inform 2013; 192: 132-136
  • 73 Eschmann E, Beeler PE, Kaplan V, Schneemann M, Zünd G, Blaser J. Clinical decision support for monitoring drug-drug-interactions and potassium-increasing drug combinations: need for specific alerts. Stud Health Technol Inform 2012; 180: 1200-1202
  • 74 Kane-Gill SL, O'Connor MF, Rothschild JM. et al. Technologic distractions (Part 1): summary of approaches to manage alert quantity with intent to reduce alert fatigue and suggestions for alert fatigue metrics. Crit Care Med 2017; 45 (09) 1481-1488
  • 75 Genco EK, Forster JE, Flaten H. et al. Clinically inconsequential alerts: the characteristics of opioid drug alerts and their utility in preventing adverse drug events in the emergency department. Ann Emerg Med 2016; 67 (02) 240.e3-248.e3
  • 76 Khalifa M, Zabani I. Improving utilization of clinical decision support systems by reducing alert fatigue: strategies and recommendations. Stud Health Technol Inform 2016; 226: 51-54
  • 77 Footracer KG. Alert fatigue in electronic health records. JAAPA 2015; 28 (07) 41-42
  • 78 Topaz M, Seger DL, Lai K. et al. High override rate for opioid drug-allergy interaction alerts: current trends and recommendations for future. Stud Health Technol Inform 2015; 216: 242-246
  • 79 McCoy AB, Thomas EJ, Krousel-Wood M, Sittig DF. Clinical decision support alert appropriateness: a review and proposal for improvement. Ochsner J 2014; 14 (02) 195-202
  • 80 Nanji KC, Slight SP, Seger DL. et al. Overrides of medication-related clinical decision support alerts in outpatients. J Am Med Inform Assoc 2014; 21 (03) 487-491
  • 81 Smithburger PL, Buckley MS, Bejian S, Burenheide K, Kane-Gill SL. A critical evaluation of clinical decision support for the detection of drug-drug interactions. Expert Opin Drug Saf 2011; 10 (06) 871-882
  • 82 Riedmann D, Jung M, Hackl WO, Stühlinger W, van der Sijs H, Ammenwerth E. Development of a context model to prioritize drug safety alerts in CPOE systems. BMC Med Inform Decis Mak 2011; 11: 35
  • 83 Duke JD, Bolchini D. A successful model and visual design for creating context-aware drug-drug interaction alerts. AMIA Annu Symp Proc 2011; 2011: 339-348
  • 84 Smithburger PL, Kane-Gill SL, Benedict NJ, Falcione BA, Seybert AL. Grading the severity of drug-drug interactions in the intensive care unit: a comparison between clinician assessment and proprietary database severity rankings. Ann Pharmacother 2010; 44 (11) 1718-1724
  • 85 Lee EK, Wu TL, Senior T, Jose J. Medical alert management: a real-time adaptive decision support tool to reduce alert fatigue. AMIA Annu Symp Proc 2014; 2014: 845-854
  • 86 Tolley CL, Slight SP, Husband AK, Watson N, Bates DW. Improving medication-related clinical decision support. Am J Health Syst Pharm 2018; 75 (04) 239-246
  • 87 Chazard E, Beuscart JB, Rochoy M. et al. Statistically prioritized and contextualized clinical decision support systems, the future of adverse drug events prevention?. Stud Health Technol Inform 2020; 270: 683-687
  • 88 Poly TN, Islam MM, Yang HC, Li YJ. Appropriateness of overridden alerts in computerized physician order entry: systematic review. JMIR Med Inform 2020; 8 (07) e15653
  • 89 Wan PK, Satybaldy A, Huang L, Holtskog H, Nowostawski M. Reducing alert fatigue by sharing low-level alerts with patients and enhancing collaborative decision making using blockchain technology: scoping review and proposed framework (MedAlert). J Med Internet Res 2020; 22 (10) e22013
  • 90 Van Dort BA, Zheng WY, Sundar V, Baysari MT. Optimizing clinical decision support alerts in electronic medical records: a systematic review of reported strategies adopted by hospitals. J Am Med Inform Assoc 2021; 28 (01) 177-183
  • 91 Ford E, Edelman N, Somers L. et al. Barriers and facilitators to the adoption of electronic clinical decision support systems: a qualitative interview study with UK general practitioners. BMC Med Inform Decis Mak 2021; 21 (01) 193
  • 92 Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010; 29 (04) 655-663
  • 93 Elias P, Peterson E, Wachter B, Ward C, Poon E, Navar AM. Evaluating the impact of interruptive alerts within a health system: use, response time, and cumulative time burden. Appl Clin Inform 2019; 10 (05) 909-917
  • 94 Todd B, Shinthia N, Nierenberg L, Mansour L, Miller M, Otero R. Impact of electronic medical record alerts on emergency physician workflow and medical management. J Emerg Med 2021; 60 (03) 390-395
  • 95 McGreevey III JD, Mallozzi CP, Perkins RM, Shelov E, Schreiber R. Reducing alert burden in electronic health records: state of the art recommendations from four health systems. Appl Clin Inform 2020; 11 (01) 1-12
  • 96 Moja L, Kwag KH, Lytras T. et al. Effectiveness of computerized decision support systems linked to electronic health records: a systematic review and meta-analysis. Am J Public Health 2014; 104 (12) e12-e22
  • 97 Khajouei R, Jaspers MW. The impact of CPOE medication systems' design aspects on usability, workflow and medication orders: a systematic review. Methods Inf Med 2010; 49 (01) 3-19

Address for correspondence

Hanna M. Seidling, Prof. Dr. sc. hum.
Department of Clinical Pharmacology and Pharmacoepidemiology, Cooperation Unit Clinical Pharmacy
Im Neuenheimer Feld 410, 69120 Heidelberg
Germany   

Publication History

Received: 09 November 2021

Accepted: 04 March 2022

Article published online:
18 August 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany


Fig. 1 PRISMA flowchart describing the results of the literature search conducted to identify articles discussing modulators influencing alert acceptance (according to Moher and coworkers[25]).
Fig. 2 Overview of all modulators of alert acceptance classified by categories, determinants, and factors. Categories and determinants are ordered by the total number of parameters in parentheses; quantitative factors are shown on the left and qualitative factors on the right (green filled squares: quantitative, consistent factor showing a positive correlation with alert acceptance; red filled squares: quantitative, consistent factor showing a negative correlation with alert acceptance; yellow filled squares: quantitative, inconsistent factor showing both positive and negative correlations with alert acceptance; gray filled squares: quantitative, inconclusive factor without a significant positive or negative effect on alert acceptance; white squares: qualitative factor without any quantitative assessment of alert acceptance; numbers in parentheses: numbers labeled with "*" indicate the number of parameters with a statistically significant effect on alert acceptance; ↑: positive correlation with alert acceptance; ↓: negative correlation with alert acceptance; ↔: no significant correlation with alert acceptance; numbers without "*" describe the number of quantitative, inconclusive (↔), and qualitative parameters within a factor); #: several modulators were grouped into one single intervention;[64] lab: laboratory.