DOI: 10.1055/s-0043-1777107
How Safe are Outpatient Electronic Health Records? An Evaluation of Medication-Related Decision Support using the Ambulatory Electronic Health Record Evaluation Tool
- Abstract
- Background and Significance
- Objectives
- Methods
- Results
- Discussion
- Conclusion
- Clinical Relevance Statement
- Multiple-Choice Questions
- References
Abstract
Background The purpose of the Ambulatory Electronic Health Record (EHR) Evaluation Tool is to provide outpatient clinics with an assessment that they can use to measure the ability of the EHR system to detect and prevent common prescriber errors. The tool consists of a medication safety test and a medication reconciliation module.
Objectives The goal of this study was to perform a broad evaluation of outpatient medication-related decision support using the Ambulatory EHR Evaluation Tool.
Methods We performed a cross-sectional study with 10 outpatient clinics using the Ambulatory EHR Evaluation Tool. For the medication safety test, clinics were provided test patients and associated medication test orders to enter in their EHR, where they recorded any advice or information they received. Once finished, clinics received an overall percentage score of unsafe orders detected and individual order category scores. For the medication reconciliation module, clinics were asked to electronically reconcile two medication lists, where modifications were made by adding and removing medications and changing the dosage of select medications.
Results For the medication safety test, the mean overall score was 57%, with the highest score being 70%, and the lowest score being 40%. Clinics performed well in the drug allergy (100%), drug dose daily (85%), and inappropriate medication combinations (74%) order categories. Order categories with the lowest performance were drug laboratory (10%) and drug monitoring (3%). Most clinics (90%) scored a 0% in at least one order category. For the medication reconciliation module, only one clinic (10%) could reconcile medication lists electronically; however, there was no clinical decision support available that checked for drug interactions.
Conclusion We evaluated a sample of ambulatory practices around their medication-related decision support and found that advanced capabilities within these systems have yet to be widely implemented. The tool was practical to use and identified substantial opportunities for improvement in outpatient medication safety.
Background and Significance
The use of electronic health record (EHR) systems has become standard practice across the United States in both inpatient and outpatient settings, with almost all hospitals and most ambulatory clinics adopting them.[1] This was largely due to the Health Information Technology for Economic and Clinical Health Act of 2009, which provided $40 billion in federal incentives for hospitals and physician practices to adopt EHRs.[2] EHRs have been shown to lower the rate of preventable adverse drug events (ADEs), primarily through medication clinical decision support (CDS) delivered at the point of care.[3] [4] [5] [6]
However, this decision support is not always sufficient to achieve the desired benefit. The impact of CDS has been studied in multiple evaluations. For example, an early study by Bates et al[7] in inpatients found that medical errors decreased by 86% when a range of CDS features including drug allergy and drug–drug interaction warnings among others were implemented. More recently, a study by Austin et al[8] found that the implementation of an EHR system can improve the clinical outcomes of patients requiring management of their anticoagulant medications. However, the implementation of CDS must be balanced, as having too many alerts can cause alert fatigue.[9] [10] Studies done in the ambulatory setting have found for example that overrides are frequent,[11] and that even important warnings are often overridden, limiting the impact of these interventions.
It is estimated that 4.5 million outpatient care visits are related to a preventable ADE, and of these visits, 400,000 are associated with hospitalizations.[12] Two early studies[13] [14] of ambulatory EHR implementation found that CDS delivered at the point of care can decrease the rate of preventable ADEs. In response to these safety concerns, the Ambulatory EHR Evaluation Tool was created and piloted with seven outpatient clinics.[15] The tool's purpose was to provide outpatient clinics with an assessment that they can use to measure the ability of their EHR system to detect and prevent common prescriber errors. The tool's methodology mirrors that of the inpatient version of the tool, which is administered by the Leapfrog Group which has been extensively validated.[16] [17] [18] [19] [20] [21] The pilot found that many clinics had basic decision support features like drug allergy checking implemented, but more advanced areas of decision support like drug laboratory checking were rarely implemented. In addition, in the tool's medication reconciliation module, although only three clinics had this functionality in their EHR, only one could showcase it during the evaluation.[15]
Objectives
In this study, our primary objective was to present the results from 10 outpatient clinics, which used the Ambulatory EHR Evaluation Tool. This includes (1) the qualitative and quantitative results from the medication safety test and medication reconciliation module and (2) qualitative details about these clinics' EHR configuration through the CDS Functionality questionnaire.
Methods
Study Design
This was a cross-sectional study performed with 10 outpatient clinics across the United States. The evaluation began in 2020 and ended in 2022. To recruit these clinics, we first contacted the clinics that participated in the initial pilot and asked whether they were interested in participating in the study. For the remaining clinics, we asked ambulatory EHR vendors to suggest candidate clinics and sought recommendations from clinics that participated in the first pilot. The criteria used to select these clinics were: (1) the clinic used an EHR system equipped with computerized physician order entry, (2) the clinic had resources to configure test patients, and (3) the clinic had a licensed prescriber familiar with the EHR system who could enter the medication test orders. These criteria did not change based on the two objectives of the study.
Development and Testing Methodology of the Ambulatory Electronic Health Record Evaluation Tool
The Ambulatory EHR Evaluation Tool was developed in 2019 by researchers at the University of Utah, Brigham and Women's Hospital, and the Institute for Healthcare Improvement.[15] Developers of the tool plan to make it publicly available soon and are actively identifying an organization to host it. The tool consists of a medication safety test and a medication reconciliation module. The medication safety test's methodology and scoring algorithm closely mirror those of the inpatient version of the tool, which is administered by the Leapfrog Group. The tool also collects demographic information through the Pre-Test Questionnaire and collects information about clinics' EHR configuration through the CDS Functionality Questionnaire.
For the medication safety test, clinics first downloaded a set of test patients from the online tool, which they then programmed into their operational ambulatory EHR system. Details of these test patients included basic demographic information, allergy information, and relevant laboratory values ([Appendix Table 1]). Next, clinics entered each medication test order and recorded any advice or information they received during ordering and up to signing. These medication test orders were classified into 10 order-checking categories, ranging from basic to advanced decision support features ([Appendix Table 2]).[4] Two changes were made to the order categories based on the first pilot. The first was combining the drug–drug interaction and therapeutic duplication order categories into one category called “Inappropriate Medication Combinations.” Next, a new order category was added called, “Excessive Alerts.” Test orders in this category are nuisance orders, which are low-priority medication interactions that should not trigger an interruptive alert,[22] as they can contribute to alert fatigue. Other than test orders in this category, all other test orders are expected to trigger an alert. In addition, there are “fatal orders” in the test, which are high-severity medication orders, that if prescribed can lead to death. For this evaluation, these test orders were part of the drug allergy and inappropriate medication combination order categories.
In total, there were 44 test orders in the medication safety test. Of these, four were completely normal and safe medication orders that were not used in the scoring of the test. The purpose of these orders was to prevent gaming, i.e., a clinic recording that it received advice on every test order to gain a higher score. As a result, the maximum value of the denominator was 40, and this could decrease based on the availability of a medication on a clinic's formulary: if a medication could not be ordered due to formulary issues, that test order was removed from both the numerator and the denominator. To calculate the overall percentage score, the numerator was the number of unsafe test orders that correctly triggered an alert plus the number of nuisance orders that correctly did not, and the denominator was the total number of orderable test orders. In this evaluation, none of the clinics had formulary issues, and all were able to electronically order every medication in their test. Individual order category scores were calculated by dividing the number of correctly scored test orders in a category by the number of orderable test orders in that category; the maximum denominator for each order category was four.
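The scoring rule above can be expressed compactly in code. The following is a minimal illustrative sketch, not part of the tool itself; the function name, data structures, and order identifiers are our own assumptions.

```python
def score_safety_test(alert_results, nuisance_ids, unorderable_ids):
    """Overall percentage score for the medication safety test.

    alert_results: dict mapping test-order id -> True if the EHR alerted.
    nuisance_ids: low-priority orders that should NOT trigger an alert.
    unorderable_ids: orders excluded for formulary reasons (removed from
    both the numerator and the denominator).
    """
    numerator = 0
    denominator = 0
    for order_id, alerted in alert_results.items():
        if order_id in unorderable_ids:
            continue  # formulary issue: drop from scoring entirely
        denominator += 1
        if order_id in nuisance_ids:
            numerator += int(not alerted)  # credit for staying silent
        else:
            numerator += int(alerted)      # credit for alerting
    return 100 * numerator / denominator
```

Under this rule, for example, a clinic that alerts on two of three unsafe orders but also interrupts on one nuisance order scores 50% across those four orders.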
For the medication reconciliation module, clinics were provided with a test patient returning to their outpatient clinic after a recent hospitalization. Like the test patients in the medication safety test, the medication reconciliation patient had demographic information, allergies, and basic laboratory values. In addition, these patients had two medication lists: (1) the most recent medication list from their outpatient clinic and (2) the medication list from their recent hospitalization. Modifications were made to these lists by adding and removing medications and changing the dosage of select medications. During the test, clinics were asked to demonstrate how their EHR system would electronically reconcile these two medication lists.
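The list modifications described above amount to a set comparison over the two medication lists, which is the kind of check an electronic reconciliation feature automates. The sketch below is only an illustration of that comparison; the dict-of-doses representation and the drug names are hypothetical, not the tool's or any EHR's data model.

```python
def diff_med_lists(clinic_list, hospital_list):
    """Classify differences between two medication lists.

    Both arguments map drug name -> dose string (a hypothetical,
    simplified representation of a medication list).
    """
    added = sorted(d for d in hospital_list if d not in clinic_list)
    removed = sorted(d for d in clinic_list if d not in hospital_list)
    dose_changed = sorted(
        d for d in clinic_list
        if d in hospital_list and clinic_list[d] != hospital_list[d]
    )
    return {"added": added, "removed": removed, "dose_changed": dose_changed}

# Hypothetical example: one dose change, one addition, one removal.
clinic = {"lisinopril": "10 mg", "metformin": "500 mg", "aspirin": "81 mg"}
hospital = {"lisinopril": "20 mg", "metformin": "500 mg", "warfarin": "5 mg"}
```

A full reconciliation workflow would additionally run medication-related CDS (e.g., drug–drug interaction checking) over the merged list, a capability the clinics in this study did not have.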
Evaluation Process
The evaluation process mirrored the format used in the initial pilot,[15] where we coordinated three teleconference calls with each clinic. During the introductory call, the research team registered the clinics and clinics downloaded the test patients. In addition, clinics had the opportunity to take a sample test, so that they could familiarize themselves with the format of the tool. During the second conference call, the clinics took the medication safety test and medication reconciliation module, while the study team observed. Lastly, the third call was a debrief meeting, where we discussed the results from the tool with the clinic.
Statistical Analysis
For this evaluation, basic clinic demographic information was collected. These characteristics included: region, system membership, clinic services, the number of physicians, and the total visits per year. In addition, a survey was distributed to clinics to collect information about the configuration of these clinics' EHR systems. We report the responses clinics provided for each question in this survey. Next, for the medication safety test we calculated descriptive statistics, including the mean overall scores of the clinics and individual order category scores, and the range of scores. Lastly, qualitative results for the medication reconciliation module were reported.
The University of Utah's Institutional Review Board (00107070) deemed this study as nonhuman subject research. The Mass General Brigham Institutional Review Board also reviewed the study (Protocol #2018P001197) and determined that the Brigham and Women's Hospital component of the study was not human subjects research.
Results
Clinic Demographics
Half (50%) of the clinics were located in the Northeast, whereas the other half (50%) were in the West ([Table 1]). Regarding health system membership, half (50%) of the clinics were part of a health care system, whereas the other half were standalone clinics. Next, 50% of the clinics offered only primary care services, whereas the other half were multispecialty clinics. For the number of physicians at these clinics, most clinics (80%) had more than 10 physicians. Lastly, for the total visits per year, 50% of the clinics had fewer than 100,000 visits, 40% had between 100,001 and 500,000 visits, and 10% had more than 500,000 visits.
Medication Safety Test
The mean overall percentage score for the medication safety test was 57%; the maximum score was 70%, the minimum score was 40%, and the range was thus 30 percentage points. Clinic J had the lowest overall percentage score, whereas Clinics A and D achieved the highest ([Table 2]). Clinics performed well in areas of basic decision support such as drug allergy, drug dose (daily), and inappropriate medication combinations ([Fig. 1]). On the contrary, clinics struggled in areas of advanced decision support such as drug age, drug laboratory, and drug monitoring. In terms of fatal order performance, the mean score was 98%. Clinic A was the only clinic that did not alert on all the fatal orders within its test, and its fatal order score was 75%.


The order categories with the most variability in performance were: drug age, drug diagnosis, drug dosing, and excessive alerts. In the drug age and drug diagnosis categories, percentage scores ranged from 0 to 100% ([Fig. 2]). For the drug dosing categories, both single and daily, only three clinics (Clinics A, G, and H) achieved 100% in both categories. For the rest of the clinics, there were inconsistencies between their implementation of daily and single dosing alerts ([Fig. 2]). Most notably, Clinic J did not have any decision support related to drug dosing, scoring 0% in both categories. For the excessive alerts order category, scores ranged from 50 to 100%, with a higher percentage score indicating that clinics did not alert on these low-priority drug interactions. Clinics E, G, and I scored 100% in this category, indicating that they did not alert on the nuisance orders in their test.


Medication Reconciliation Module
As in the previous evaluation, most clinics still could not perform medication reconciliation electronically. Of the 10 clinics, only one (10%) could perform the medication reconciliation process electronically. In the other nine clinics, the medication reconciliation process was performed manually by either a nurse or a medical assistant without triggering any CDS. All EHR vendor products evaluated in this study were Meaningful Use certified to perform medication reconciliation electronically.
Clinical Decision Support Functionality Survey
The purpose of the CDS Functionality Survey was to collect information about these clinics' EHR configuration and it consisted of seven questions ([Table 3]). Some of these questions asked about the types of alerts used in the clinic, how providers provide feedback about alerts, and the level of customization that clinics have over their EHR system.
The first question asked what types of alerts are used at these clinics. Most clinics used interruptive alerts (90%) and noninterruptive alerts (90%). In addition, 80% of the clinics used a combination of interruptive alerts, noninterruptive alerts, and hard stops in their system. One clinic (10%) used only noninterruptive alerts, and another (10%) used only interruptive alerts ([Table 3]). Next, the survey asked clinics whether their EHR system differentiated between high- and low-severity alerts. All clinics (100%) answered “yes” to this question ([Table 3]). Examples of how clinics differentiate between these alerts included using a color-coded system, icons, and signal words such as “Caution” and “Warning.”
The next question asked clinics how they receive information technology (IT) support. Half (50%) of the clinics have in-house IT support, whereas two (20%) receive support through a physician organization. One clinic (10%) received IT support from only their EHR vendor, whereas two clinics (20%) received IT support from both their in-house IT office and their EHR vendor. Next, clinics were asked to provide details on how they give feedback on the alerts in their EHR system. Most clinics (80%) used multiple methods for providing this feedback. The most common methods were submitting a help desk or “trouble” ticket to the EHR vendor (60%), submitting a help desk or “trouble” ticket to the clinic's information systems (IS) desk (50%), and having the EHR vendor involve clinicians in the testing and design of alerts (50%). The least common method was periodic review of alert content (20%).
The next question asked clinics whether alerts can be customized at the clinic level (affecting alerts that all users encounter), by role (e.g., residents, attendings, specialists), or by individual users. Most clinics (80%) could customize alerts at at least one of these levels, whereas two clinics (20%) could not. Lastly, most clinics (60%) could not customize their knowledge base, whereas 40% of the clinics could.
Discussion
We performed this evaluation of medication-related CDS using the Ambulatory EHR Evaluation Tool with 10 outpatient clinics. Each of these clinics uses one of the leading outpatient EHR systems, as identified by the Office of the National Coordinator (ONC).[1] We found that basic decision support features such as drug allergy checking and inappropriate medication combination checking were successfully implemented. However, these clinics performed poorly in areas of advanced decision support like drug laboratory and drug monitoring. In addition, the ability of these clinics' EHR systems to provide decision support for medication reconciliation was largely unavailable, even though most of the EHRs used in this study had been certified through Meaningful Use for this capability. Only one clinic could demonstrate that medication reconciliation could be done electronically. This is clearly an opportunity for improvement, as good tools could make reconciliation both more reliable and more efficient.
Our first round of tool testing revealed that some clinics had many medication decision support features completely turned off, which is concerning, and this was still the case even after the year between evaluations. This was illustrated by Clinics H and J, which scored 0% in 5 of the 10 order categories in the test. Those order categories were almost all considered advanced decision support features, including drug diagnosis, drug pregnancy, and drug age checking. Among these order categories, drug age had the lowest overall order category score (38%), with more than half the clinics scoring below 25% and four scoring 0%. This order category only tests for geriatric alerts, which are based on recommendations from the American Geriatrics Society's Beers Criteria[23] and the Screening Tool of Older Person's Prescriptions (STOPP).[24] In a study in Italy,[25] researchers integrated STOPP into their EHR system and observed a reduction in potentially dangerous medications being prescribed to geriatric patients. Studies like these have mainly been performed in inpatient hospitals, and very few have studied the impact of these alerts in the ambulatory setting. Further use of this assessment can increase awareness of the potential positive impact that drug age alerts and other advanced decision support features can have on the safety and quality of care in outpatient clinics.
Next, the drug dosing categories showed variability in performance, and some clinics did not have any of these alerts in place. These included Clinic J, which had no drug dosing alerts at all, and Clinic E, which had daily dosing alerts but no single dosing alerts. The major difference between these two categories is that the daily dosing category focuses on the frequency at which a medication is administered. In an early study by Gandhi et al, prescribing errors were identified in 7.6% of outpatient prescriptions.[14] The researchers noted that integrating dosing and frequency CDS could have prevented these ADEs. Since that study, inpatient EHR systems have implemented these capabilities: the average score in the inpatient version of the tool was around 80% for both dosing order categories in 2018.[17] As a result, we did not expect any clinic to have these types of alerts completely off in their EHR system. For Clinic J, the clinician revealed that although their clinical staff are involved in the design of alerts, the degree of customizability of their EHR system and associated medication reference database is quite limited. Moreover, the degree to which these clinics receive IT support was highly variable, as was the source of that support (often from vendors only) and the involvement of clinics in the testing and design of alerts. Studies have shown that poor training and technical support can be barriers to the adoption and effective use of EHRs.[26] Furthermore, involving clinicians throughout the EHR implementation process can provide valuable feedback for designing these systems, which can have positive effects on quality and safety.[27] Thus, a potential way for clinics to implement their medication-related CDS more effectively is to establish more transparent and sustainable processes between EHR vendors and outpatient clinics for submitting feedback and integrating changes in response to it.
Another notable result from this evaluation was the implementation of drug laboratory and drug monitoring alerts in two clinics. This was an improvement compared with the first round of piloting, where none of the clinics implemented these types of alerts, although there is obviously still considerable room for improvement. Examples of these alerts include notifying the prescriber about an abnormal laboratory value where the medication or dose may be inappropriate, or an alert that recommends checking a laboratory value in response to a medication being prescribed. The alerts these clinics received were noninterruptive, but given the workflows in these clinics, these alerts signaled for the prescriber to double-check the patients' laboratory values. Implementing these types of drug laboratory alerts in the outpatient setting has been shown to increase the efficiency and completeness of patients' laboratory information.[28] In turn, this can help improve patient safety and ensure that clinicians can be alerted during care transitions to order relevant laboratory tests. In addition, these results show that these more advanced capabilities are configurable and available within outpatient EHR systems. Moreover, in a study by Kripalani et al,[29] it was found that discharge summaries generally lacked important information related to patients' care such as pending laboratory results. Given this and the lack of interoperability between EHR systems, having alerts in place to remind clinicians to check laboratory values is critical for patient safety.
In this evaluation, we also observed variation in performance within EHR vendors. Clinics E and I shared the same EHR configuration. Clinic E's overall score was 63%, whereas Clinic I's overall score was 60%. Order categories in which these clinics performed similarly were drug diagnosis, drug laboratory, drug monitoring, and drug pregnancy; both clinics achieved the same scores in these categories, indicating that their systems responded similarly to the same test orders. On the contrary, order categories in which their scores differed greatly were drug age and drug dose (single): for drug age, Clinic E scored 100%, whereas Clinic I scored 25%, and in the drug dose (single) category, Clinic E scored 0%, whereas Clinic I scored 50%. These data indicate that although the overall scores of these clinics did not differ greatly, their order category scores revealed differences in how they implemented their EHR systems, especially since the same test was used throughout this evaluation. Furthermore, these results indicate that EHR systems are also customizable at the clinic level, which most of our clinics could do. This observation is consistent with findings from the inpatient version of the tool, where Holmgren et al[16] and Classen et al[17] have shown that overall scores in the inpatient test vary greatly within EHR vendors and even within health care systems using the same EHR.
Next, the results of the medication reconciliation module closely mirrored those of the initial pilot. Only one clinic could demonstrate that its EHR system could electronically reconcile medications; in the other clinics, a manual process was performed by either the provider or a medical assistant. Changes between the two medication lists included modifying existing medications' dosages and adding and removing medications. During the debrief conference call, two clinics revealed that their EHR had the capability to perform medication reconciliation electronically, but the functionality was not turned on. One of the clinics noted confusion about the process; thus, it goes unused. Statements like this were common throughout the evaluation as well as in the first phase of piloting.[15] This is also consistent with a study conducted at a health system,[30] where researchers collected practitioners' perspectives on their medication reconciliation process. One commonality between these clinicians' experiences was a lack of understanding of the value of performing medication reconciliation within the EHR and how it affects patient safety. Moreover, in the outpatient setting, external medication lists can come from a variety of different sources,[31] further complicating this process and making it more susceptible to mistakes. Furthermore, the lack of interoperability between EHR systems can be a major barrier to obtaining the most accurate and updated medication lists.[32] These patient safety issues are critical, especially since none of the clinics in the study reported having CDS features such as drug–drug interaction checking implemented into their electronic medication reconciliation processes.
Limitations
Our study has several limitations. First, this final phase of piloting included only 10 outpatient clinics; thus, the results from the tool may not be representative of all outpatient facilities and EHR systems. However, the results of this evaluation are similar to those of the pilot we conducted during the development of the tool:[15] basic areas of decision support are mostly implemented, whereas more advanced areas are in need of improvement. In addition, the tool has not been heavily validated. However, clinics faced minimal barriers following the tool's methodology during its initial development, and the methodology for the medication safety test mirrors that of the inpatient version of the test, which has been extensively validated.[16] [17] [18] [19] [20] [21] In addition, processes around medication reconciliation can vary, so this area of the tool needs further research to reflect the capabilities of EHR systems to electronically reconcile medications. Lastly, for the medication safety test, the fatal and nuisance test orders may not be representative of scenarios that all clinics may encounter.
Conclusion
We evaluated a sample of ambulatory practices around their medication-related decision support and found that advanced capabilities within outpatient EHR systems have yet to be widely implemented. Perhaps most surprising was the lack of electronic medication reconciliation capability: only one clinic was able to demonstrate it, even though all the EHR vendor products used in this study were Meaningful Use certified for this capability. The results also showed inconsistencies in the implementation of dosing alerts, where some clinics had only one type of dosing alert implemented, whereas others had none. This evaluation also showed that implementing drug laboratory and drug monitoring alerts is possible within outpatient clinics; this was not the case in the first round of testing, when none of the clinics had these alerts in place. In addition, the results revealed variability not only across order categories but also between clinics that had the same EHR configuration. The tool was practical to use and identified substantial opportunities for improvement. These results reinforce the need for outpatient clinics and health systems to assess their EHR systems consistently. Thus, as this tool is disseminated more broadly, clinics that take the assessment consistently can use their results as a quality improvement tool to identify changes they may wish to make to the CDS features within their EHR system.
Clinical Relevance Statement
The results from this evaluation revealed that advanced CDS features have yet to be widely implemented in ambulatory clinics and there is variability in the implementation of certain alerts. Moreover, with almost all the clinics scoring a 0% in at least one order category, it is critical that clinics participate in repeated evaluations of their EHR's medication-related decision support as they make changes to their systems with the results from the tool.
Multiple-Choice Questions
- Which order category did all clinics score a “100%” in?

  a. Drug age
  b. Drug monitoring
  c. Inappropriate medication combinations
  d. Drug allergy

  Correct Answer: The correct answer is option d. In this evaluation, all of the clinics alerted on every drug allergy test order in the medication safety test.

- Compared with the first round of piloting, which of the following order categories had the most improvement?

  a. Drug allergy
  b. Drug pregnancy
  c. Drug laboratory
  d. Drug age

  Correct Answer: The correct answer is option c. In the first round of piloting, the mean score for the drug laboratory category was 0%. In this evaluation, the mean score was 10%, with two clinics scoring 50% in this order category.
Abbreviation: ICD, International Classification of Diseases.
Basic decision support

| Order category | Description | Example |
| --- | --- | --- |
| Drug allergy | Medication is one for which a patient allergy has been documented | Penicillin prescribed for a patient with a documented penicillin allergy |
| Drug dose (single) | Specified dose of medication exceeds the safe range for a single dose | 10-fold overdose of digoxin |
| Inappropriate medication combinations | Medication combinations to avoid ordering together, or ones to use with caution | Use of clonazepam and lorazepam together |

Advanced decision support

| Order category | Description | Example |
| --- | --- | --- |
| Drug dose (daily) | Cumulative dose for medication exceeds the safe range for daily dose | Ordering a regular dose of ibuprofen every 3 hours |
| Drug age | Medication dose inappropriate/contraindicated based on the patient's age | Prescribing diazepam for a patient over 65 y old |
| Drug laboratory | Medication dose inappropriate/contraindicated based on documented laboratory results (including renal status) | Use of nitrofurantoin in a patient with severe renal failure |
| Drug monitoring | Medication for which the standard of care includes subsequent monitoring of drug level or laboratory value to avoid harm | Prompt to monitor drug levels when ordering digoxin, or INR/PT when ordering warfarin |
| Drug diagnosis | Medication dose inappropriate/contraindicated based on a documented diagnosis | Prescribing a nonselective β-blocker for a patient with asthma |
| Drug pregnancy | Medication inappropriate/contraindicated in pregnant patients | Prescribing atorvastatin for a pregnant patient |
| Excessive alerts | Low-priority medication combinations that should not be presented interruptively | Concurrent use of hydrochlorothiazide and captopril |

Abbreviations: INR, international normalized ratio; PT, prothrombin time.
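The order categories above each correspond to a class of rule the EHR is expected to check at order entry. A minimal, hypothetical sketch of three of the simpler checks (drug allergy, inappropriate combinations, drug age); all names and rule tables here are illustrative assumptions, not clinical guidance and not the tool's actual logic:

```python
# Illustrative sketch of basic medication-safety rule checks.
# Patient, check_order, and the rule tables are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Patient:
    age: int
    allergies: set = field(default_factory=set)
    active_meds: set = field(default_factory=set)


# Example rule tables (illustrative only)
AVOID_COMBINATIONS = {frozenset({"clonazepam", "lorazepam"})}
AVOID_IN_ELDERLY = {"diazepam"}  # Beers-style drug-age rule


def check_order(patient: Patient, drug: str) -> list:
    """Return the alert categories triggered by ordering `drug`."""
    alerts = []
    if drug in patient.allergies:
        alerts.append("drug allergy")
    for med in patient.active_meds:
        if frozenset({drug, med}) in AVOID_COMBINATIONS:
            alerts.append("inappropriate medication combination")
    if patient.age > 65 and drug in AVOID_IN_ELDERLY:
        alerts.append("drug age")
    return alerts


pt = Patient(age=70, allergies={"penicillin"}, active_meds={"lorazepam"})
print(check_order(pt, "penicillin"))  # drug-allergy alert
print(check_order(pt, "clonazepam"))  # inappropriate-combination alert
print(check_order(pt, "diazepam"))    # drug-age alert
```

The advanced categories (drug laboratory, drug monitoring, drug diagnosis) follow the same pattern but require linking orders to laboratory results, monitoring protocols, and coded diagnoses, which is part of why they scored so much lower in this evaluation.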
Conflict of Interest
D.W.B. consults for EarlySense, which makes patient safety monitoring systems. He receives cash compensation from CDI (Negev), Ltd, which is a not-for-profit incubator for health IT startups. He receives equity from ValeraHealth, which makes software to help patients with chronic diseases. He receives equity from Clew, which makes software to support clinical decision-making in intensive care. He receives equity from MDClone, which takes clinical data and produces deidentified versions of it. He receives equity from AESOP, which makes software to reduce medication error rates. He will be receiving research funding from IBM Watson Health. D.W.B.'s financial interests have been reviewed by Brigham and Women's Hospital and Mass General Brigham in accordance with their institutional policies. All other authors have no conflict of interests to declare.
Protection of Human and Animal Subjects
No real patients were used in the testing scenarios in the Ambulatory EHR Evaluation Tool; only test patients were used.
References
- 1 Office of the National Coordinator for Health Information Technology. Office-based Physician Electronic Health Record Adoption. Health IT Quick-Stat #50. January 2019. Accessed August 4, 2021 at: https://dashboard.healthit.gov/quickstats/pages/physician-ehr-adoption-trends.php
- 2 Blumenthal D. Launching HITECH. N Engl J Med 2010; 362 (05) 382-385
- 3 Bates DW, Teich JM, Lee J. et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999; 6 (04) 313-321
- 4 Kuperman GJ, Bobb A, Payne TH. et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007; 14 (01) 29-40
- 5 Radley DC, Wasserman MR, Olsho LE, Shoemaker SJ, Spranca MD, Bradshaw B. Reduction in medication errors in hospitals due to adoption of computerized provider order entry systems. J Am Med Inform Assoc 2013; 20 (03) 470-476
- 6 Tajchman S, Lawler B, Spence N, Haque S, Quintana Y, Ateya M. Implementation and use of risk evaluation and mitigation strategies programs in practice: a scoping review of the literature. Appl Clin Inform 2022; 13 (05) 1151-1160
- 7 Bates DW, Teich JM, Lee J. et al. The Impact of Computerized Physician Order Entry on Medication Error Prevention. Vol 6. Accessed April 18, 2019 at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC61372/pdf/0060313.pdf
- 8 Austin JA, Barras MA, Woods LS, Sullivan CM. The effect of digitization on the safe management of anticoagulants. Appl Clin Inform 2022; 13 (04) 845-856
- 9 Dawson TE, Beus J, Orenstein EW, Umontuen U, McNeill D, Kandaswamy S. Reducing therapeutic duplication in inpatient medication orders. Appl Clin Inform 2023; 14 (03) 538-543
- 10 Chaparro JD, Beus JM, Dziorny AC. et al. Clinical decision support stewardship: best practices and techniques to monitor and improve interruptive alerts. Appl Clin Inform 2022; 13 (03) 560-568
- 11 Nanji KC, Slight SP, Seger DL. et al. Overrides of medication-related clinical decision support alerts in outpatients. J Am Med Inform Assoc 2014; 21 (03) 487-491
- 12 Sarkar U, López A, Maselli JH, Gonzales R. Adverse drug events in U.S. adult ambulatory medical care. Health Serv Res 2011; 46 (05) 1517-1533
- 13 Kaushal R, Kern LM, Barrón Y, Quaresimo J, Abramson EL. Electronic prescribing improves medication safety in community-based office practices. J Gen Intern Med 2010; 25 (06) 530-536
- 14 Gandhi TK, Weingart SN, Seger AC. et al. Outpatient prescribing errors and the impact of computerized prescribing. J Gen Intern Med 2005; 20 (09) 837-841
- 15 Co Z, Holmgren AJ, Classen DC. et al. The development and piloting of the ambulatory electronic health record evaluation tool: lessons learned. Appl Clin Inform 2021; 12 (01) 153-163
- 16 Holmgren AJ, Co Z, Newmark L, Danforth M, Classen D, Bates D. Assessing the safety of electronic health records: a national longitudinal study of medication-related decision support. BMJ Qual Saf 2020; 29 (01) 52-59
- 17 Classen DC, Holmgren AJ, Co Z. et al. National trends in the safety performance of electronic health record systems from 2009 to 2018. JAMA Netw Open 2020; 3 (05) e205547
- 18 Kilbridge PM, Welebob EM, Classen DC. Development of the Leapfrog methodology for evaluating hospital implemented inpatient computerized physician order entry systems. Qual Saf Health Care 2006; 15 (02) 81-84
- 19 Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010; 29 (04) 655-663
- 20 Co Z, Holmgren AJ, Classen DC. et al. The tradeoffs between safety and alert fatigue: data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc 2020; 27 (08) 1252-1258
- 21 Leung AA, Keohane C, Lipsitz S. et al. Relationship between medication event rates and the Leapfrog computerized physician order entry evaluation tool. J Am Med Inform Assoc 2013; 20 (e1): e85-e90
- 22 Phansalkar S, van der Sijs H, Tucker AD. et al. Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J Am Med Inform Assoc 2013; 20 (03) 489-493
- 23 By the 2019 American Geriatrics Society Beers Criteria® Update Expert Panel. American Geriatrics Society 2019 Updated AGS Beers Criteria® for potentially inappropriate medication use in older adults. J Am Geriatr Soc 2019; 67 (04) 674-694
- 24 O'Mahony D. STOPP/START criteria for potentially inappropriate medications/potential prescribing omissions in older people: origin and progress. Expert Rev Clin Pharmacol 2020; 13 (01) 15-22
- 25 Grion AM, Gallo U, Tinjala DD. et al. A new computer-based tool to reduce potentially inappropriate prescriptions in hospitalized geriatric patients. Drugs Aging 2016; 33 (04) 267-275
- 26 Tsai CH, Eghdam A, Davoody N, Wright G, Flowerday S, Koch S. Effects of electronic health record implementation and barriers to adoption and use: a scoping review and qualitative analysis of the content. Life (Basel) 2020; 10 (12) 1-27
- 27 McCrorie C, Benn J, Johnson OA, Scantlebury A. Staff expectations for the implementation of an electronic health record system: a qualitative study using normalisation process theory. BMC Med Inform Decis Mak 2019; 19 (01) 222
- 28 Staes CJ, Evans RS, Rocha BHSC. et al. Computerized alerts improve outpatient laboratory monitoring of transplant patients. J Am Med Inform Assoc 2008; 15 (03) 324-332
- 29 Kripalani S, LeFevre F, Phillips CO, Williams MV, Basaviah P, Baker DW. Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA 2007; 297 (08) 831-841
- 30 Rangachari P, Dellsperger KC, Fallaw D. et al. A mixed-method study of practitioners' perspectives on issues related to EHR medication reconciliation at a health system. Qual Manag Health Care 2019; 28 (02) 84-95
- 31 Powis M, Dara C, Macedo A. et al. Implementation of medication reconciliation in outpatient cancer care. BMJ Open Qual 2023; 12 (02) e002211
- 32 Yuan CT, Dy SM, Yuanhong Lai A. et al. Challenges and strategies for patient safety in primary care: a qualitative study. Am J Med Qual 2022; 37 (05) 379-387
Publication History
Received: 11 July 2023
Accepted: 24 October 2023
Article published online:
13 December 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany