DOI: 10.1055/s-0043-1770900
A Multiyear Survey Evaluating Clinician Electronic Health Record Satisfaction
- Abstract
- Background and Significance
- Methods
- Results
- Discussion
- Conclusion
- Clinical Relevance Statement
- Multiple-Choice Questions
- References
Abstract
Objectives We assessed how clinician satisfaction with a vendor electronic health record (EHR) changed over time in the 4 years following the transition from a homegrown EHR system to identify areas for improvement.
Methods We conducted a multiyear survey of clinicians across a large health care system after transitioning to a vendor EHR. Eligible clinicians from the first institution to transition received a survey invitation by email in fall 2016 and then eligible clinicians systemwide received surveys in spring 2018 and spring 2019. The survey included items assessing ease/difficulty of completing tasks and items assessing perceptions of the EHR's value, usability, and impact. One item assessing overall satisfaction and one open-ended question were included. Frequencies and means were calculated, and comparison of means was performed between 2018 and 2019 on all clinicians. A multivariable generalized linear model was performed to predict the outcome of overall satisfaction.
Results Response rates for the surveys ranged from 14 to 19%. The mean response from 3 years of surveys for one institution, Brigham and Women's Hospital, increased for overall satisfaction between 2016 (2.85), 2018 (3.01), and 2019 (3.21, p < 0.001). We found no significant differences in mean response for overall satisfaction between all responders of the 2018 survey (3.14) and those of the 2019 survey (3.19). Systemwide, tasks rated the most difficult included “Monitoring patient medication adherence,” “Identifying when a referral has not been completed,” and “Making a list of patients based on clinical information (e.g., problem, medication).” Clinicians disagreed the most with “The EHR helps me focus on patient care rather than the computer” and “The EHR allows me to complete tasks efficiently.”
Conclusion Survey results indicate room for improvement in clinician satisfaction with the EHR. Usability of EHRs should continue to be an area of focus to ease clinician burden and improve clinician experience.
Background and Significance
For many years, electronic health record (EHR) use was infrequent in the United States. However, the 2009 Health Information Technology for Economic and Clinical Health Act offered major incentives for hospitals and medical offices to adopt EHRs. Adoption was swift, and by 2019, over 90% of U.S. acute care hospitals were using certified EHRs, with most of the market share dominated by a few major companies;[1] in the outpatient setting, adoption also reached high levels, though with many more vendors involved. The advantages of EHRs have been widely documented, including improvements in the quality of clinical notes, legibility, access to patient data, and reduction of errors.[2] [3] [4] [5] On the other hand, concerns regarding physician burnout, errors due to poor usability, and alert fatigue persist, and the impact of these on clinicians and patient care has been assessed in many prior studies.[6] [7] [8]
Usability can be formally measured. For example, the System Usability Scale (SUS) can be used to evaluate EHRs and determine users' perspective on the EHR's ease of use.[9] In one study with 870 physicians evaluating their EHR, the mean (± standard deviation) SUS score was 45.9 ± 21.9 and fell under the “grade F” or “not acceptable” range for usability.[10] In an assessment of 27 vendor-reported SUS scores, there were no statistical improvements in EHR scores from 2014 to 2015, with SUS decreasing for 44% of vendors between the 2 years.[11]
Mass General Brigham (MGB, formerly Partners Healthcare) began its transition from an internally developed EHR to Epic in 2015. Challenges in transitioning from one EHR to another have been studied and recommendations to ease those transitions have been made, but satisfaction and clinician burden remain an issue.[12] [13] Some have hypothesized that there may be an initial decline in satisfaction during the implementation period of EHRs but that it then increases over time, although few empiric evaluations have been performed.[14] Several studies have assessed satisfaction months after the transition up to a couple of years after, but few have continued long term.[15] [16] Hanauer et al's study demonstrated the challenges of successful EHR adoption and monitoring physician satisfaction.[14] The authors conducted a 2-year longitudinal assessment of physician perceptions after a transition from a homegrown EHR to a vendor EHR at the University of Michigan Health System. They hypothesized that measures of physician perception would follow a J-curve pattern, with an initial decrease in satisfaction followed by a gradual rise back to and potentially above baseline, which typically indicates successful technology adoption. However, they were unable to discover a J-curve pattern for any of the measures of physician perception as many of the measures either followed a U-curve, L-curve, or flatlined. Another study by Krousel-Wood et al found that while positive clinician perceptions significantly increased for items such as long-term follow-up for patient communication and satisfaction with system reliability, items such as overall satisfaction, clinical decision quality, productivity, and monitoring patients significantly decreased over time (p < 0.05 for each).[12]
Monitoring clinician satisfaction and perceptions of the EHR is one way to engage clinicians and prioritize improvements to the EHR.[17] The Rhode Island Department of Health found that physicians who agreed with the sentiment that using an EHR added to their daily frustration also had 2.4 times the odds of burnout in comparison to the physicians who disagreed.[18] Additionally, of the 1,792 physicians who responded to the survey, 70% of the EHR users reported health information technology-related stress.[18] A physician's self-perceived efficacy while using an EHR was found to be the factor most predictive of physician satisfaction and patient impact.[19] A recent cross-sectional survey conducted on the relationship between EHR use and physician burnout revealed that 62.5% (110/176) of physicians felt that the EHR added to their daily frustration.[20] A systematic review confirmed that a lack of available time for documentation, increased inbox or patient call message volume, and clinicians' negative perceptions of EHRs resulted in higher rates of clinician burnout.[21] Clinicians who have increased dissatisfaction with their EHR systems may also have lower patient satisfaction.[22]
To address how satisfaction changes over time with a vendor EHR, we administered a satisfaction survey at a large health care system over a 4-year period. We also identified areas within the EHR that might be improved to increase clinician satisfaction.
Methods
We conducted a longitudinal assessment of survey data gathered between 2016 and 2019 at an academic medical center health system. The health system began its transition to a commercial health record (EpicCare, 2010, Madison, Wisconsin, United States) in 2015, beginning with Brigham and Women's Hospital (BWH), and rolled it out across the entire system over a 2-year period, replacing an internally developed medical record that had been in place in both the inpatient and outpatient settings. The Institutional Review Board reviewed the study and designated it as a quality improvement project not requiring formal clinician consent. Clinicians were able to decline participation in the survey or stop taking it at any time.
Sample Recruitment and Survey Distribution
Clinicians were eligible to receive a survey if they were an active user of the EHR and a physician (MD or DO), nurse practitioner, or physician assistant in either the inpatient or outpatient setting. All levels of physicians (residents, attendings, interns) were invited to participate. A survey at one institution only, BWH, was administered in 2016. The following surveys were conducted in spring 2018 and spring 2019 and were administered to the entire population of MGB credentialed clinicians. Eligible clinicians across the health care system were emailed a link to the survey. Three reminder emails were then sent to nonresponders every other week for 6 weeks. The 2019 survey was administered using the REDCap electronic data capture tool hosted at BWH.[23] [24] The 2016 and 2018 surveys were developed in LimeSurvey.[25]
The survey administered to BWH physicians took place 1.5 years after implementation of Epic. By the time the next survey was administered in 2018, all MGB sites had completed their transition to Epic. Participants did not receive an incentive payment for completing the survey. Survey responses were kept confidential; no identifying information was reported. Responses were not anonymous, however, since we captured email addresses to track participants, send survey reminders, and link responses back to individuals for analysis.
Survey Development
The survey instrument was developed using the Primary Care Information Project survey and the Family Practice Management survey, as well as the original survey developed by the research group and subject matter experts that had been used for 6 years to assess satisfaction with the internally developed electronic record at MGB.[26] [27] Validated usability surveys were considered during development but were not sufficient to address the broader user experience. We worked with the EHR leadership and subject matter experts to customize the survey to address tasks and features important to the health system, which would allow us to identify specific areas for improvement. The survey included 48 items split into seven matrix questions, one multiple-choice rating question, and one open-ended question ([Supplementary Appendix A], available in online version). The survey included items assessing ease/difficulty of completing tasks on a scale (“Very easy [5],” “Easy [4],” “Neutral [3],” “Difficult [2],” “Very difficult [1],” or “Not applicable”) in the following categories: reviewing patient data, documentation, patient engagement, task and workflow management, and preventive care and panel management. In addition, it included items assessing perceptions of the EHR's value, usability, and impact on workflow and patient care. Participants rated their agreement or disagreement with these statements by responding “Strongly agree (5),” “Agree (4),” “Neutral (3),” “Disagree (2),” “Strongly disagree (1),” or “Not applicable.” One item assessing overall satisfaction on a scale from “Very satisfied” (6) to “Very dissatisfied” (1) was included, in addition to one open-ended question allowing clinicians to share any additional information about the EHR. [Table 1] identifies the task items included in the survey instrument.
Abbreviations: CPT, Current Procedural Terminology; EHR, electronic health record; ICD-10, International Classification of Diseases, 10th Revision.
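As a concrete illustration of the scoring described above, the mapping from Likert labels to the 5-point numeric scale might be sketched as follows (the helper name and example responses are ours, not from the study; “Not applicable” responses are excluded from item means):

```python
# Map ease/difficulty labels to the survey's 5-point numeric scale.
EASE_SCALE = {
    "Very easy": 5, "Easy": 4, "Neutral": 3,
    "Difficult": 2, "Very difficult": 1,
}

def item_mean(responses):
    """Mean score for one survey item, skipping 'Not applicable'."""
    scores = [EASE_SCALE[r] for r in responses if r in EASE_SCALE]
    return sum(scores) / len(scores) if scores else None

# Hypothetical responses to a single task item
print(item_mean(["Easy", "Very easy", "Neutral", "Not applicable", "Easy"]))  # -> 4.0
```

Excluding “Not applicable” from the denominator keeps item means comparable across tasks with different applicability to each clinician's workflow.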
Analysis
We used SAS version 9.4 (SAS Institute, Cary, North Carolina, United States) for the statistical analysis. Frequencies and means (95% confidence intervals [CIs]) were calculated for each survey question for all years. A comparison of means was done between 2016, 2018, and 2019 BWH responses and for all systemwide responses between 2018 and 2019. A multivariable generalized linear model was performed to predict the outcome of overall satisfaction on a numeric scale. We controlled for clustering by clinician since some clinicians responded to surveys in multiple years, which needed to be accounted for when examining the influence of participant characteristics. To account for missing values, multiple imputation was also performed. All covariate variables were entered into the overall satisfaction model, followed by a stepwise selection technique to achieve the final overall satisfaction model.
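The analyses above were run in SAS; for readers who want to reproduce the year-over-year comparison of means, an equivalent Welch two-sample t statistic can be computed with only the standard library. This is an illustrative sketch with toy scores, not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))          # unpooled standard error
    return (ma - mb) / se

# Toy overall-satisfaction scores for two survey years (not real data)
y2018 = [3, 3, 2, 4, 3, 3]
y2019 = [3, 4, 3, 4, 3, 4]
t = welch_t(y2019, y2018)
```

The resulting t statistic would then be referred to a t distribution with Welch–Satterthwaite degrees of freedom to obtain a p-value.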
A content analysis was conducted on the open-ended comments. Each comment was assigned a code. Similar codes were grouped into categories by subject matter. Categories with the most frequent feedback are reported in this study. Representative quotes were identified for each category of feedback.
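The coding-and-grouping step of the content analysis can be sketched as a simple tally; the codes and category mapping below are illustrative placeholders, not the study's actual codebook, which was developed from the clinician comments themselves:

```python
from collections import Counter

# Hypothetical code -> category mapping (illustrative, not the study's codebook)
CATEGORY = {
    "too_many_clicks": "overall usability",
    "slow_interface": "overall usability",
    "inbox_volume": "in-basket",
    "med_list_errors": "medications",
}

def tally_categories(coded_comments):
    """Count comments per category from the codes assigned to each comment."""
    return Counter(CATEGORY[c] for c in coded_comments if c in CATEGORY)

counts = tally_categories(
    ["too_many_clicks", "slow_interface", "inbox_volume", "too_many_clicks"]
)
# counts.most_common() surfaces the most frequent feedback categories
```

Reporting the most frequent categories, as done here, follows directly from `most_common()` on the resulting tally.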
Results
The response rate systemwide was similar for 2018 and 2019 at 16 and 14%, respectively, and 19% for BWH surveyed in 2016. Of the responders, most were primarily affiliated with BWH and BWH–Faulkner Hospitals (2018: 39%, 2019: 33%) or Mass General Hospital (2018: 43%, 2019: 40%). There were slightly more female than male responders in both years, which was representative of the overall clinician population ([Table 2]).
| | 2018 Survey (N = 1,613), n (%) | 2019 Survey (N = 1,632), n (%) |
|---|---|---|
| **Primary affiliation** | | |
| Massachusetts General Hospital | 688 (43) | 646 (40) |
| Brigham and Women's and Faulkner Hospitals | 634 (39) | 538 (33) |
| Newton Wellesley Hospital | 100 (6) | 119 (7) |
| North Shore Medical Center | 85 (5) | 65 (4) |
| McLean Hospital | 20 (1) | 67 (4) |
| Other community[a] | 86 (5) | 197 (12) |
| **Clinician type** | | |
| Specialists | 944 (59) | 970 (59) |
| Primary care clinicians | 270 (17) | 286 (18) |
| Nurse practitioners | 193 (12) | 169 (10) |
| Residents | 132 (8) | 124 (8) |
| Physician assistants | 71 (4) | 79 (5) |
| Other type | 3 (<1) | 4 (<1) |
| **Gender** | | |
| Male | 764 (47) | 791 (48) |
| Female | 849 (53) | 841 (52) |
| **How long have you been using Epic?** | | |
| 0–3 mo | 8 (<1) | 11 (1) |
| 4–6 mo | 56 (3) | 30 (2) |
| 7–12 mo | 174 (11) | 54 (3) |
| 1–3 y | 1,186 (74) | 648 (40) |
| >3 y | 187 (12) | 871 (53) |
| Missing | 2 (<1) | 18 (1) |
a Includes other community hospitals affiliated with Mass General Brigham.
Overall Satisfaction
The responses from the 3 years of surveys for BWH showed a slight increase in mean overall satisfaction over time from 2016 (2.85, 95% CI: 2.71, 2.99) to 2019 (3.21, 95% CI: 3.07, 3.34; p < 0.001). Systemwide, however, there was no significant difference (p = 0.2030) in mean response for overall satisfaction between responders of the 2018 survey (3.14, 95% CI: 3.06, 3.21) and those of the 2019 survey (3.19, 95% CI: 3.12, 3.27). Overall, 46% of responders in 2018 expressed some level of satisfaction (somewhat satisfied, satisfied, very satisfied) and the 2019 survey showed similar levels of satisfaction (47%). In 2018 and 2019, 19% of clinicians responded that they were very dissatisfied with the EHR ([Table 3]). [Fig. 1] shows the mean overall satisfaction over 3 years for BWH-only clinicians and over 2 years for all systemwide clinicians.


#
Satisfaction by Task
[Table 4] includes results of all survey items for clinicians systemwide in 2018 and 2019. In both years, responders rated 15 of the 33 tasks (45%) as difficult or very difficult on average (mean score below 3.0).
Abbreviations: CI, confidence interval; EHR, electronic health record.
Note: Participants had the option of choosing “Very easy (5),” “Easy (4),” “Neutral (3),” “Difficult (2),” “Very difficult (1),” or “Not applicable.” Participants rated their agreement or disagreement with the value, perceived usability, and impact on patient care and workflow statements by responding “Strongly agree (5),” “Agree (4),” “Neutral (3),” “Disagree (2),” “Strongly disagree (1),” or “Not applicable.”
There was no significant difference in the mean response between 2018 and 2019 for the two items related to Reviewing Patient Data. “Reviewing any health changes since you last saw the patient” was rated as difficult with a mean of 2.67 in both years.
When asked to rate the ease or difficulty of tasks related to documentation from very easy (5) to very difficult (1), clinicians reported a more difficult time keeping problem lists updated in 2019 (2.69) versus 2018 (2.78, p = 0.03). Documentation tasks with a mean rating above 3.5 in both years included “creating the visit note” and “documenting allergies.”
For tasks related to patient engagement, clinicians reported a significantly easier time communicating with patients in 2019 (3.13) than in 2018 (3.05, p = 0.03). In both 2018 and 2019, “Providing patients with an electronic previsit form” (2.50, 2.52) and “Incorporating patients' requests for changes to their health record” (2.45, 2.43) were consistently rated as difficult.
In the area of Task and Workflow Management, there were several significant mean differences: “reviewing laboratory results” got easier over time (2018: 3.55, 2019: 3.63, p = 0.01); “identifying when a referral has not been completed” was rated significantly easier in 2019 (2.30) than in 2018 (2.02, p < 0.0001); and “communicating with other clinicians and office staff” got easier (2018: 3.03, 2019: 3.16, p = 0.0002). The highest rated task for ease on average in both 2018 and 2019 was “reviewing radiology results” (3.68, 3.70). In both years, the two tasks rated the most difficult were “monitoring patient medication adherence” (1.99, 2.06) and “identifying when a referral has not been completed” (2.02, 2.30).
In the area of Preventive Care and Panel Management, mean responses to “identifying preventive care services,” “ordering appropriate preventive care services during the visit,” and “making a list of patients based on clinical information” differed significantly (p < 0.05) between 2018 (2.85, 3.01, 2.12) and 2019 (3.02, 3.10, 2.31), showing increased ease of completing these tasks.
Several survey items where participants rated their level of agreement (“Strongly agree (5),” “Agree (4),” “Neutral (3),” “Disagree (2),” “Strongly disagree (1),” or “Not applicable”) with statements related to value, perceived usability, and impact on patient care and workflow showed significant improvement from 2018 to 2019: “I can find information I need easily with this EHR” (2.51 vs. 2.60, p = 0.008), “the information in the EHR is presented in a useful format” (2.45 vs. 2.52, p = 0.05), “the EHR improves quality of patient care” (2.63 vs. 2.80, p < 0.0001), “the EHR helps prevent medical errors” (2.90 vs. 3.01, p < 0.001), “the EHR provides valuable decision support” (2.53 vs. 2.60, p = 0.02), and “the EHR helps provide preventive care” (2.69 vs. 2.80, p = 0.01). The strongest disagreement in both years was with the statement “the EHR helps me focus on patient care rather than the computer” (1.66, 1.70). The strongest agreement (above 3.5) was with the statement, “there are too many alerts and reminders in the EHR.”
Multivariable Model
The first iteration of the multivariable model included only 49% of the sample due to missingness across the many covariates. Consequently, we imputed the missing covariates to better complete the dataset for the multivariable model. The results of this imputed model adjusted for clustering by clinician ([Supplementary Appendix B], available in online version) showed that there was no difference in overall satisfaction between 2018 and 2019 for clinicians systemwide, although other variables were significant predictors. The strongest positive predictor of overall satisfaction was the survey item “The EHR is easy to use” (a one-point increase in agreement that the EHR is easy to use increased the mean overall satisfaction score by 8.12%, p < 0.0001). Other positive predictors of overall satisfaction included “The EHR improves the quality of patient care” (6.6% increase, p < 0.0001) and “The information in the EHR is presented in a useful format” (4.98% increase, p < 0.0001). Variables that decreased overall satisfaction included a higher level of agreement with “The EHR disrupts the way I normally like to do my work” (1.88% decrease, p < 0.0001) and “There are too many alerts and reminders in the EHR” (1.44% decrease, p < 0.001).
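For readers unfamiliar with the percent-change interpretation: under a log-link generalized linear model (one common choice; the study does not restate its link function here, so this is an illustration, not a description of the actual model), a coefficient β on a covariate translates to a 100·(exp(β) − 1)% change in the mean outcome per one-unit increase:

```python
import math

def pct_change(beta):
    """Percent change in the mean outcome per one-unit covariate increase,
    assuming a log-link model: 100 * (exp(beta) - 1)."""
    return 100 * (math.exp(beta) - 1)

# A coefficient of ~0.078 would correspond to roughly the 8.1% increase
# reported for "The EHR is easy to use" (illustrative back-calculation only).
print(round(pct_change(0.078), 1))  # -> 8.1
```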
Clinician Comments
The most frequent comments systemwide from the open-ended question in the 2019 survey could be grouped into the following categories: overall usability, flexibility of use and customization, implementation-specific user experience, training and support, in-basket, medications, and problem list/problem-based charting. [Table 5] includes representative quotes from responders in each category.
Abbreviation: EHR, electronic health record.
Comments regarding overall usability included general statements that the EHR is cumbersome, the user interface is not pleasing, and completing tasks takes too long and requires too many clicks. Flexibility of use and customization included comments regarding a lack of features available to support clinicians in their specific specialty and challenges in easily accessing frequently used values and actions. Providers commented on implementation-specific experiences such as challenges accessing historical data, printing issues, and a lack of standardization of communication and documentation methods among providers. Comments regarding training and support included a desire for more training, additional support, and quicker resolution of support tickets. Three specific areas of the EHR were commented on more than others: in-basket, medications, and problem list and problem-based charting. Providers had challenges with organizing, deduplicating, and searching the in-basket. In addition, many comments centered on medications, specifically errors and challenges organizing the medication list, issues accessing historical prescription information, and irrelevant medication alerts. Finally, many providers commented on the problem list and problem-based charting, citing issues with the interface and usability of problem-based charting and challenges using the EHR to maintain and update problem lists.
Discussion
In this multiyear survey of clinicians at a large health care system, satisfaction with the EHR remained low. Fewer than half of the clinicians in each of the 3 years reported being very satisfied, satisfied, or somewhat satisfied with the EHR overall. The ease/difficulty of completing most tasks did not change significantly between 2016 and 2019. Results of the multivariable analysis indicated that overall satisfaction increased the most as agreement with “The EHR is easy to use” increased. This was the case despite regular vendor updates.
After many years with a homegrown EHR, the transition to Epic was an emotionally charged change for many clinicians. The previous internally developed EHR had been in place for many years and was highly functional and customizable, with significant direction and feedback from physician users driving development and design. Many clinicians were involved in the decision to choose a single-vendor system to replace the existing homegrown medical record and the multiple ancillary systems, though they represented a fraction of all clinicians in the integrated delivery system. Shifting to a commercial record was challenging for clinicians in many ways, perhaps especially in that usability was perceived as lower, despite upgrades postimplementation. In their review, Huang et al identified several challenges in transitioning from one EHR to another, including financial considerations, clinician expectations, and patient safety considerations.[13] Recognizing that change is difficult, they suggest attempting to “manage expectations and provide additional training.” In response to dissatisfaction with the EHR, institutions across the health system launched efforts to address EHR burden and usability. In addition to addressing usability issues through enhancements to the vendor EHR, 1-hour one-on-one training sessions through a third party addressed clinician challenges primarily with documentation, clinical review, orders, and in-basket management; these sessions were well received but utilized by only 33% of providers at BWH. Still, some tasks remain difficult to accomplish, and designing applications with good usability is a better strategy overall than offering more training. Other strategies used in the network included implementing physician advisory committees to provide feedback on EHR issues, as well as developing scribe programs in some specialties and voice recognition to improve clinicians' experiences with the EHR.
Future work would include assessing the impact of additional training and other programs on physician satisfaction.
The few years after the transition showed little improvement in clinicians' overall satisfaction with the EHR. While a difficult period postimplementation is the rule, overall satisfaction remained low, and several tasks were rated somewhat difficult. Anecdotally, some treatments that come up relatively infrequently, such as ordering outpatient transfusions, are especially hard, and discovering how to accomplish them can be challenging. Other tasks, such as renewing a prescription when covering for another physician, can be difficult at times. Such issues with the usability of EHRs are a source of dissatisfaction among clinicians and, in comparison to other commonly used applications, EHRs rate significantly lower on usability.[10]
Several efforts have been reported in the literature describing recommendations to address some of the pain points identified in this survey. Many have sought to address the challenges of monitoring medication interactions, medication reconciliation, and medication ordering with EHRs.[28] [29] [30] [31] Some studies have suggested using different approaches to improve usability. For example, one study highlighted the potential of a redesign focused on indications-based prescribing, which rated high on usability, to ease this burden.[31] Interventions designed to address usability issues related to allergy documentation have shown potential to improve clinician satisfaction as well.[32] Clinical decision support systems have attempted to address the design, usability, and alert fatigue associated with some aspects of the EHR.[30] [32] [33] [34] One institution was successful in reducing alerts and associated clicks through an initiative to reduce burnout by optimizing clinical decision support.[35] Other initiatives focused on training and evaluation have shown promise.[36] However, the changes introduced by the vendor during this interval have been more incremental to date.
The Office of the National Coordinator has required that vendors follow a user-centered design process to be certified. The certification requirement for usability of an EHR vendor does not include the site-specific EHR configuration, which has created a usability reality gap.[37] There are challenges in improving the usability of commercial systems because the institution's options are limited by the structure of the EHR, and customizations and implementation configurations can have an impact on the usability of the system.[38] Regardless, there still appear to be several usability concerns that need to be addressed by the vendors in partnership with their users.[39] It is clear that we need to continue to improve EHRs so they function better for clinicians,[40] and doing this is likely to require more than minor incremental improvements. Other analytical approaches such as A/B testing, in which two design options are compared to identify which performs better, should also be leveraged; this is routine in other industries.[41] [42] [43]
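An A/B comparison of two interface designs can be evaluated with a standard two-proportion z-test on, say, task-completion rates. The numbers and function name below are illustrative, not drawn from any actual EHR experiment:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic comparing success rates of designs A and B (pooled variance)."""
    pa, pb = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)          # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    return (pa - pb) / se

# Toy example: design A completes the task 180/200 times, design B 150/200
z = two_proportion_z(180, 200, 150, 200)
# |z| > 1.96 would indicate a significant difference at the 5% level
```

In practice, randomizing which clinicians see each design and pre-specifying the metric and sample size are what make such a comparison trustworthy.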
Limitations of this study included the low response rates, though we did hear from a substantial number of clinicians who expressed consistent concerns. The response rate was likely low because many providers were not practicing clinically, but we were not able to exclude them. The EHR vendor was also conducting surveys during these years (though these were not released to frontline users), which may have led to survey fatigue. This was the experience of one health system and one EHR implementation, though other studies have reported similar results. Also, while many of our results were statistically significant, they are not necessarily clinically significant. We did not examine other factors contributing to satisfaction such as training, organizational culture, or frequency of use and prior experience with the EHR, so future research should focus on the broader user experience and sociotechnical factors. We plan to continue the survey effort with a focus on shorter surveys to understand how specific changes made to the EHR impact clinician satisfaction.
Conclusion
In a series of surveys assessing clinician satisfaction with a commercial EHR, we found a consistently moderate level of satisfaction, as well as several tasks that clinicians rated as somewhat difficult. Improvements in vendor EHR usability and site implementation should continue to be a focus area to reduce clinician burden and improve satisfaction and should include more substantial changes than they have to date.
Clinical Relevance Statement
Capturing clinician experience with the EHR over time can help assess the success or failure of changes made to address usability and clinician burden. This is important for increasing clinician effectiveness, efficiency, and safety in practice, potentially leading to better patient care and more satisfied clinicians.
Multiple-Choice Questions
1. What variable was the most significant positive predictor of overall satisfaction with the EHR?

   a. Agreement with the statement “The EHR disrupts the way I normally like to do my work.”
   b. Agreement with the statement “The EHR is easy to use.”
   c. Agreement with the statement “The EHR provides valuable decision support.”
   d. Ease of “Creating the visit note.”

   The correct answer is option b. Clinician agreement with the statement “The EHR is easy to use” was the most significant positive predictor of overall satisfaction with the EHR. A one-point increase in agreement that the EHR is easy to use increased the mean overall satisfaction score by 8.1%.

2. How did clinician overall satisfaction change over time for responders at BWH compared with the change for clinicians systemwide?

   a. Satisfaction remained the same for BWH clinicians and clinicians systemwide.
   b. Satisfaction remained the same for BWH clinicians and increased significantly for providers systemwide.
   c. Satisfaction increased slightly for BWH clinicians and did not increase significantly for providers systemwide.
   d. Satisfaction decreased slightly for BWH clinicians and increased slightly for clinicians systemwide.

   The correct answer is option c. Satisfaction from 2016 to 2019 for BWH responders showed a slight but significant increase from 2.85 to 3.21 (p = 0.0002). For clinicians systemwide, there was no significant change in satisfaction.
Conflict of Interest
D.W.B. reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from ValeraHealth, equity from Clew, equity from MDClone, personal fees and equity from AESOP, personal fees and equity from FeelBetter, and grants from IBM Watson Health, outside the submitted work. All other authors report no conflict of interest.
Acknowledgments
We would like to acknowledge the multiple research assistants who aided in administering and analyzing the surveys over the years. We would also like to acknowledge the clinicians who took time out of their busy schedules to complete our surveys.
Protection of Human and Animal Subjects
This project was undertaken as a quality improvement initiative at Mass General Brigham and as such was not formally supervised by the Institutional Review Board per their policies.
References
- 1 Office of the National Coordinator for Health Information Technology. Adoption of Electronic Health Records by Hospital Service Type 2019–2021, Health IT Quick Stat #60. Washington, DC: U.S. Department of Health and Human Services; 2022
- 2 Burke HB, Sessums LL, Hoang A. et al. Electronic health records improve clinical note quality. J Am Med Inform Assoc 2015; 22 (01) 199-205
- 3 Cimino JJ. Improving the electronic health record–are clinicians getting what they wished for?. JAMA 2013; 309 (10) 991-992
- 4 Evans RS. Electronic health records: then, now, and in the future. Yearb Med Inform 2016; (Suppl 1): S48-S61
- 5 Lawrence JE, Cundall-Curry D, Stewart ME, Fountain DM, Gooding CR. The use of an electronic health record system reduces errors in the National Hip Fracture Database. Age Ageing 2019; 48 (02) 285-290
- 6 Adler-Milstein J, Zhao W, Willard-Grace R, Knox M, Grumbach K. Electronic health records and burnout: time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. J Am Med Inform Assoc 2020; 27 (04) 531-538
- 7 Howe JL, Adams KT, Hettinger AZ, Ratwani RM. Electronic health record usability issues and potential contribution to patient harm. JAMA 2018; 319 (12) 1276-1278
- 8 McGreevey III JD, Mallozzi CP, Perkins RM, Shelov E, Schreiber R. Reducing alert burden in electronic health records: state of the art recommendations from four health systems. Appl Clin Inform 2020; 11 (01) 1-12
- 9 U.S. General Services Administration. System Usability Scale (SUS). Washington, DC: U.S. Government Printing Office; 2006. Usability.gov
- 10 Melnick ER, Dyrbye LN, Sinsky CA. et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020; 95 (03) 476-487
- 11 Gomes KM, Ratwani RM. Evaluating improvements and shortcomings in clinician satisfaction with electronic health record usability. JAMA Netw Open 2019; 2 (12) e1916651
- 12 Krousel-Wood M, McCoy AB, Ahia C. et al. Implementing electronic health records (EHRs): health care provider perceptions before and after transition from a local basic EHR to a commercial comprehensive EHR. J Am Med Inform Assoc 2018; 25 (06) 618-626
- 13 Huang C, Koppel R, McGreevey III JD, Craven CK, Schreiber R. Transitions from one electronic health record to another: challenges, pitfalls, and recommendations. Appl Clin Inform 2020; 11 (05) 742-754
- 14 Hanauer DA, Branford GL, Greenberg G. et al. Two-year longitudinal assessment of physicians' perceptions after replacement of a longstanding homegrown electronic health record: does a J-curve of satisfaction really exist?. J Am Med Inform Assoc 2017; 24 (e1): e157-e165
- 15 Ehrlich JR, Michelotti M, Blachley TS. et al. A two-year longitudinal assessment of ophthalmologists' perceptions after implementing an electronic health record system. Appl Clin Inform 2016; 7 (04) 930-945
- 16 Kjeldskov J, Skov MB, Stage J. A longitudinal study of usability in health care: does time heal?. Int J Med Inform 2010; 79 (06) e135-e143
- 17 Tutty MA, Carlasare LE, Lloyd S, Sinsky CA. The complex case of EHRs: examining the factors impacting the EHR user experience. J Am Med Inform Assoc 2019; 26 (07) 673-677
- 18 Gardner RL, Cooper E, Haskell J. et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 2019; 26 (02) 106-114
- 19 Williams DC, Warren RW, Ebeling M, Andrews AL, Teufel Ii RJ. Physician use of electronic health records: survey study assessing factors associated with provider reported satisfaction and perceived patient impact. JMIR Med Inform 2019; 7 (02) e10949
- 20 Tajirian T, Stergiopoulos V, Strudwick G. et al. The influence of electronic health record use on physician burnout: cross-sectional survey. J Med Internet Res 2020; 22 (07) e19274
- 21 Yan Q, Jiang Z, Harbin Z, Tolbert PH, Davies MG. Exploring the relationship between electronic health records and provider burnout: a systematic review. J Am Med Inform Assoc 2021; 28 (05) 1009-1021
- 22 Meyerhoefer CD, Sherer SA, Deily ME. et al. Provider and patient satisfaction with the integration of ambulatory and hospital EHR systems. J Am Med Inform Assoc 2018; 25 (08) 1054-1063
- 23 Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)–a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009; 42 (02) 377-381
- 24 Harris PA, Taylor R, Minor BL. et al; The REDCap Consortium. Building an international community of software partners. J Biomed Inform 2019; 95: 103208
- 25 LimeSurvey GmbH. LimeSurvey: an open source survey tool. Hamburg, Germany: LimeSurvey; 2006. http://www.limesurvey.org
- 26 Agency for Healthcare Research and Quality. Primary Care Information Project (PCIP) Post-Electronic Health Record Implementation: Survey of Providers. New York, NY: New York City Department of Health and Mental Hygiene; 2010
- 27 Adler KG, Edsall RL. The 2012 FPM survey of user satisfaction with EHR systems. Fam Pract Manag 2012; 19 (03) 19-20
- 28 Marcilly R, Ammenwerth E, Roehrer E, Niès J, Beuscart-Zéphir MC. Evidence-based usability design principles for medication alerting systems. BMC Med Inform Decis Mak 2018; 18 (01) 69
- 29 Marien S, Legrand D, Ramdoyal R. et al. A user-centered design and usability testing of a web-based medication reconciliation application integrated in an eHealth network. Int J Med Inform 2019; 126: 138-146
- 30 Horsky J, Phansalkar S, Desai A, Bell D, Middleton B. Design of decision support interventions for medication prescribing. Int J Med Inform 2013; 82 (06) 492-503
- 31 Garabedian PM, Wright A, Newbury I. et al. Comparison of a prototype for indications-based prescribing with 2 commercial prescribing systems. JAMA Netw Open 2019; 2 (03) e191514
- 32 Wang L, Blackley SV, Blumenthal KG. et al. A dynamic reaction picklist for improving allergy reaction documentation in the electronic health record. J Am Med Inform Assoc 2020; 27 (06) 917-923
- 33 Nanji KC, Garabedian PM, Langlieb ME. et al. Usability of a perioperative medication-related clinical decision support software application: a randomized controlled trial. J Am Med Inform Assoc 2022; 29 (08) 1416-1424
- 34 Chokshi SK, Belli HM, Troxel AB. et al. Designing for implementation: user-centered development and pilot testing of a behavioral economic-inspired electronic health record clinical decision support module. Pilot Feasibility Stud 2019; 5: 28
- 35 McCoy AB, Russo EM, Johnson KB. et al. Clinician collaboration to improve clinical decision support: the Clickbusters initiative. J Am Med Inform Assoc 2022; 29 (06) 1050-1059
- 36 English EF, Holmstrom H, Kwan BW. et al. Virtual sprint outpatient electronic health record training and optimization effect on provider burnout. Appl Clin Inform 2022; 13 (01) 10-18
- 37 Ratwani RM, Sinsky CA, Melnick ER. Closing the Electronic Health Record Usability Gap. Bill of Health, Harvard Law School (harvard.edu); 2020
- 38 Pierce RP, Eskridge BR, Ross B, Day MA, Dean B, Belden JL. Improving the user experience with discount site-specific user testing. Appl Clin Inform 2022; 13 (05) 1040-1052
- 39 Rizvi RF, Marquard JL, Hultman GM, Adam TJ, Harder KA, Melton GB. Usability evaluation of electronic health record system around clinical notes usage-an ethnographic study. Appl Clin Inform 2017; 8 (04) 1095-1105
- 40 Hettinger AZ, Melnick ER, Ratwani RM. Advancing electronic health record vendor usability maturity: progress and next steps. J Am Med Inform Assoc 2021; 28 (05) 1029-1031
- 41 Kohavi R, Thomke S. The surprising power of online experiments. Harvard Business Review 2017; 95 (05) 74-82
- 42 Austrian J, Mendoza F, Szerencsy A. et al. Applying A/B testing to clinical decision support: rapid randomized controlled trials. J Med Internet Res 2021; 23 (04) e16651
- 43 Kohavi R, Tang D, Xu Y, Hemkens LG, Ioannidis JPA. Online randomized controlled experiments at scale: lessons and extensions to medicine. Trials 2020; 21 (01) 150
Publication History
Received: January 5, 2023
Accepted: May 12, 2023
Article published online:
August 16, 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

