CC BY-NC-ND 4.0 · Appl Clin Inform 2023; 14(02): 356-364
DOI: 10.1055/s-0043-1767684
Research Article

Crowdsourcing Electronic Health Record Improvements at Scale across an Integrated Health Care Delivery System

Geetanjali Rajamani
1   Medical School, University of Minnesota, Minneapolis, Minnesota, United States
2   Center for Learning Health Systems Sciences, University of Minnesota, Minneapolis, Minnesota, United States
,
Molly Diethelm
2   Center for Learning Health Systems Sciences, University of Minnesota, Minneapolis, Minnesota, United States
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
,
Melissa A. Gunderson
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
4   Department of Surgery, University of Minnesota, Minneapolis, Minnesota, United States
,
Venkata S. M. Talluri
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
,
Patricia Motz
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
,
Jennifer M. Steinhaus
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
,
Anne E. LaFlamme
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
6   School of Nursing, University of Minnesota, Minneapolis, Minnesota, United States
,
Bryan Jarabek
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
,
Tori Christiaansen
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
,
Jeffrey T. Blade
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
,
Sameer Badlani
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States
,
Genevieve B. Melton
2   Center for Learning Health Systems Sciences, University of Minnesota, Minneapolis, Minnesota, United States
3   Institute for Health Informatics, University of Minnesota, Minneapolis, Minnesota, United States
4   Department of Surgery, University of Minnesota, Minneapolis, Minnesota, United States
5   Information Technology, Fairview Health Services, Minneapolis, Minnesota, United States

Abstract

Background and Objective Despite widespread adoption of electronic health records (EHRs), these systems have significant room for improved efficiency and efficacy. While crowdsourcing of EHR improvement ideas has been reported, little is known about how this might work in practice across an integrated health care delivery system.

Methods Our program solicited EHR improvement submissions during two timeframes across 10 hospitals and 60 clinics in an upper-Midwest integrated health care delivery system. Submissions were primarily collected via an EHR help feature.

Results A total of 262 and 294 submissions were received in 2019 and 2022, respectively, with the largest share initiated by physicians (73.5 and 46.9% in 2019 and 2022) specializing in family medicine (52.0 and 59.3%). In 2022, the program reached a larger variety of personnel than in 2019, with 53.0% of submissions coming from advanced practice providers, nurses, administrative staff, and other roles (p < 0.0001). Many ideas (36.4 and 50.0% in 2019 and 2022) reflected a lack of user understanding of EHR features and were addressed through training/education. Significant (27.1 and 25.9%) or simple (24.0 and 14.7%) EHR optimizations were required to address most remaining suggestions, with a number already part of planned EHR improvement projects (16.3 and 17.6%).

Conclusion Our crowdsourcing approach to EHR improvement ideas gave clinicians and staff the opportunity to address frustrations with the EHR and offered concrete feedback and solutions. While previous studies have suggested that EHR technology improvements are paramount, we observed that many users misunderstood existing EHR features, highlighting the need for improved EHR user competency and training.



Background and Significance

While electronic health records (EHRs) are nearly ubiquitous in health care (89.9% of office-based physicians reported access to EHRs and 72.3% reported use of a certified EHR in 2019[1]) and have undeniably improved certain aspects of care, they remain a source of clinician burnout and discontent. Examples of EHR benefits include improved diagnostic practices, reduced medical errors, enhanced data accessibility, and increased patient engagement.[2] [3] In contrast, EHRs have also been associated with the creation of new errors, provider and clinician discontent, information overload, difficulty navigating the system, and excessive data entry.[4] [5] As a result, strategies to improve the content, usage, and functionality of EHRs are of the utmost importance.

Several approaches to address and improve the EHR have been described, such as those by Shah et al, who propose improved user training, rerouting of patient messages to other clinical team members, and team-based workflows (e.g., increased use of scribes and other supportive office staff).[6] Similarly, Haskell et al showed that use of voice-based dictation software helped reduce burnout among a group of Rhode Island physicians.[7] McCoy et al launched the “Clickbusters” initiative, which aimed to reduce burnout by strategically decreasing the number of clinical decision support alerts by more than 15%.[8] A group of diverse stakeholders, including academic informaticians, health information technology (IT) vendors, regulators, informatics societies, and others, is also tackling a range of approaches to improve the process of medical documentation at a national scale through the 25 by 5 initiative.[9]

In 2018, an influential perspective piece entitled “Getting Rid of Stupid Stuff” (“GROSS”) shared the approach of directly asking providers and nurses for suggestions on how to improve or change the EHR.[10] The authors reported three high-level themes from their experience: documentation that was never meant to occur, documentation that could be made more efficient, and required documentation that was confusing.[10] The study found many examples of unnecessary EHR documentation consuming precious clinician time. Following this, “Beyond-GROSS” outlined an approach for implementing a similar program at Mount Sinai to obtain feedback from physicians in the Gastroenterology Department about general workflow and process issues, in addition to EHR-specific issues.[11] Other studies have applied similar crowdsourcing methodology to problems other than EHR improvement, such as validating an optical mark recognition system for coronavirus disease 2019 (COVID-19) data extraction[12] and validating a knowledge base of problem–medication pairs.[13]

Inspired by these projects, our team launched an initiative, internally branded as “Joy in Practice,” across an integrated health care delivery system. The goals of our program are similar to those of the “GROSS” program and the planned “Beyond-GROSS” project: EHR and workflow process improvement suggestions are collected from clinic and hospital personnel and acted upon, with the goal of improving user experience and ultimately overall job experience. In contrast to previous reports, however, we describe our experience across two separate rounds of idea solicitation within an integrated health care delivery system, encompassing a full spectrum of personnel and practice settings, along with the program's associated learnings to date.



Methods

Setting, Objective, and Hypotheses of Crowdsourcing Initiative

This study took place at M Health Fairview, an academic health system partnership between Fairview Health Services, University of Minnesota Physicians, and University of Minnesota that includes 10 hospitals, 60 primary care clinics, over 100 specialties, and more than 34,000 employees as of 2019.[14] [15] M Health Fairview has an enterprise EHR (Epic Systems, Verona, Wisconsin, United States).[16]

Our crowdsourcing program was deployed during two different timeframes: April to July 2019 and February to March 2022. The program stopped accepting submissions once 250 to 300 had been received, due to the manual labor required to analyze and act upon each one, as described below. This cap was decided upon after the 2019 iteration and applied to the 2022 iteration; thus, the 2022 timeframe was slightly shorter than 2019 because more submissions were gathered in a shorter period of time. While the original plan was for this program to take place at least annually, the second launch was delayed until 2022 by the COVID-19 pandemic. Because of the time between deployments, the 2019 and 2022 cohorts were likely heterogeneous, with differences attributable to the intervening period and the pandemic.

This initiative was advertised widely to M Health Fairview personnel (both employed and affiliated individuals) via internal systemwide corporate communications and news channels such as a primary care newsletter. While the entire M Health Fairview digital and information services team supported this program, a team of clinical informaticists focused on design and optimization of health IT (the “core team”) was tasked with tracking submissions, determining feasibility, and designing solutions in collaboration with others. The core team worked directly with medical informatics partners (e.g., Chief Medical Information Officers, Associate Chief Medical Information Officers, Informatics Medical Directors, and Clinical Informatics Fellows); health IT training and support, health information management, and EHR application team members were consulted frequently, along with clinical operations, to clarify submissions and assist with reviews.

The program was sponsored by the Chief Wellness Officer and Chief Digital Officer and had the following overarching goals:

  • Increase clinician satisfaction by providing a mechanism to report issues and ideas for improvement in the EHR that would save time and effort, and potentially increase job satisfaction.

  • Ensure clear and timely communication around each suggestion with each user to ensure EHR users felt heard.

  • Analyze submissions to determine submission “hot spots” and learnings from the program.

We had several hypotheses in launching this program. First, we expected to see increased engagement (measured as the number of submissions) with each successive round of the program. Second, we expected that the program might give us better insight into the nature of common issues staff have with the EHR (“hot spots”); in particular, we predicted that many issues would stem from a lack of understanding of existing EHR features.

The Institutional Review Board determined this study to be Not Human Subjects Research.



Data Collection and Analysis

For 2019 data collection, ideas could be submitted in one of three ways: (1) via a help function in the EHR (Epic's “HelpDesk” button), (2) via a digital and information services team help ticket (“ServiceNow” ticket), or (3) via direct email to coordinators on the core team administering the program. Because users found the HelpDesk workflow efficient and responsive “in the moment” and within their clinical EHR workflow, submissions were accepted only via the HelpDesk feature for the February 2022 program iteration. Note that the HelpDesk feature automatically attached a screenshot of the issue to each submission; thus, all 2022 submissions included screenshots, which were stored on a restricted, HIPAA-compliant Microsoft SharePoint site. These screenshots were extremely beneficial to the core team in understanding the nature of each submission.

All submissions were manually collated and tracked, including information about the submitter, when the submission was received, all email communications sent regarding the submission, and the method of resolution. Additionally, for every submission received, a detailed email was sent to the submitter outlining what was being done to address the request or, for rejected suggestions, why the request was not feasible. The cost of this initiative was absorbed as part of the core team's informatics responsibilities.

All individuals submitting an item were categorized by role as follows: physician, advanced practice provider (APP), registered nurse (RN), administrative staff, ancillary staff, pharmacist, chaplain, other, and unknown. Physicians and APPs were further categorized by area of specialty, whereas RNs were categorized by setting of service (i.e., inpatient, outpatient, other). If a submission was missing information, additional follow-up was conducted to help make the submission complete, and the nature of these interactions was tracked both with submitters and within digital and information services. Submissions were then categorized as follows: (1) already being addressed by another project in M Health Fairview; (2) already included in an upcoming system update; (3) feasible but requiring prioritization by the submitter's department; (4) not possible (due to policy restrictions, lack of response from the user to follow-up questions, unavailability of needed tools, or hard coding in the EHR); or (5) feasible.

Each submission that was feasible to address was broken down by avenue of resolution. The categories for avenue of resolution were inspired by the work of Ashton[10] and Otokiti et al,[11] along with several additional original categories, as summarized in [Table 1]. Note that the program was designed to provide whatever solution was necessary (e.g., training, quick fixes, etc.); solutions were formally categorized as in [Table 1] upon the program's completion. Finally, we classified the end result of each submission. Submissions were classified as “Complete” if they were addressed (internally, by another project/update/department, or through completion of any needed resolution, such as training administered or build created). Submissions were “Rejected” if deemed not feasible or if more information could not be gathered. Submissions were categorized as “Ready for Build” if the work was approved and planned and the remaining step was to complete the build in the EHR, and “In Process” if more information was being gathered regarding how to complete the submission or approval was not yet obtained.
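
To make this tracking scheme concrete, the sketch below shows one way a submission record covering the fields described above (submitter role, specialty or setting, communications, avenue of resolution, and end result) might be represented in Python. This is a hypothetical illustration; the core team's actual tracking tooling is not described in this report.

```python
# Hypothetical sketch of a submission-tracking record mirroring the
# categories described above; not the core team's actual tooling.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Role(Enum):
    PHYSICIAN = "physician"
    APP = "advanced practice provider"
    RN = "registered nurse"
    ADMINISTRATIVE = "administrative staff"
    ANCILLARY = "ancillary staff"
    PHARMACIST = "pharmacist"
    CHAPLAIN = "chaplain"
    OTHER = "other"
    UNKNOWN = "unknown"


class Resolution(Enum):
    """Avenues of resolution for feasible submissions (see Table 1 below)."""
    TRAINING = "training/education"
    SIGNIFICANT_BUILD = "significant new build needed"
    QUICK_FIX = "quick build fix/efficiency fix"
    ISOLATED_INCIDENT = "isolated incident"
    NON_EHR = "non-EHR issue/bigger than EHR"
    ELIMINATION = "elimination issue"
    NEW_FUNCTIONALITY = "need for new EHR functionality"


class EndResult(Enum):
    COMPLETE = "complete"
    REJECTED = "rejected"
    READY_FOR_BUILD = "ready for build"
    IN_PROCESS = "in process"


@dataclass
class Submission:
    received: date
    submitter_role: Role
    description: str
    specialty: str | None = None          # tracked for physicians and APPs
    setting: str | None = None            # tracked for RNs (inpatient/outpatient/other)
    emails_sent: list[str] = field(default_factory=list)  # all communications logged
    resolution: Resolution | None = None  # assigned once deemed feasible
    end_result: EndResult | None = None   # complete/rejected/ready for build/in process
```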

Table 1 Sources for categorization of avenue of resolution of submissions

| Resolution category | Source |
|---|---|
| Training/education: requested feature exists, but more training is needed to educate clinicians on using it | Ashton and Otokiti |
| Significant new build needed: requires a new moderate to large EHR build | Otokiti |
| Quick build fix/efficiency fix: add a quick button, make an existing tool more efficient, etc. | Ashton and Otokiti |
| Isolated incident: issue is limited to submitter; submit an incident ticket | New |
| Non-EHR issue/bigger than EHR: primarily a workflow process issue, but could include an EHR component as well | Otokiti |
| Elimination issue: feature that is redundant/unnecessary | Ashton and Otokiti |
| Need for new EHR functionality: contact vendor | New |

Abbreviation: EHR, electronic health record.




Results

Submission Content and Submitters

Over the two time periods, there were a total of 556 submissions (262 in 2019 and 294 in 2022), of which 543 were included in the final analysis; 13 submissions from 2019 were excluded due to lack of documentation regarding follow-up ([Table 2]). Approximately 18 submissions in 2022 centered on the same issue, namely, a recently updated obstetrician/gynecologist (OB/Gyn) flowsheet that was confusing to many users. While these submissions were addressed by the core team as a group, they were counted in analyses as distinct submissions since they came from different personnel and sometimes raised other issues as well.

Table 2 Submissions by submitter role, provider specialty, and registered nurse practice setting

| Submissions and submitters | 2019 N (%) | 2022 N (%) | Overall N (%) |
|---|---|---|---|
| Total submissions | 262 (100.0) | 294 (100.0) | 556 (100.0) |
| Submissions with documentation | 249 (95.0) | 294 (100.0) | 543 (97.7) |
| Submitter role | | | |
| Physician | 183 (73.5) | 138 (46.9) | 321 (59.1) |
| Advanced practice provider (APP)[a] | 19 (7.6) | 7 (2.4) | 26 (4.8) |
| Registered nurse (RN) | 40 (16.1) | 93 (31.6) | 133 (24.5) |
| Administrative staff[b] | 3 (1.2) | 21 (7.1) | 24 (4.4) |
| Ancillary staff[c] | 1 (0.4) | 14 (4.8) | 15 (2.8) |
| Pharmacist | 0 (0.0) | 12 (4.1) | 12 (2.2) |
| Other[d] | 0 (0.0) | 6 (2.0) | 6 (1.1) |
| Chaplain | 0 (0.0) | 2 (0.7) | 2 (0.4) |
| Unknown | 3 (1.2) | 1 (0.3) | 4 (0.7) |
| Total | 249 (100.0) | 294 (100.0) | 543 (100.0) |
| p-Value[e] | <0.0001 | | |
| Physician/APP specialty | | | |
| Family medicine | 105 (52.0) | 86 (59.3) | 191 (55.0) |
| Internal medicine | 30 (14.9) | 10 (6.9) | 40 (11.5) |
| Internal medicine subspecialty[f] | 17 (8.4) | 10 (6.9) | 27 (7.8) |
| OB/Gyn | 13 (6.4) | 19 (13.1) | 32 (9.2) |
| Pediatrics | 10 (5.0) | 2 (1.4) | 12 (3.5) |
| Med/Peds | 3 (1.5) | 15 (10.3) | 18 (5.2) |
| Emergency medicine | 6 (3.0) | 0 (0.0) | 6 (1.7) |
| Neurology/psychiatry | 5 (2.5) | 0 (0.0) | 5 (1.4) |
| Dermatology | 4 (2.0) | 0 (0.0) | 4 (1.2) |
| Otolaryngology | 0 (0.0) | 1 (0.7) | 1 (0.3) |
| Surgery | 1 (0.5) | 2 (1.4) | 3 (0.9) |
| Other | 2 (1.0) | 0 (0.0) | 2 (0.6) |
| Unknown | 6 (3.0) | 0 (0.0) | 6 (1.7) |
| Total | 202 (100.0) | 145 (100.0) | 347 (100.0) |
| p-Value | <0.0001 | | |
| RN practice setting | | | |
| Inpatient | 31 (77.5) | 72 (77.4) | 103 (77.4) |
| Outpatient | 2 (5.0) | 8 (8.6) | 10 (7.5) |
| Unknown | 7 (17.5) | 13 (14.0) | 20 (15.0) |
| Total | 40 (100.0) | 93 (100.0) | 133 (100.0) |
| p-Value | 0.8 | | |

Abbreviation: OB/Gyn, obstetrician/gynecologist.

a APP includes physician assistants and nurse practitioners.

b Administrative staff includes front desk staff, registrar/billing, scheduling, and service line managers.

c Ancillary staff includes radiologic technologists, laboratory staff, optometry technicians, medical assistants, nursing assistants, and licensed practical nurses.

d The other category includes social workers, physical therapists, and clinical psychologists.

e p-Value calculated using Fisher's exact test (due to some small cell sizes) comparing 2019 versus 2022.

f Internal medicine subspecialty includes hematology/oncology, cardiology, infectious disease, gastroenterology, and palliative care.

Physicians were the largest category of submitters, at 73.5 and 46.9% in 2019 and 2022, respectively. We observed submissions from a greater percentage of nurses, administrative staff, ancillary staff, pharmacists, and people classified as “other” in 2022. Specifically, 53.0% of submitters were nonphysicians in 2022 as compared with only 26.5% in 2019 (p < 0.0001).
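
As an illustration of the statistics reported here (see the footnotes of [Tables 2] and [3]), the 2019-versus-2022 shift toward nonphysician submitters can be reproduced from the Table 2 counts with a two-by-two Fisher's exact test. The sketch below, in Python with SciPy, collapses the roles to physician versus nonphysician for simplicity; the published p-values for the full multicategory breakdowns would require an exact test on the larger table.

```python
# Minimal sketch: Fisher's exact test on the 2x2 collapse of Table 2 counts
# (physician vs. nonphysician submitters, 2019 vs. 2022).
from scipy.stats import fisher_exact

table = [
    [183, 249 - 183],  # 2019: physician, nonphysician submissions
    [138, 294 - 138],  # 2022: physician, nonphysician submissions
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.1e}")  # p is far below 0.0001
```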

[Table 2] also shows the specialty breakdown of physicians and APPs. During both time periods, most submissions came from family medicine (55.0% over both years). In 2019, a higher percentage of submissions came from internal medicine subspecialties than in 2022 (8.4 vs. 6.9%, p < 0.0001), whereas 2022 had higher percentages of OB/Gyn providers (6.4% in 2019 vs. 13.1% in 2022, p < 0.0001) and Med/Peds providers (1.5% in 2019 vs. 10.3% in 2022, p < 0.0001). During both time periods, the majority of nursing submissions came from the inpatient setting.

More than 90% of submissions in both 2019 and 2022 had no missing information and could be further classified in terms of feasibility ([Table 3]). Over 50% (55.4% in 2019 and 51.1% in 2022) of submissions were considered feasible and addressed by the core team. About a quarter in both years were not possible and were ultimately rejected, whereas the remaining quarter were either already part of upcoming projects/system upgrades or sent back to the departments for prioritization against other potential initiatives. For example, a submission to update the pneumonia vaccine guidance information in wellness visits had already been incorporated into an upcoming system upgrade, and a submission requesting that the PHQ-9 questionnaire be automatically assigned to patients with depression on their problem list was already being addressed in a system-wide project.

Table 3 Vetting, feasibility, resolution, and end result of submissions

| Vetting, feasibility, resolution, and end result | 2019 N (%) | 2022 N (%) | Overall N (%) |
|---|---|---|---|
| Initial assessment of submission | | | |
| No missing information | 233 (93.6) | 278 (94.6) | 511 (94.1) |
| Further discussion with informatics and IT teams regarding feasibility needed | 11 (4.4) | 10 (3.4) | 21 (3.9) |
| More details required from submitter | 5 (2.0) | 6 (2.0) | 11 (2.0) |
| Total | 249 (100.0) | 294 (100.0) | 543 (100.0) |
| p-Value[a] | 0.9 | | |
| Initial feasibility of submissions with complete information | | | |
| Submission feasible and addressed | 82 (35.2) | 71 (25.5) | 153 (29.9) |
| Submission feasible and addressed via training | 47 (20.2) | 71 (25.5) | 118 (23.1) |
| Submission not possible | 58 (24.9) | 62 (22.3) | 120 (23.5) |
| Already included in another project | 32 (13.7) | 34 (12.2) | 66 (12.9) |
| Submission feasible but moderate to large effort required; back to department for prioritization | 8 (3.4) | 25 (9.0) | 33 (6.5) |
| Already included in upcoming EHR system upgrade | 6 (2.6) | 15 (5.4) | 21 (4.1) |
| Total | 233 (100.0) | 278 (100.0) | 511 (100.0) |
| p-Value | 0.05 | | |
| Category of resolution of feasible submissions | | | |
| Training/education | 47 (36.4) | 71 (50.0) | 118 (43.5) |
| Significant new build needed | 35 (27.1) | 37 (25.9) | 72 (26.6) |
| Quick build fix/efficiency fix | 31 (24.0) | 21 (14.7) | 52 (19.2) |
| Isolated incident | 5 (4.1) | 11 (7.7) | 16 (5.9) |
| Non-EHR issue/bigger than EHR | 6 (4.9) | 0 (0.0) | 6 (2.2) |
| Elimination issue | 3 (2.4) | 0 (0.0) | 3 (1.1) |
| Need for new EHR functionality (which was added) | 2 (1.6) | 2 (1.4) | 4 (1.5) |
| Total | 129 (100.0) | 142 (100.0) | 271 (100.0) |
| p-Value | <0.01 | | |
| End result of submissions | | | |
| Completed | 173 (69.5) | 157 (53.4) | 330 (60.8) |
| Rejected | 62 (24.9) | 65 (22.1) | 127 (23.4) |
| Ready for build | 0 (0.0) | 60 (20.4) | 60 (11.1) |
| In process | 14 (5.6) | 12 (4.1) | 26 (4.8) |
| Total | 249 (100.0) | 294 (100.0) | 543 (100.0) |
| p-Value | <0.0001 | | |

Abbreviations: EHR, electronic health record; IT, information technology.

a p-Value calculated using Fisher's exact test (due to some small cell sizes) comparing 2019 versus 2022.


[Table 3] also summarizes how feasible submissions were ultimately handled. The largest category of resolution was training and education (36.4 and 50.0% in 2019 and 2022, p < 0.01), suggesting that many perceived issues with the EHR are actually due to users not understanding its full functionality. [Table 4] illustrates examples of submissions in each category, including training (e.g., submitters requesting a way to eliminate/hide old documents cluttering their view, larger font sizes and other visual accessibility tools, and a different format for messages) where methods to address these issues already existed.

Table 4

Examples of submissions falling into each category of resolution

Resolution category

Example

Training/education

 •“Remove old documents: Lots of old document clutter; too much to scroll through to find general content insurance and DL.”

 •“Increase accessibility in Epic: I am a visually impaired user of Epic, and I would really like to see increased accessibility options within the program. Options for a large print, higher contrast themes, and true “dark mode” without the majority of the screen being white. I think this could be a really big opportunity to create more inclusion for people with disabilities within health care.”

 •“In basket messaging: It is hard to track a thread of messages and the flow of conversation in the current format. I think it should be reformatted to either look like gmail or text messaging format so it is easier to follow.”

Significant new build needed

 •“When we receive orders to schedule, they only come with the start date that the order is active, not the expected date that the provider fills in. We then have to go to the nursing team and ask them to pull up the order to look at what the expected date the provider put in. If this could be included on the inbasket message for order information that would be super helpful and reduce waste.”

 •“Need to add lead level to supplemental labs for first OB visit. Minnesota recommends screening all pregnant patients for lead poisoning and ordering the lab depending on results. I'm having to add this on frequently. If you add this, PLEASE default it to a blood draw so we don't get stopped to select between capillary and blood.”

Quick build fix/efficiency fix

 •“Please consider adding the different denominations of Judaism to the patient religion options! It would save chaplains clicks when trying to get more information on Jewish patient's backgrounds!”

 •“The type of ibuprofen that is in the smart set is not what we carry in our clinic. We carry tablets and the smart set only has capsules.”

Isolated incident

 •“Ativan defaults to IM instead of IV as it visually indicates.”

 •“Pain Management choices: Please add nitrous oxide back in choices under pain management interventions.”

Non-EHR issue/bigger than EHR

 •“Scribe to provider communication challenges”

 •Getting duplicate media/scans of the same document

Elimination issue

 •“Care plan for the elderly offers infant nutrition options under CVA—getting rid of this is decluttering!”

 •“Mech ventilation order has inappropriate Qs—‘can they go to tests unmonitored?’ This doesn't make sense in a sedated/intubated pt.”

Need for new EHR function

 •“Expected discharge notes: Under the expected discharge window, there is currently a comment box, it would be helpful to also have a separate box for expected disposition and expected transport as their own text fill in boxes. These could also be connected to the discharge page in the chart.”

 •“Test results for nursing home patients seen by Fairview docs have to be scanned into the chart even though lab is done by Fairview.”

The next largest category of resolution, representing about a quarter of feasible submissions in both 2019 and 2022, was “significant new build needed.” These were medium to large changes to the EHR to improve functionality and ease of use. For example, the Minnesota Department of Health recommends lead screening for pregnant patients; before this initiative, providers had to manually add that order, as it did not appear in supplemental laboratories, and a new build was completed to address this. Quick build fixes, changes that could be made easily in the EHR, comprised 24.0% of submission resolutions in 2019 and 14.7% in 2022 (p < 0.01); examples include adding multiple denominations of Judaism to the religion field and adding the tablet formulation of ibuprofen carried in clinic. Less than 10% of submissions were isolated incidents, non-EHR issues, elimination issues, or requests requiring new Epic functionality. We even received one suggestion about the initiative itself, stating “the Joy in Practice button is too small”! Ultimately, 69.5% of submissions were completed in 2019 and 53.4% in 2022 to date, with over 70% of 2022 submissions projected to be complete.



Time and Clicks Saved

It is difficult to fully measure the impact of this program, given the wide variety of personnel, departments, and types of issues that the EHR changes affect. However, our team calculated time and clicks saved for several illustrative submissions (specifically, significant new build, quick fix, and elimination submissions, with calculations based on metrics from the prior year's use of each tool and the direct modification made), as follows (the underlying arithmetic is sketched in code after the list):

  • “Adult Patient Care Summary (PCS) flowsheet ‘additional’ seems to have no purpose.” Investigating this resulted in finding several flowsheets that are not required and unnecessary. Eliminating these is estimated to save 20,000 clicks per year.

  • “Physical therapy (PT) consult orders have unnecessary questions for a provider who just wants eval/treat.” The core team optimized the EHR build to address this: three physical and occupational therapy (PT/OT) orders were defaulted to “Eval and Treat.” Since these orders are placed more than 150,000 times per year and 3 clicks are now saved per order, this change saves 450,000 clicks per year.

  • “Sore throat and fever (used to place initial labs like strep test) are not on the ambulatory preference list [and require] going to the database.” Addressing this submission saved over 120,000 clicks per year by eliminating the need for ambulatory providers and clinical staff to search the database to select very common diagnoses.

  • “The Hospital follow-up note selection would benefit from a nonexclusive smartlist.” This change resulted in 100,000 clicks per year saved.

  • “[The] Well Child Check (WCC) Milestones Smartlist is confusing.” Addressing this increased use of the associated EHR tool and made it easier to provide the right care to these patients (because recommended pediatric care varies by age and is very detailed, effective tools are very important).
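
The clicks-saved arithmetic referenced above is simple enough to state in code. The sketch below reproduces the PT/OT estimate from the figures given in the list (annual order volume times clicks saved per order); the helper function is illustrative only.

```python
# Illustrative clicks-saved arithmetic for the PT/OT example above.
def annual_clicks_saved(events_per_year: int, clicks_saved_per_event: int) -> int:
    """Estimated annual clicks saved = annual event volume x clicks saved per event."""
    return events_per_year * clicks_saved_per_event

# PT/OT orders defaulted to "Eval and Treat": >150,000 orders/year, 3 clicks each.
print(annual_clicks_saved(150_000, 3))  # 450000, matching the figure above
```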

These changes have not only saved time and clicks but also reduced documentation burden and improved EHR usability. Ultimately, such changes improve patient care; for example, improving the WCC Smartlist helps ensure that children receive the right preventive care at the right time, with direct benefit to patient quality and outcomes.



Discussion

We report on a longitudinal program at M Health Fairview geared toward clinician engagement across an integrated health care delivery system over the program's first two iterations. Our experience showcases increasing engagement of end users, as well as important learnings about EHR usage, including the need for improved training and user competencies around the EHR, which appears to be an important mechanism for reducing frustration. Both of these learnings support our initial hypotheses.

Several studies have designed and tested novel methods of EHR training. At Kaiser Permanente, a 3-day intensive EHR education intervention was tested, including features such as individual coaching and hands-on practice.[17] This resulted in 85 to 98% of physicians reporting improved documentation and efficiency, with approximately 5 minutes of time saved per hour.[17] Other methods of training have also been proposed, such as high-fidelity EHR simulations[18] and Sprint EHR training,[19] a method developed by the University of Colorado that involves a 2-week, role-specific program combining visual, auditory, and kinesthetic learning approaches. The Sprint program resulted in approximately 20 minutes of clinician time saved per day.[19] While somewhat different from these, our health care system has implemented a retraining program, “Accelerate and Control Epic,” to help providers make effective and efficient use of the EHR, which is currently being extended to other clinical roles. This program combines efficiency sessions (available online) with customized one-on-one sessions with clinical informatics staff, based on EHR use metrics (e.g., Epic Signal data and use of common tools such as note templates and order sets) and specific requested areas of focus.

With respect to EHR training during formal medical education, at least one study has demonstrated inadequate EHR training for pregraduate and graduate medical trainees (e.g., medical students and residents), resulting in a lack of familiarity with the EHR system and suboptimal use of time during residency.[20] Furthermore, one study of medical students demonstrated that most spent approximately 70% of their EHR time reviewing laboratories, notes, and orders versus 11% writing notes.[21] Perhaps this lack of hands-on EHR experience in the early stages of training contributes to suboptimal EHR usage down the road. Rotations among institutions with different EHRs, as well as ongoing change within EHRs themselves, may also contribute to these challenges for medical trainees.

As a next step for this initiative, a third round is in process, with the goal of making this an ongoing annual or semiannual program. To reduce the manual labor required in the previous two rounds, the submission process has been fully automated end-to-end in ServiceNow, ensuring submissions are recorded in real time. Furthermore, the core team is creating a dashboard to seamlessly view all submissions, assign reviewers, and interface with the various application build teams. Ideally, these changes will decrease the core team's workload and enable this initiative to be conducted with increased regularity. Additionally, an end user survey will be administered to all submitters to better estimate the impact of the program, along with analyses by site and role to help target further improvements and engagement tactics. The survey will inquire about time saved for users as a result of their submissions being addressed, satisfaction with the process and end result of submissions, and other qualitative feedback regarding the program. A follow-up analysis will also examine the impact of this program and others (e.g., documentation tool changes and a user proficiency program) on efficiency metrics (e.g., Epic Signal end user proficiency data).
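
As a rough sketch of what such an automated intake could look like, the example below polls a ServiceNow instance's standard Table API for newly opened records. The instance URL, table name, and field selection are hypothetical placeholders; beyond noting that the process is end-to-end in ServiceNow, the actual integration is not described here.

```python
# Hypothetical sketch of polling a ServiceNow instance for new submissions via
# the standard Table API; the instance URL, table name, and fields are
# illustrative placeholders, not the actual M Health Fairview integration.
import requests

INSTANCE = "https://example.service-now.com"  # placeholder instance
TABLE = "u_ehr_improvement_submission"        # hypothetical custom table

def fetch_new_submissions(session: requests.Session, since: str) -> list[dict]:
    """Return submissions opened since the given datetime (YYYY-MM-DD HH:MM:SS)."""
    resp = session.get(
        f"{INSTANCE}/api/now/table/{TABLE}",
        params={
            "sysparm_query": f"opened_at>{since}^ORDERBYopened_at",
            "sysparm_fields": "sys_id,opened_at,opened_by,short_description",
            "sysparm_limit": 100,
        },
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Usage: authenticated session, then poll for records since the last run.
# session = requests.Session(); session.auth = ("user", "password")
# for record in fetch_new_submissions(session, "2024-01-01 00:00:00"):
#     print(record["opened_at"], record["short_description"])
```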

We also observed that, to identify the unnecessary features of the EHR (or, conversely, the features that are lacking and require additional optimization), there is no one better to ask than the end users themselves. For example, the large number of duplicate submissions regarding a recently updated OB/Gyn flowsheet demonstrated a lack of communication with staff regarding updated workflows and the confusion this can cause. This initiative demonstrates the importance of understanding EHR tools from the end user perspective and shows how the “small things” can make a significant impact on end users. Organizationally, bridging the gap between technology teams and users builds trust when suggestions are heard and submissions are addressed.

A major strength of the program was its engagement across the organization and with a diversity of stakeholder users. The program reached a wide range of personnel, especially in the second round of submissions, including providers, nurses, administrative staff, ancillary staff, pharmacists, and more. This allowed the team to understand EHR issues beyond provider and nurse perspectives. Additionally, the core team was meticulous in personally corresponding with each submitter via email to inform them of their submission's receipt, progress, and completion. Ideally, this made submitters feel heard and valued throughout the process, encouraging ongoing engagement.

Our results are limited by the fact that follow-up ended at certain points, making the full impact of the program difficult to characterize. For example, we do not have complete follow-up on submissions that were better handled by departments or already included in other projects/upgrades. If a submission was better handled and prioritized by a particular operational department (e.g., a medium to large submission that would be vetted by a departmental IT governance group), the onus was on the submitter to raise the issue through their department's specific prioritization processes, and it is possible that some submitters did not follow through. Additionally, for submissions that were part of other organizational projects, submitters may have had to wait several months to hear back from project teams. Finally, there was potential for selection bias in this sample: while efforts were made to make submitting straightforward, participants who took the initiative to enter submissions may represent a more engaged subset of EHR users (e.g., some submitters sent numerous submissions and were very involved in the process).



Conclusion

Using a crowdsourcing method to engage end users, our longitudinal program was an effective means of reaching a broad range of EHR users across an integrated health care delivery system and identifying a variety of ways to improve the efficacy and efficiency of the EHR. We observed that the largest issue with optimal EHR use is a gap in training/education, a finding supported by the literature. Many submissions were also simple fixes that ultimately streamlined processes, reduced the number of clicks needed to complete necessary tasks and documentation, and improved patient care.



Clinical Relevance Statement

This study demonstrates that a variety of clinical staff, from physicians to nurses to administrative and ancillary staff, experience issues with optimal EHR usage due to lack of adequate training. Hospital systems can leverage this information to improve EHR training and thus help clinicians by reducing burnout and documentation burden and returning joy to their practice of medicine. Hospital systems can also utilize this crowdsourcing method to engage staff in EHR improvement programs.



Multiple-Choice Questions

  1. What were the top two methods of resolution for addressing feasible EHR improvement submissions?

    a. Training/education and significant new build

    b. Training/education and quick build fix

    c. Significant new build and elimination issue

    d. Significant new build and quick build fix

    Correct Answer: The correct answer is option a. The top two methods of resolution for feasible submissions were training/education and significant new build. Training/education was the largest category, accounting for 50.0% of feasible submissions in 2022 and 43.5% across both years. Significant new build was the second largest category, comprising 26.6% of submissions across both years. [Table 4] shows examples of submissions falling under both categories.

  2. Which role and corresponding area of practice sent in the most submissions across both years?

    a. Physicians and internal medicine

    b. Nurses and inpatient setting

    c. Physicians and family medicine

    d. Nurses and outpatient setting

    Correct Answer: The correct answer is option c. Physicians from family medicine represented the most submissions across both years: 59.1% of submitters were physicians, and 55.0% of physician/APP submissions came from family medicine. While a majority of nursing submissions did come from the inpatient setting (77.4% across both years), nurses represented only the second largest category of submitters, at 24.5% across both years.



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

The Institutional Review Board at the University of Minnesota determined this study to be Not Human Subjects Research.



Address for correspondence

Genevieve B. Melton, MD, PhD
Mayo Mail Code 450, 420 Delaware Street Southeast, Minneapolis, MN 55455
United States   

Publication History

Received: 01 January 2023

Accepted: 22 February 2023

Article published online: 10 May 2023

© 2023. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0), permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed, or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany