Appl Clin Inform 2024; 15(02): 212-219
DOI: 10.1055/s-0044-1782228
Research Article

Explaining Variability in Electronic Health Record Effort in Primary Care Ambulatory Encounters

J. Marc Overhage
1   The Overhage Group, Zionsville, Indiana, United States
,
Fares Qeadan
2   Department of Public Health Sciences, Loyola University Chicago, Chicago, Illinois, United States
,
Eun Ho Eunice Choi
3   University of New Mexico School of Medicine, Albuquerque, New Mexico, United States
,
Duncan Vos
4   Division of Epidemiology and Biostatistics, Department of Biomedical Sciences, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, United States
,
5   Department of Biomedical Informatics, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, United States
 

Abstract

Background Electronic health record (EHR) user interface event logs are rapidly providing a new perspective on the value and efficiency that EHR technology brings to health care. Analysis of these detailed usage data has demonstrated their potential to identify EHR and clinical process design factors related to user efficiency, satisfaction, and burnout.

Objective This study aimed to analyze the event log data across 26 different health systems to determine the variability of use of a single vendor's EHR based on four event log metrics, at the individual, practice group, and health system levels.

Methods We obtained de-identified event log data recorded from June 1, 2018, to May 31, 2019, from 26 health systems' primary care physicians. We estimated the variability in total Active EHR Time, Documentation Time, Chart Review Time, and Ordering Time across health systems, practice groups, and individual physicians.

Results In total, 5,444 physicians (Family Medicine: 3,042 and Internal Medicine: 2,422) provided care in a total of 2,285 different practices nested in 26 health systems. Health systems explain 1.29, 3.55, 3.45, and 3.30% of the total variability in Active Time, Documentation Time, Chart Review Time, and Ordering Time, respectively. Practice-level variability was estimated to be 7.96, 13.52, 8.39, and 5.57%, respectively, and individual physicians explained the largest proportion of the variability for those same outcomes (17.09, 27.49, 17.51, and 19.75%, respectively).

Conclusion Variability in physician EHR usage patterns is greatest at the individual physician level and decreases at the practice and health system levels. This suggests that interventions to improve individual users' EHR usage efficiency may have greater potential impact than those directed at the practice or health system level.



Background and Significance

With the widespread adoption of electronic health records (EHRs), data from these systems have rapidly become a significant source of insight for health services researchers studying health care systems' structure, processes, and outcomes.[1] In addition to the large corpus of clinical data that EHRs accumulate, EHR systems also capture detailed data about users' interactions with the system. These data represent user actions such as writing progress notes, prescriptions, and laboratory orders along with their associated timings. Compared with clinical data, which are patient- and encounter-centric, these data are user-centric, typically bound by log-in and log-out events, and affected by a myriad of user, system, and environmental factors.

These data are extremely large given the myriad of user actions that occur for each system event (e.g., writing a prescription, looking up a laboratory result, or ordering a test). Vendor-specific heuristics programs analyze these data to make sense of the multitude of user-centered EHR events. The output of these heuristics programs is event logs that contain vendor-standardized EHR user actions, often with associated timings.[2] Analysis of event logs can compare the variability of events among users, practice groups, and health systems with regard to the timing, efficiency, and appropriateness of interactions with the EHR's user interface.[3] Analyzing the variability of user events among these different practice groups can point to opportunities to improve efficiency, provide more effective user training, improve system design, and create more supportive user environments.[4] [5] Indeed, reducing business process variability is increasingly shown to be an effective strategy to improve quality and reduce costs across many industries.[6] [7] One way this strategy has manifested in the health care domain is the development of clinical pathways, which attempt to reduce variability by caring for patients in well-defined categories in a standardized and evidence-based manner.[8] [9] [10]

Historically, elucidating clinician workflow efficiency required time- and resource-intensive data collection methods, such as time–motion studies, surveys, or interviews.[11] [12] These approaches were often labor-intensive, limited in size, and not easily scalable to comparisons between large user groups, such as at the practice and institutional levels.[13] [14] [15] Analysis of event logs at large scale is now possible and, despite its limitations, offers another approach to studying user efficiency at both the individual level and larger scales.

Like in the business domain, there are many other areas in health care systems where improving process standardization and reducing process variability have shown improved quality and/or reduced costs.[16] [17] [18] [19] [20] Although event logs are perhaps best suited to identifying patterns, time estimates of clinical events can provide insight into timing issues in EHR tasks. Studies that have measured EHR time use find significant variability (high standard deviations) in the length of time physicians spend using the EHR.[21] [22] [23] [24] [25] [26] [27] User event log data can also support efforts to study and improve the work life of those who provide health care by targeting EHR tasks that are burdensome and potentially associated with clinician burnout.[28] [29] [30] Although physicians often cite the user experience as a primary source of frustration, other factors also play significant roles, and recognizing these key influences can help create a deeper understanding of the challenges in developing a better EHR user experience.[31]

In addition to EHR design, a myriad of other external factors also impact EHR usability (see [Fig. 1]). These include patient venue, user specialty/role, patient acuity/complexity, staff support models, server hardware and networking environment, and institution size.[15] Tutty et al provide a framework for analyzing the factors contributing to the EHR user experience and, likely, variability in EHR use.[31]

Fig. 1 Factors potentially impacting a physician's EHR use time. EHR, electronic health record; HIPAA, Health Insurance Portability and Accountability Act.

A few studies have evaluated how these factors contribute to physician EHR time use. Longhurst et al used a survey of EHR experience scores to measure the contributions to variability and found that user factors accounted for 50.6% of the variability, while the EHR software (19.8%), organization (15.1%), and specialty (14.4%) accounted for similar amounts.[32] Melnick et al examined variability by EHR software and organization but did not find differences.[24] In one study, some physician characteristics were assessed for impact on EHR time: physician specialty, degree of full-time practice, and participation in the National Committee for Quality Assurance were all significant predictors of total physician EHR time per day.[26]

In a recent study, Cross et al showed that the amount of variability in EHR (all Epic) use was greatest among individual physicians compared with variability measured at the specialty and institutional levels.[33] They also showed an association between lower variability in organizations' EHR user behavior and higher physician same-day visit closure rates, a quality indicator. We undertook a similar analysis of a national, multi-institutional sample of EHR event log data from another vendor's EHR system (Cerner) to quantitatively explore the sources of variability in EHR time and to identify at which level (i.e., user, practice group, or institution) that variability is greatest, and therefore potentially most fruitful as a focus for improvement.



Objective

This study aimed to determine the variability in documentation time (per-encounter averages) among primary care physicians, practice groups, and health systems across 26 different health systems using the same vendor's EHR system.



Methods

Study Design and Participants

This study is an observational retrospective study of physician EHR time use utilizing Cerner (now Oracle Health) Millennium event log data for health systems participating in Cerner's LightsON data platform. Cerner Corporation provided de-identified data at no cost. The institutional review board at the Western Michigan University Homer Stryker M.D. School of Medicine determined this study to be exempt from Institutional Review Board (IRB) review.

The inclusion criteria included care delivered in the ambulatory setting by Internal Medicine and Family Medicine specialists from June 1, 2018, to May 31, 2019. To ensure an adequate sample size, we included only practices with at least five physicians and at least 500 patient encounters. We specifically included only data from physicians, not nurse practitioners, physician assistants, or other advanced practice providers.



Outcome Variables

In addition to the total Active EHR Time for each patient encounter, we analyzed Ordering Time, Documentation Time, and Chart Review Time as outcomes of interest because they compose the majority of physician EHR time. Details of the algorithm used to calculate these times have been previously described.[22] Active Time is the total time the physician used the EHR to care for the patient. Ordering Time includes time spent writing orders. Documentation Time is the time spent recording documentation and creating notes, and Chart Review Time includes work discovering and reviewing clinical results, observations, and notes in the EHR.

To accommodate outliers and the skewed data distribution for outcome variables, we report all times as per-encounter medians rather than means. Outcome variables were log-transformed to meet the normality assumption for statistical modeling.
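As a concrete sketch of this preprocessing step in pandas (column names are illustrative, not the actual LightsON schema; `log1p` is one common way to handle zero-length encounters, though the paper does not specify the exact transform used):

```python
import numpy as np
import pandas as pd

# Hypothetical encounter-level rows; names are illustrative only.
events = pd.DataFrame({
    "physician_id": ["a", "a", "b", "b", "b"],
    "active_minutes": [2.4, 11.1, 33.3, 0.5, 120.0],
})

# Per-physician median of per-encounter times accommodates the skew
# better than the mean would.
medians = events.groupby("physician_id")["active_minutes"].median()

# log1p handles zero-length times before modeling on the log scale.
events["log_active"] = np.log1p(events["active_minutes"])
```

The median is robust to the long right tail (e.g., the 120-minute encounter above barely moves physician "b"'s summary), which is why it is preferred here over the mean.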



Explanatory Variables

The explanatory variables we used included health system, practice, and physician, which are hierarchically nested: practices are nested within health systems, and physicians are nested within practices. For example, the University of New Mexico has several practices such as the University Hospital on Lomas Avenue and the Women's Clinic on Eubank Street. While physicians may provide care at more than one practice, most primary care physicians deliver the majority of their care at one practice. We did not include specialty or patient as an explanatory variable because the models became too large to execute.



Statistical Modeling

For statistical modeling, we employed a three-way nested model to effectively account for the hierarchical and clustering structure of the data. [Fig. 2] illustrates how we nested physician groups in the analysis. The use of this model was critical for accurately capturing the variability in EHR usage patterns across different organizational levels. We used the SAS procedure "VARCOMP" to estimate the variability in total Active Time, Documentation Time, Chart Review Time, and Ordering Time across health systems, practice groups, and individual clinicians, employing the restricted maximum likelihood approach.

Fig. 2 Physicians are nested in practices and practices are nested within the health systems.
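A nested variance-components fit of this kind can be sketched with Python's statsmodels as a rough analogue of SAS PROC VARCOMP; the simulated hierarchy and effect sizes below are illustrative only, not study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a small nested hierarchy (systems > practices > physicians),
# analogous in structure -- not in scale -- to the study's data.
rows = []
for s in range(6):
    sys_eff = rng.normal(0, 0.3)
    for p in range(4):
        prac_eff = rng.normal(0, 0.5)
        for d in range(5):
            doc_eff = rng.normal(0, 0.8)
            for _ in range(20):
                rows.append({
                    "system": f"s{s}",
                    "practice": f"s{s}p{p}",
                    "physician": f"s{s}p{p}d{d}",
                    "log_time": 2.0 + sys_eff + prac_eff + doc_eff
                                + rng.normal(0, 1.5),
                })
df = pd.DataFrame(rows)

# REML fit with nested random intercepts: practice within system and
# physician within practice (system itself is the grouping factor).
model = smf.mixedlm(
    "log_time ~ 1", df, groups="system",
    vc_formula={"practice": "0 + C(practice)",
                "physician": "0 + C(physician)"},
)
fit = model.fit(reml=True)
print(fit.cov_re)   # system-level variance
print(fit.vcomp)    # practice- and physician-level variances
print(fit.scale)    # residual (error) variance
```

Dividing each estimated component by the sum of all components (including the residual `fit.scale`) yields percentage contributions in the style the study reports.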

A key aspect of our model was the incorporation of both fixed and random effects. In this context, “Health System” and “Practice” were treated as fixed effects. This was based on the premise that these entities have consistent, identifiable impacts on EHR usage patterns across the entire dataset. We assumed these factors influenced EHR usage times in a predictable and uniform way across all observations. Furthermore, within this framework, we treated “Practice” nested within “Health System,” and “Physician” nested within “Practice” as random effects. This approach was adopted to capture the inherent variability in individual physicians' EHR usage patterns, acknowledging that such variability is not uniform and can be influenced by a multitude of individual-specific factors, including but not limited to personal efficiency, familiarity with the EHR system, and the complexity of patient cases handled. This nested model structure was pivotal for dissecting the overall variability into its contributing sources at different levels—the individual physician, the practice group they are part of, and the overarching health system. It enabled us to quantify how much of the variation in EHR usage can be attributed to each of these levels, offering a comprehensive view of the factors influencing EHR usage patterns.

We expressed the variability components as percentages, using the sum of the variability components across all sources as the denominator so that each source is expressed as a percentage of the total variability. We used SAS Studio version 9.04.01M3P062415 running on a Hewlett Packard ProLiant DL380 Gen9 Enterprise Server with an Intel Xeon E5-2699 v4 CPU (22 physical cores at 2.2 GHz, 44 logical cores), 128 GB memory, and 4 TB disk storage.
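The percentage calculation can be reproduced directly from the Active Time column of Table 2:

```python
# Variance component estimates for Active Time, as reported in Table 2.
components = {
    "health_system": 0.04168,
    "practice": 0.25646,
    "physician": 0.55022,
    "error": 2.37171,
}

# Each source as a percentage of the total variability.
total = sum(components.values())
percent = {k: 100 * v / total for k, v in components.items()}
# percent["health_system"] -> 1.29, percent["practice"] -> 7.96,
# percent["physician"] -> 17.09, percent["error"] -> 73.65 (rounded)
```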

The average variability component estimates were calculated across 50 bootstrapped samples of 100,000 encounters for each of the four primary outcomes. We chose 50 samples because this allowed our SAS analysis to run in a reasonable amount of time on each sample (∼4–5 h/sample). Additionally, we ensured the robustness of our model fits by closely monitoring the convergence of the algorithm used in the PROC VARCOMP procedure and by assessing the estimated variance components, which did not exhibit problematic indicators such as zero or negative variance components or excessively large standard errors. Finally, we used standard errors to estimate the contribution to variability using methods from Obinna et al.[34]
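The bootstrap averaging can be sketched as follows; `estimator` here is a hypothetical stand-in for the full variance-components fit, which at ∼4–5 h per sample cannot be run inline (the toy stand-in below just computes a sample variance):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_estimates(encounters, estimator, n_samples=50,
                        sample_size=100_000):
    """Average an estimator over bootstrap resamples of encounters."""
    n = len(encounters)
    out = []
    for _ in range(n_samples):
        # Resample encounter rows with replacement.
        idx = rng.integers(0, n, size=min(sample_size, n))
        out.append(estimator(encounters[idx]))
    return np.mean(out, axis=0)

# Toy example: average the sample variance over 50 resamples.
data = rng.normal(0, 2, size=10_000)   # true variance = 4
avg_var = bootstrap_estimates(data, np.var, n_samples=50,
                              sample_size=5_000)
```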



Results

Descriptive Statistics

In total, 5,444 physicians (Family Medicine: 3,042 and Internal Medicine: 2,422) provided care in 2,285 different practices nested in 26 health systems. The distribution of practice size across the whole sample and in each of the health systems is displayed in [Fig. 3]. The Active Time was 11.10 minutes (median) or 24.28 ± 34.61 minutes (mean ± standard deviation). The overall summary statistics for the four time variables are listed in [Table 1].

Fig. 3 Distribution of practice size in each health system.
Table 1

Summary statistics for the four time variables (in minutes) independent of practice group or health system

Variable (minutes) | N | Minimum | Lower quartile | Median | Upper quartile | Maximum
Active Time | 6,601,683 | 0 | 2.44 | 11.10 | 33.30 | 1,754.47
Documentation Time | 4,919,286 | 0 | 0.30 | 3.21 | 12.59 | 1,131.56
Chart Review Time | 6,340,013 | 0 | 0.57 | 2.81 | 9.60 | 986.87
Ordering Time | 4,668,466 | 0 | 0.55 | 2.56 | 7.35 | 891.54



Contributions to Variability

The variability estimates at the health system, practice group, and individual physician levels are all listed in [Table 2]. The Health System explains 1.29, 3.55, 3.45, and 3.30% of the total variability in Active Time, Documentation Time, Chart Review Time, and Ordering Time, respectively. Higher proportions of the total variability for those outcomes were due to the Practice within each health system (7.96, 13.52, 8.39, and 5.57%, respectively). The individual physician within each practice explained the largest proportion of the variability (17.09, 27.49, 17.51, and 19.75%, respectively).

Table 2

Sources of variability (rows) explored in the model, with the variability estimate and its percentage of the total variability for each outcome (columns)

Source of variability | Active Time | Documentation Time | Chart Review Time | Ordering Time
Var (Health System) | 0.04168 (1.29%) | 0.21277 (3.55%) | 0.126448 (3.45%) | 0.12350 (3.30%)
Var (Practice in Health System) | 0.25646 (7.96%) | 0.81028 (13.52%) | 0.307115 (8.39%) | 0.20841 (5.57%)
Var (Physician in Practice) | 0.55022 (17.09%) | 1.64781 (27.49%) | 0.64066 (17.51%) | 0.73968 (19.75%)
Var (Error) | 2.37171 (73.65%) | 3.32307 (55.44%) | 2.58472 (70.64%) | 2.67336 (71.39%)



Variability among Electronic Health Record Time Components

In the analysis of EHR time components, Documentation Time showed the greatest variability at the individual physician level (27.49%), followed by Ordering Time (19.75%), Chart Review Time (17.51%), and Active Time (17.09%). The variability in Documentation Time was substantially higher than in the other components, suggesting distinct patterns in how this aspect of EHR usage varies across individuals and practice settings. Notably, the total variability in Active Time, which includes documentation, ordering, and chart review, does not equal a simple aggregation of the variabilities of these components, indicating a complex interplay among them.



Discussion

Across all four time variables, the calculated variability is greatest among individual physicians, while the smallest overall variability is at the health system level. This finding suggests that system-level differences in EHR software and hardware environments do not have large effects on EHR usage patterns. Although practice factors had more influence than health system factors, they also appeared modest compared with the variability among individual physicians. In fact, health system plus practice factors together explained only 8.87% of the variability in Ordering Time, while individual physicians accounted for a much larger proportion (19.75%).

In our analysis, we observed that documentation time showed the most variability compared with ordering and chart review time. This finding is particularly significant in the context of primary care practice, where the variability in documentation demands can be attributed to the diversity and complexity of cases handled by different physicians. In primary care, documentation activities range from brief visit notes to comprehensive chronic disease management, which could explain the observed variability.

Furthermore, it is essential to understand the mathematical relationship between active time and its components (Documentation, Ordering, and Chart Review Time). While active time encompasses all these components, its variability is not merely the sum of their individual variabilities. This is because these components are often interdependent and can correlate with each other, influencing the overall variability in active time. For example, a physician who spends a substantial amount of time on documentation might consequently spend less time on ordering or chart review. This dynamic interaction can result in less variability in total active time than in any individual component. This insight highlights the complex nature of EHR usage patterns and underscores the necessity for nuanced, personalized strategies in enhancing EHR efficiency.
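The point about interdependent components follows from the identity Var(X + Y) = Var(X) + Var(Y) + 2·Cov(X, Y); when the covariance is negative, the total varies less than the components' variances alone would suggest. A small numeric illustration (made-up numbers, not study data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Negatively correlated components: here, more documentation time
# tends to come with less ordering time (illustrative only).
doc = rng.normal(10, 3, size=100_000)
order = 8 - 0.5 * (doc - 10) + rng.normal(0, 1, size=100_000)

total = doc + order
# Var(doc + order) = Var(doc) + Var(order) + 2*Cov(doc, order)
lhs = np.var(total)
rhs = np.var(doc) + np.var(order) + 2 * np.cov(doc, order, ddof=0)[0, 1]

# Because Cov(doc, order) < 0, lhs is smaller than the naive sum
# of the two component variances.
```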

Among individual physicians, Documentation Time variability was relatively greater than the rest. It may be that the EHR user interface supports too many different methods or pathways to accomplish the same documentation tasks. A standard approach to improvement in quality is to reduce process variability. Similarly, it may be beneficial for EHR designers to consider streamlining the methods for documentation, making it easier for a wide variety of users to perform these tasks in a more consistent and efficient manner where appropriate.

Like our study, Cross et al examined how organizational differences were associated with the variability (termed "consistency" in their study) of EHR behavior among primary care physicians.[33] Similar to our findings, Cross et al noted the most variability (least consistency) in physician EHR behavior at the individual physician level.

Our study is also consistent with the findings of a mixed-methods study by Cohen et al, which analyzed the variability in completion of five clinical documentation categories as well as physicians' attitudes toward the variability found.[35] The Cohen et al study also found the greatest variability at the individual physician level, although it used a completely different method of analysis to draw that conclusion. The study found significant variability in the completion of several documentation categories (e.g., review of systems, social history, and problem list). They also reported that some interview respondents felt the variability in documentation completeness interfered with the quality of care and their experience of documenting. Improving documentation efficiency was one of the three themes identified by the American Medical Informatics Association's "25 × 5 Task Force," which "… aims to reduce clinician documentation burden to 25% of the current state."[36] [37]

Similar to our study, which analyzed actual EHR time measurements from 26 health systems of varying size, Longhurst et al analyzed survey responses from "over 72,000 physicians, nurses, advanced practice professionals, and residents across 156 provider organizations."[32] They used self-reported EHR experience survey data to measure the contributions to EHR use variability and found that user factors accounted for 50.6% of the variability, while the EHR software (19.8%), organization (15.1%), and specialty (14.4%) accounted for similar amounts.[32] Although their methodology was completely different, their variability analysis reinforces our conclusion that EHR use variability is greatest at the individual user level.

This dataset included only primary care physicians in the ambulatory setting and was therefore likely more homogeneous than an analysis performed on data from all physician specialties. A more heterogeneous physician group would likely show more diverse EHR use than primary care physicians alone. Therefore, the variability contributed by individual physician EHR usage patterns in this study may be an underestimate for mixed physician specialties and mixed patient populations.

The median Active Time observed is similar to that reported in a 2018 study also analyzing Cerner Millennium data.[22] There, Overhage and McCallie found an internal medicine mean Active Time of 1,099 (3,255) and a family medicine mean of 952 (2,538), both relatively similar to the combined median calculation in the present study, 1,457 (2,077).[22] Differences from the Overhage and McCallie study may be explained by the greater number of physicians in that study, a different sampling of institutions, and a somewhat different time period than this analysis.

Just as clinical care rules are designed to reduce variability in the care patients receive, and thereby reduce variability in the quality of patient outcomes,[8] [9] [10] variability in EHR use may prove to be an important reduction target for improving the overall quality of care.

EHR event log analysis overcomes several biases of voluntary reporting and limitations of traditional methods of investigating clinician workflow. In addition, event log-based analyses can occur at a scale at which more traditional time–motion analyses are not feasible.

Limitations

There are several important limitations of this study, reflected by the size of the error terms listed in [Table 2], which suggests there are other unidentified sources of variability beyond the levels analyzed here. For example, the database contained no patient-level data, so it was not possible to attribute any of the observed variability to differences in patient populations. In addition, we did not have physician-level data on which physicians worked at academic centers or were trainees, who likely use the EHR differently and account for some of the variability at the individual and group levels. Future analyses should determine the extent to which note length, templated text, clinic processes/support, dictated text, text macros/"dot phrases," copy/pasted text, and physician role account for the variation at the individual level.

The data came from institutions using the same U.S. vendor's (Cerner) EHR software which means the analysis did not include variability due to differing EHR vendor software version differences. However, these systems do represent a reasonable mix of academic centers and communities from across the United States.[38]

The data were limited to primary care physicians practicing in the ambulatory setting and to a relatively small number of health systems. We had no data on physicians' schedules, and the Cerner definition of "encounter" is based on the number of notes signed per day rather than actual scheduling data.

We did not remove outliers prior to the analysis, and the time data were skewed with some extreme outliers, perhaps due to Citrix server disconnections or PC or other system malfunctions. To accommodate the skew, we used medians rather than means, as we had no good reference for setting an outlier cutoff value.

Cerner's "LightsON" event log uses a set of rules to record the time for each defined task as accurately as possible. For example, if a physician is typing a patient note and pauses without touching the mouse, the system times out after approximately 30 seconds of inactivity; when typing resumes, the note-writing timer restarts. We presume the system keeps incrementing the note time in a similar manner each time the note is accessed prior to completion/sign-off/finalization, and that it uses a similar approach for writing prescriptions, completing orders, and other tasks. Cerner's "LightsON" database design details, definitions, and heuristics are not published, and therefore we cannot analyze or attest to their accuracy and limitations. In addition, the size and complexity of the event logs are substantial and create practical analysis challenges.
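Our reading of this inactivity heuristic, capping the gap between consecutive user events, can be sketched as follows; this is an assumption about LightsON's behavior, not a published Cerner algorithm, and `active_seconds` is a hypothetical helper:

```python
def active_seconds(event_times, idle_timeout=30.0):
    """Sum gaps between consecutive user events, capping each gap at
    the idle timeout so long pauses stop accruing active time."""
    total = 0.0
    for prev, curr in zip(event_times, event_times[1:]):
        total += min(curr - prev, idle_timeout)
    return total

# Events at 0 s and 10 s, then a 300 s pause, then one more at 315 s:
# 10 + 30 (capped) + 5 = 45 active seconds.
print(active_seconds([0, 10, 310, 315]))
```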

The organizational resources required to build queries and analyze extremely large datasets (on the order of 50–100 GB or more) are substantial, and event log analyses can require high-performance computing infrastructure for timely results. Not all statistical analysis tools can efficiently handle datasets this large. Performing these calculations on one large dataset in the cloud, taking advantage of massively parallel computation, would be a much more efficient (though more expensive) approach than what was available for this project. It would allow many more analyses to be performed faster, with finer granularity of detail, and would facilitate the pursuit of larger scientific questions.



Conclusion

Large event log datasets from multiple health systems can be used to quantify the process variability of common EHR tasks, such as note completion, at various levels of a health system (the health system as a whole, the practice group, and the individual physician). This analysis shows that, in the primary care domain, the largest variability in the note completion task resides at the individual level, suggesting that efforts to reduce variability at the individual level, rather than at the health system or practice group level, may be the most fruitful.



Clinical Relevance Statement

This analysis suggests that the greatest amount of variability in the use of an EHR system is at the individual level, and that interventions or process improvement efforts focused on individual users, rather than system- or practice group-level interventions, may be the most fruitful for improving quality and usability.



Multiple Choice Questions

  1. At what institutional level is the variability in the use of EHRs the greatest?

    • the individual physician level.

    • the practice group level.

    • the institutional level.

    • depends on the patient population.

    Correct answer: a. the individual physician level

  2. Based on the results of the study, what kind of activity contributed most to the variability of EHR use at the individual physician level?

    • Active time

    • Documentation time

    • Chart review time

    • Ordering time

    Correct answer: b. Documentation time



Conflict of Interest

None declared.

Acknowledgments

The authors wish to thank Darrell Johnson from Cerner Corporation for making the physician event log data available to the investigator team at no cost and Shamsi Berry, PhD, Theresa McGoff, Kirsten Hickok, and Austin Brubaker, for their help in completing the statistical analysis.

Protection of Human Subjects

This study was determined to be exempt research by the Western Michigan University Homer Stryker M.D. School of Medicine Institutional Review Board.


  • References

  • 1 Adler-Milstein J, Adelman JS, Tai-Seale M, Patel VL, Dymek C. EHR audit logs: a new goldmine for health services research? J Biomed Inform 2020; 101: 103343
  • 2 Cohen GR, Boi J, Johnson C, Brown L, Patel V. Measuring time clinicians spend using EHRs in the inpatient setting: a national, mixed-methods study. J Am Med Inform Assoc 2021; 28 (08) 1676-1682
  • 3 Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc 2022; 30 (01) 144-154
  • 4 Baxter SL, Apathy NC, Cross DA, Sinsky C, Hribar MR. Measures of electronic health record use in outpatient settings across vendors. J Am Med Inform Assoc 2021; 28 (05) 955-959
  • 5 Sinsky CA, Rule A, Cohen G. et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (04) 639-643
  • 6 Münstermann B, Eckhardt A, Weitzel T. The performance impact of business process standardization. Bus Process Manag J 2010; 16 (01) 29-56
  • 7 Goel K, Bandara W, Gable G. A typology of business process standardization strategies. Bus Inf Syst Eng 2021; 63 (06) 621-635
  • 8 Lavelle J, Schast A, Keren R. Standardizing care processes and improving quality using pathways and continuous quality improvement. Curr Treat Options Pediatr 2015; 1: 347-358
  • 9 Kurtin P, Stucky E. Standardize to excellence: improving the quality and safety of care with clinical pathways. Pediatr Clin North Am 2009; 56 (04) 893-904
  • 10 Téoule P, Birgin E, Mertens C. et al. Clinical pathways for oncological gastrectomy: Are they a suitable instrument for process standardization to improve process and outcome quality for patients undergoing gastrectomy? A retrospective cohort study. Cancers (Basel) 2020; 12 (02) 434
  • 11 de Hoop T, Neumuth T. Evaluating electronic health record limitations and time expenditure in a German medical center. Appl Clin Inform 2021; 12 (05) 1082-1090
  • 12 Moy AJ, Aaron L, Cato KD. et al. Characterizing multitasking and workflow fragmentation in electronic health records among emergency department clinicians: Using time-motion data to understand documentation burden. Appl Clin Inform 2021; 12 (05) 1002-1013
  • 13 Tai-Seale M, McGuire TG, Zhang W. Time allocation in primary care office visits. Health Serv Res 2007; 42 (05) 1871-1894
  • 14 Tai-Seale M, McGuire T, Colenda C, Rosen D, Cook MA. Two-minute mental health care for elderly patients: inside primary care visits. J Am Geriatr Soc 2007; 55 (12) 1903-1911
  • 15 Zheng K, Guo MH, Hanauer DA. Using the time and motion method to study clinical work processes and workflow: methodological inconsistencies and a call for standardized research. J Am Med Inform Assoc 2011; 18 (05) 704-710
  • 16 Heinen Y, Wolff G, Klein K. et al. Process standardization in high-risk coronary interventions is associated with quality of care measures. J Invasive Cardiol 2022; 34 (10) E743-E749
  • 17 Jaulin F, Lopes T, Martin F. Standardised handover process with checklist improves quality and safety of care in the postanaesthesia care unit: the Postanaesthesia Team Handover trial. Br J Anaesth 2021; 127 (06) 962-970
  • 18 Philips K, Zhou R, Lee DS. et al. Implementation of a standardized approach to improve the pediatric discharge medication process. Pediatrics 2021; 147 (02) e20192711
  • 19 Fontánez-Nieves TD, Frost M, Anday E, Davis D, Cooperberg D, Carey AJ. Prevention of unplanned extubations in neonates through process standardization. J Perinatol 2016; 36 (06) 469-473
  • 20 Rozich JD, Howard RJ, Justeson JM, Macken PD, Lindsay ME, Resar RK. Standardization as a mechanism to improve safety in health care. Jt Comm J Qual Saf 2004; 30 (01) 5-14
  • 21 Arndt BG, Beasley JW, Watkinson MD. et al. Tethered to the EHR: Primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (05) 419-426
  • 22 Overhage JM, McCallie Jr D. Physician time spent using the electronic health record during outpatient encounters: a descriptive study. Ann Intern Med 2020; 172 (03) 169-174
  • 23 Overhage JM, Johnson KB. Pediatrician electronic health record time use for outpatient encounters. Pediatrics 2020; 146 (06) e20194017
  • 24 Melnick ER, Ong SY, Fong A. et al. Characterizing physician EHR use with vendor derived data: a feasibility study and cross-sectional analysis. J Am Med Inform Assoc 2021; 28 (07) 1383-1392
  • 25 Rotenstein LS, Holmgren AJ, Downing NL, Longhurst CA, Bates DW. Differences in clinician electronic health record use across adult and pediatric primary care specialties. JAMA Netw Open 2021; 4 (07) e2116375
  • 26 Tai-Seale M, Olson CW, Li J. et al. Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Aff (Millwood) 2017; 36 (04) 655-662
  • 27 Hron JD, Lourie E. Have you got the time? Challenges using vendor electronic health record metrics of provider efficiency. J Am Med Inform Assoc 2020; 27 (04) 644-646
  • 28 Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014; 12 (06) 573-576
  • 29 Melnyk B. National academy of medicine's action collaborative on clinician well-being and resilience: a solution-focused strategy is designed to curtail the burnout epidemic. Am Nurse Today 2019; 14 (04) 61-64
  • 30 Tai-Seale M, Dillon EC, Yang Y. et al. Physicians' well-being linked to in-basket messages generated by algorithms in electronic health records. Health Aff (Millwood) 2019; 38 (07) 1073-1078
  • 31 Tutty MA, Carlasare LE, Lloyd S, Sinsky CA. The complex case of EHRs: examining the factors impacting the EHR user experience. J Am Med Inform Assoc 2019; 26 (07) 673-677
  • 32 Longhurst CA, Davis T, Maneker A. et al; Arch Collaborative. Local investment in training drives electronic health record user satisfaction. Appl Clin Inform 2019; 10 (02) 331-335
  • 33 Cross DA, Holmgren AJ, Apathy NC. The role of organizations in shaping physician use of electronic health records. Health Serv Res 2024; 59 (01) e14203
  • 34 Obinna M, Ogoke U, Nduka E. Automation of balanced nested design; NeDPy. Int J Stat Appl 2020; 10 (01) 17-23
  • 35 Cohen GR, Friedman CP, Ryan AM, Richardson CR, Adler-Milstein J. Variation in physicians' electronic health record documentation and potential patient harm from that variation. J Gen Intern Med 2019; 34 (11) 2355-2367
  • 36 Levy DR, Sloss EA, Chartash D. et al. Reflections on the documentation burden reduction amia plenary session through the lens of 25 × 5. Appl Clin Inform 2023; 14 (01) 11-15
  • 37 Hobensack M, Levy DR, Cato K. et al. 25 × 5 symposium to reduce documentation burden: report-out and call for action. Appl Clin Inform 2022; 13 (02) 439-446
  • 38 DeShazo JP, Hoffman MA. A comparison of a multistate inpatient EHR database to the HCUP Nationwide Inpatient Sample. BMC Health Serv Res 2015; 15: 384

Address for correspondence

Philip J. Kroth, MD, MSc
Department of Biomedical Informatics, Western Michigan University Homer Stryker M.D. School of Medicine
300 Portage Street, Kalamazoo, MI 49007
United States   

Publication History

Received: 02 May 2023

Accepted: 30 January 2024

Article published online:
20 March 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

  • References

  • 1 Adler-Milstein J, Adelman JS, Tai-Seale M, Patel VL, Dymek C. EHR audit logs: a new goldmine for health services research? J Biomed Inform 2020; 101: 103343
  • 2 Cohen GR, Boi J, Johnson C, Brown L, Patel V. Measuring time clinicians spend using EHRs in the inpatient setting: a national, mixed-methods study. J Am Med Inform Assoc 2021; 28 (08) 1676-1682
  • 3 Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc 2022; 30 (01) 144-154
  • 4 Baxter SL, Apathy NC, Cross DA, Sinsky C, Hribar MR. Measures of electronic health record use in outpatient settings across vendors. J Am Med Inform Assoc 2021; 28 (05) 955-959
  • 5 Sinsky CA, Rule A, Cohen G. et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (04) 639-643
  • 6 Munstermann B, Eckhardt A, Weitzel T. The performance impact of business process standardization. Bus Process Manag J 2010; 16 (01) 29-56
  • 7 Goel K, Bandara W, Gable G. A typology of business process standardization strategies. Bus Inf Syst Eng 2021; 63 (06) 621-635
  • 8 Lavelle J, Schast A, Keren R. Standardizing care processes and improving quality using pathways and continuous quality improvement. Curr Treat Options Pediatr 2015; 1: 347-358
  • 9 Kurtin P, Stucky E. Standardize to excellence: improving the quality and safety of care with clinical pathways. Pediatr Clin North Am 2009; 56 (04) 893-904
  • 10 Téoule P, Birgin E, Mertens C. et al. Clinical pathways for oncological gastrectomy: Are they a suitable instrument for process standardization to improve process and outcome quality for patients undergoing gastrectomy? A retrospective cohort study. Cancers (Basel) 2020; 12 (02) 434
  • 11 de Hoop T, Neumuth T. Evaluating electronic health record limitations and time expenditure in a German medical center. Appl Clin Inform 2021; 12 (05) 1082-1090
  • 12 Moy AJ, Aaron L, Cato KD. et al. Characterizing multitasking and workflow fragmentation in electronic health records among emergency department clinicians: Using time-motion data to understand documentation burden. Appl Clin Inform 2021; 12 (05) 1002-1013
  • 13 Tai-Seale M, McGuire TG, Zhang W. Time allocation in primary care office visits. Health Serv Res 2007; 42 (05) 1871-1894
  • 14 Tai-Seale M, McGuire T, Colenda C, Rosen D, Cook MA. Two-minute mental health care for elderly patients: inside primary care visits. J Am Geriatr Soc 2007; 55 (12) 1903-1911
  • 15 Zheng K, Guo MH, Hanauer DA. Using the time and motion method to study clinical work processes and workflow: methodological inconsistencies and a call for standardized research. J Am Med Inform Assoc 2011; 18 (05) 704-710
  • 16 Heinen Y, Wolff G, Klein K. et al. Process standardization in high-risk coronary interventions is associated with quality of care measures. J Invasive Cardiol 2022; 34 (10) E743-E749
  • 17 Jaulin F, Lopes T, Martin F. Standardised handover process with checklist improves quality and safety of care in the postanaesthesia care unit: the Postanaesthesia Team Handover trial. Br J Anaesth 2021; 127 (06) 962-970
  • 18 Philips K, Zhou R, Lee DS. et al. Implementation of a standardized approach to improve the pediatric discharge medication process. Pediatrics 2021; 147 (02) e20192711
  • 19 Fontánez-Nieves TD, Frost M, Anday E, Davis D, Cooperberg D, Carey AJ. Prevention of unplanned extubations in neonates through process standardization. J Perinatol 2016; 36 (06) 469-473
  • 20 Rozich JD, Howard RJ, Justeson JM, Macken PD, Lindsay ME, Resar RK. Standardization as a mechanism to improve safety in health care. Jt Comm J Qual Saf 2004; 30 (01) 5-14
  • 21 Arndt BG, Beasley JW, Watkinson MD. et al. Tethered to the EHR: Primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (05) 419-426
  • 22 Overhage JM, McCallie Jr D. Physician time spent using the electronic health record during outpatient encounters: a descriptive study. Ann Intern Med 2020; 172 (03) 169-174
  • 23 Overhage JM, Johnson KB. Pediatrician electronic health record time use for outpatient encounters. Pediatrics 2020; 146 (06) e20194017
  • 24 Melnick ER, Ong SY, Fong A. et al. Characterizing physician EHR use with vendor derived data: a feasibility study and cross-sectional analysis. J Am Med Inform Assoc 2021; 28 (07) 1383-1392
  • 25 Rotenstein LS, Holmgren AJ, Downing NL, Longhurst CA, Bates DW. Differences in clinician electronic health record use across adult and pediatric primary care specialties. JAMA Netw Open 2021; 4 (07) e2116375
  • 26 Tai-Seale M, Olson CW, Li J. et al. Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Aff (Millwood) 2017; 36 (04) 655-662
  • 27 Hron JD, Lourie E. Have you got the time? Challenges using vendor electronic health record metrics of provider efficiency. J Am Med Inform Assoc 2020; 27 (04) 644-646
  • 28 Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014; 12 (06) 573-576
  • 29 Melnyk B. National Academy of Medicine's Action Collaborative on Clinician Well-Being and Resilience: a solution-focused strategy is designed to curtail the burnout epidemic. Am Nurse Today 2019; 14 (04) 61-64
  • 30 Tai-Seale M, Dillon EC, Yang Y. et al. Physicians' well-being linked to in-basket messages generated by algorithms in electronic health records. Health Aff (Millwood) 2019; 38 (07) 1073-1078
  • 31 Tutty MA, Carlasare LE, Lloyd S, Sinsky CA. The complex case of EHRs: examining the factors impacting the EHR user experience. J Am Med Inform Assoc 2019; 26 (07) 673-677
  • 32 Longhurst CA, Davis T, Maneker A. et al; Arch Collaborative. Local investment in training drives electronic health record user satisfaction. Appl Clin Inform 2019; 10 (02) 331-335
  • 33 Cross DA, Holmgren AJ, Apathy NC. The role of organizations in shaping physician use of electronic health records. Health Serv Res 2024; 59 (01) e14203
  • 34 Obinna M, Ogoke U, Nduka E. Automation of balanced nested design; NeDPy. Int J Stat Appl 2020; 10 (01) 17-23
  • 35 Cohen GR, Friedman CP, Ryan AM, Richardson CR, Adler-Milstein J. Variation in physicians' electronic health record documentation and potential patient harm from that variation. J Gen Intern Med 2019; 34 (11) 2355-2367
  • 36 Levy DR, Sloss EA, Chartash D. et al. Reflections on the documentation burden reduction AMIA plenary session through the lens of 25 × 5. Appl Clin Inform 2023; 14 (01) 11-15
  • 37 Hobensack M, Levy DR, Cato K. et al. 25 × 5 symposium to reduce documentation burden: report-out and call for action. Appl Clin Inform 2022; 13 (02) 439-446
  • 38 DeShazo JP, Hoffman MA. A comparison of a multistate inpatient EHR database to the HCUP Nationwide Inpatient Sample. BMC Health Serv Res 2015; 15: 384

Fig. 1 Factors potentially impacting a physician's EHR use time. EHR, electronic health record; HIPAA, Health Insurance Portability and Accountability Act.
Fig. 2 Physicians are nested within practices, and practices are nested within health systems.
Fig. 3 Distribution of practice size in each health system.