CC BY-NC-ND 4.0 · Appl Clin Inform 2023; 14(05): 944-950
DOI: 10.1055/a-2187-3243
State of the Art/Best Practice Paper

Identifying and Addressing Barriers to Implementing Core Electronic Health Record Use Metrics for Ambulatory Care: Virtual Consensus Conference Proceedings

Deborah R. Levy
1   Department of Veterans Affairs, VA Connecticut Healthcare System, West Haven, Connecticut, United States
2   Section of Biomedical Informatics and Data Sciences, Yale University School of Medicine, New Haven, Connecticut, United States
,
Amanda J. Moy
3   Department of Biomedical Informatics, Columbia University, New York, New York, United States
,
Nate Apathy
4   National Center for Human Factors in Healthcare, MedStar Health Research Institute, Washington, District of Columbia, United States
5   Center for Biomedical Informatics, Regenstrief Institute, Indianapolis, Indiana, United States
,
Julia Adler-Milstein
6   Department of Medicine, Center for Clinical Informatics and Improvement Research, University of California, San Francisco, California, United States
,
Lisa Rotenstein
7   Division of General Internal Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
8   Harvard Medical School, Boston, Massachusetts, United States
,
Bidisha Nath
9   Department of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, United States
,
S. Trent Rosenbloom
10   Department of Biomedical Informatics, Vanderbilt University, Nashville, Tennessee, United States
,
Thomas Kannampallil
11   Department of Anesthesiology, Washington University School of Medicine, St. Louis, Missouri, United States
12   Institute for Informatics, Data Science, and Biostatistics (I2DB), Washington University School of Medicine, St. Louis, Missouri, United States
,
Rebecca G. Mishuris
7   Division of General Internal Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
8   Harvard Medical School, Boston, Massachusetts, United States
13   Digital, Mass General Brigham, Boston, Massachusetts, United States
,
Aram Alexanian
14   Novant Health, Charlotte, North Carolina, United States
,
Amber Sieja
15   Department of General Internal Medicine, University of Colorado School of Medicine, Aurora, Colorado, United States
,
Michelle R. Hribar
16   Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, United States
,
Jigar S. Patel
17   Oracle Corporation, Kansas City, Missouri, United States
,
Christine A. Sinsky
18   American Medical Association, Chicago, Illinois, United States
,
Edward R. Melnick
2   Section of Biomedical Informatics and Data Sciences, Yale University School of Medicine, New Haven, Connecticut, United States
9   Department of Emergency Medicine, Yale University School of Medicine, New Haven, Connecticut, United States
Funding This work was supported by the American Medical Association Practice Transformation Initiatives (contract number 19449). D.R.L. is supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Academic Affiliations, Office of Research and Development, with resources and the use of facilities at the VA Connecticut Healthcare System, West Haven, Connecticut (CIN-13-407). E.R.M. reports receiving grants from the National Institute on Drug Abuse and the Agency for Healthcare Research and Quality unrelated to this work.
 

Abstract

Background Precise, reliable, valid metrics that are cost-effective and require reasonable implementation time and effort are needed to drive electronic health record (EHR) improvements and decrease EHR burden. Differences exist between research and vendor definitions of metrics.

Process We convened three stakeholder groups (health system informatics leaders, EHR vendor representatives, and researchers) in a virtual workshop series to achieve consensus on barriers, solutions, and next steps to implementing the core EHR use metrics in ambulatory care.

Conclusion Actionable solutions identified to address core categories of EHR metric implementation challenges include: (1) maintaining broad stakeholder engagement, (2) reaching agreement on standardized measure definitions across vendors, (3) integrating clinician perspectives, and (4) addressing cognitive and EHR burden. Building upon the momentum of this workshop's outputs offers promise for overcoming barriers to implementing EHR use metrics.



Background and Significance

In 2020, seven core electronic health record (EHR) use measures were proposed by a multidisciplinary stakeholder group to quantify ambulatory EHR use, evaluate the practice environment, and assess EHR burden.[1] Since then, more than 100 peer-reviewed publications have addressed EHR burden, garnering increased attention across stakeholder groups.[2] [3] Although the core metrics were intended to quantify practice efficiency, teamwork, and other contributors to professional well-being, and to promote cross-study comparisons,[1] their wide-scale implementation, in both research settings and routine operations assessment, has faced challenges across practice settings.[4]

Precise, reliable, valid metrics that are cost-effective and require reasonable implementation time and effort are needed to drive EHR improvements.[5] Differences exist between investigator-defined and vendor-defined metrics.[2] Although current vendor-derived metrics seek to offer actionable benchmarks on EHR use, they face validity and reliability concerns due to limited transparency, availability, accessibility, and standardization.[6] For example, attempts to conceptualize time spent in the EHR outside of a clinician's scheduled hours, one construct that has been linked to clinician burnout,[7] have encountered limitations: such measures may not generalize across vendors and may not effectively separate time spent on direct patient care from time strictly dedicated to the EHR.[8] Scientifically sound evaluation will require stakeholder consensus on the optimal approaches to harnessing these collective resources so that metrics are meaningful and useful to all stakeholders.

Therefore, we convened three stakeholder groups (health system informatics and operational leaders, EHR vendor representatives and audit log experts from three major vendors, and researchers) in a virtual workshop series to achieve consensus on barriers and solutions to implementing the core EHR use metrics in ambulatory care. To our knowledge, this was the first workshop to convene a nationally representative group from these three stakeholder groups to address implementation barriers to EHR use measurement.



Process

We organized two 2-hour workshop sessions (November 2022 and January 2023) in a collaborative and interactive virtual workspace to develop consensus on barriers (Session 1) and on solutions and next steps (Session 2) to overcome implementation challenges of core EHR use metrics ([Fig. 1]). The workshop planning steering committee (subsequently, "the committee") included experts in audit-log implementation and research. The committee designed and ran the workshop series, drawing on its own implementation experience with the original seven proposed metrics[1] to identify the four categories of EHR metric implementation barriers that guided the series. Each session followed a modified Delphi process.[9] [10] Attendees from the three stakeholder groups were divided into four workgroups based on the major implementation challenge categories: (1) defining and interpreting schedules, (2) validity of EHR use, (3) inbox management, and (4) undivided attention. We performed purposive recruitment, drawing from the professional networks of the committee, to assemble a diverse and representative group reflecting the varied perspectives of three stakeholder groups: health care system informatics leaders, EHR developers leading EHR use measurement, and EHR use researchers. We initially recruited by targeted emails in the categories with known domain experts (EHR vendors and health system operational leaders) and expanded with selective snowball sampling when an identified content expert was unavailable.

Fig. 1 Workshop consensus process map. Steps in the design and analysis of the workshop series, including four working groups. EHR, electronic health record.

Prior to each session, the committee assembled preliminary questions for each workgroup. After each session, we reviewed meeting outputs, summarized findings, and prepared next steps. A professional facilitator with domain experience helped plan and lead each session.[11] Content experts in each of the four categories led the respective working groups.[12]



Proceedings by Working Group

Defining and Interpreting Schedules

To establish standardized definitions for defining and interpreting schedules, the workgroup proposed dividing EHR-related activity into four categories[13]: (1) EHR visit work (face-to-face patient care), (2) visit work in the patient's presence that is not captured by the EHR but could potentially be measured using other systems, (3) between-visit EHR work (indirect care such as inbox management, laboratory result review, or interdisciplinary case review), and (4) nonclinical EHR work (e.g., academic and administrative; [Table 1]).
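To make the categorization concrete, the sketch below shows one way these four categories might be derived from audit-log events joined to a clinician's visit schedule. All field names (timestamp, activity, patient_id) and the set of nonclinical activity codes are illustrative assumptions, not a vendor schema; category 2 by definition cannot be computed from EHR logs alone.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical activity codes treated as nonclinical; real audit logs
# and schedules vary by vendor, so these names are illustrative only.
NONCLINICAL_ACTIVITIES = {"CME_MODULE", "ADMIN_REPORT", "TEACHING_NOTE"}

@dataclass
class ScheduledVisit:
    start: datetime
    end: datetime

@dataclass
class AuditEvent:
    timestamp: datetime
    activity: str           # e.g., "NOTE_EDIT", "INBOX_READ"
    patient_id: str | None  # None for non-patient work

def categorize(event: AuditEvent, visits: list[ScheduledVisit]) -> str:
    """Map one audit-log event to one of the workgroup's four categories.

    Category 2 (visit work not visible to the EHR) cannot be derived
    from audit logs at all; it would come from other systems.
    """
    if event.activity in NONCLINICAL_ACTIVITIES or event.patient_id is None:
        return "nonclinical_ehr_work"        # category 4
    if any(v.start <= event.timestamp <= v.end for v in visits):
        return "ehr_visit_work"              # category 1
    return "between_visit_ehr_work"          # category 3
```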

Table 1

Barriers, solutions, and next steps for each of the four electronic health record (EHR) metrics consensus process working groups

Key themes across each of the four working groups

Working group: Defining and interpreting schedules

Barriers:
• Need standard definitions: work time, outside work, workday, and full-time equivalent
• Lack of integration of scheduling platforms
• Capturing nonclinical work in the EHR
• Attributing credit for EHR work across multiple users

Solutions:
• Align with third-party scheduling technology
• Define four categories: visit work, visit work not seen by the EHR, between-visit work, and nonclinical work
• Remove or flag nonclinical work in logs
• Track teamwork

Next steps:
• Reach agreement on visit-based vs. time-based vs. task-based measurement
• Link work outside the EHR to the visit
• Define clinical and nonclinical time
• Standardize how to apportion interdisciplinary work

Working group: Validity of EHR use

Barriers:
• What is "active time"? No existing standards for EHR use time measures
• Readily available EHR use measures lack context (clinical context, process context, individual preferences, individual development, physician variability of practice)
• No central repository of validated metrics

Solutions:
• Apply quality measure development frameworks to EHR metrics
• Incorporate multilevel data (e.g., organization, clinic setting, individual user) to add context to EHR use measures
• Develop a library of generalizable EHR use measures for clinical tasks or activities

Next steps:
• Elicit from clinicians the dimensions of EHR time that are burdensome for the individual (e.g., via a "burden scenarios" survey)
• Build linkages with contextual data into EHR use time database construction
• Identify high-priority clinical actions or tasks (to develop standardized measurements)

Working group: Inbox management

Barriers:
• Message content and appropriateness (what does "good" look like?)
• Message work complexity
• Switching: screen, task, person
• Lack of team coordination of message handling
• Need to define "what is inbox work"

Solutions:
• Classify workflow types and message categories, including filtering of messages by team
• Consider cognitive burden, including by message type
• Create workflows for teams to enable clinicians to reduce administrative burden
• Employ technology such as algorithms to support message handling

Next steps:
• Develop categories of messages (e.g., those handled with one click, messages resolved without needing a physician)
• Calculate the number of messages resolved "within one screen"
• Define metrics that capture teamwork

Working group: Undivided attention

Barriers:
• Defining undivided attention (attention to patient vs. attention to task)
• The multitasking myth: mental models of attention
• Capturing interruptions
• Privacy constraints when using technology to capture events outside the EHR

Solutions:
• Identify markers of cognitive interruptions or distractions
• Consider ways to capture work done outside the EHR
• Define and explore task switching (intratask surfing; chart switching; interrupted/abandoned tasks, e.g., orders)

Next steps:
• Start by capturing the frequency of events: chart switching and returning; switching screens during a task
• Consider latency of response (e.g., minutes vs. seconds vs. fractions of a second)

Abbreviation: EHR, electronic health record.

Note: For each of the four workshop working groups, the table includes barriers, solutions, and action items that could inform future work, as outlined by workshop stakeholders.


Barriers

We identified variation in how work is defined, both during and after scheduled clinical time, as a major barrier. For example, EHR activities associated with administrative or teaching responsibilities may be indistinguishable from those related to direct patient care. Identifying clinical schedules may be challenging when schedules are maintained in third-party platforms outside the EHR. Attributing credit for EHR work performed across multiple users on a clinical team was an additional barrier noted by this and other working groups.



Solutions

Reconciling scheduling information captured outside the EHR to facilitate removal of nonclinical work time is critical to measuring EHR use related to direct clinical care. Consistent definitions across EHR work categories will be necessary to distinguish clinical and nonclinical EHR activities. Specialty- and context-specific full-time equivalent and clinical workday definitions would support better comparisons of EHR work across clinical contexts. Tracking teamwork was proposed as a potential solution to measure work performed among interdisciplinary team members.



Next Steps

Four solution-generating steps were identified: (1) engage key stakeholders (including schedulers and operational stakeholders) to reach agreement on standard clinical schedule definitions, such as differentiating visit-based, time-based, and task-based measurement; (2) link, or make efforts to capture, visit-related work performed outside the EHR, especially between visits, to better estimate actual clinical workload; (3) establish consistent definitions of clinical and nonclinical time; and (4) standardize how to apportion time by care team member and role, to better capture teamwork and assign credit for work.



Validity of Electronic Health Record “Active Time”

Knowing when a clinician is actively using the EHR is a key underpinning of EHR use measurement. One overarching consideration is the lack of standardization across EHR vendors in how system "active time" is defined, which can limit comparison of measures across organizations using different vendor systems ([Table 1]).[14] [15]
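To illustrate why "active time" resists standardization, the minimal sketch below computes active time from audit-log timestamps under an explicit idle-timeout parameter. The 90-second default is an arbitrary assumption for illustration; each vendor's choice of such a threshold (and of what counts as an event) is precisely the kind of design decision that differs across systems.

```python
from datetime import datetime, timedelta

def active_time(timestamps: list[datetime],
                idle_timeout: timedelta = timedelta(seconds=90)) -> timedelta:
    """Sum inter-event gaps shorter than idle_timeout as 'active' EHR time.

    The idle_timeout is exactly the unstandardized parameter discussed
    above: changing it changes the metric, which is one reason measures
    computed by different vendors are not directly comparable.
    """
    events = sorted(timestamps)
    total = timedelta()
    for prev, curr in zip(events, events[1:]):
        gap = curr - prev
        if gap < idle_timeout:
            total += gap
    # Example behavior: with a 90-second timeout, a 10-minute gap
    # contributes nothing, while a 30-second gap counts in full.
    return total
```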

Barriers

We identified several barriers to improving the validity of "active time" metrics within the EHR, chiefly related to contextualizing active use. Readily available metrics could be improved with additional information about: (1) clinical context (e.g., ambulatory vs. inpatient); (2) process context (e.g., rooming vs. active consultation with patients); (3) individual preferences (e.g., a user's preference to document at the end of, versus throughout, the clinic day); (4) individual development (e.g., learning curves for new users, workflows, or clinical processes); and (5) physician variability in practice (e.g., comprehensive chart review vs. review of only prior visit notes). Lastly, there is demand for a central repository of validated metrics.



Solutions

Participants noted the opportunity to apply quality measure development methodology to EHR metric development as a scientific approach to enhance transparency, provide concrete guidance for establishing validity, and support reuse of metrics across organizations. Additionally, improved contextualization of EHR use measures could be achieved by incorporating multilevel data related to the organization, clinical setting, and individual user, which would support better normative interpretation of EHR use metrics specific to user preferences and variation in practice patterns. Finally, developing a library of generalizable EHR use metrics for clinical tasks or activities was proposed as a key opportunity to address the variability of existing measures and better align research across teams.
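One lightweight way to picture the proposed measure library is as a registry of metric definitions that carries the multilevel context fields discussed above. The schema below is a speculative sketch with assumed field names, not a proposed standard; the sample entry paraphrases the kind of standardized wording the workgroup envisioned.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EHRUseMetric:
    """A sketch of one entry in a shared library of EHR use measures."""
    name: str           # short identifier, e.g., "active_ehr_time"
    definition: str     # human-readable operational definition
    numerator: str      # event-selection logic, described in prose
    denominator: str    # normalization (e.g., per scheduled clinical hour)
    validated_in: list[str] = field(default_factory=list)  # citations
    context_levels: list[str] = field(
        default_factory=lambda: ["organization", "clinic", "user"]
    )

# Illustrative registry entry only.
REGISTRY = {
    "active_ehr_time": EHRUseMetric(
        name="active_ehr_time",
        definition="EHR time with inter-event gaps below an agreed timeout",
        numerator="sum of inter-event gaps shorter than the timeout",
        denominator="per scheduled clinical hour",
    )
}
```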



Next Steps

Three solution-generating steps were identified: (1) identify high-priority clinical actions or tasks as a starting point for standardized measure development using a quality measure development framework; (2) survey individual clinicians to elicit the dimensions of EHR time they find particularly burdensome, via a tool assessing standardized "burden scenarios"; and (3) build linkages to contextual data into existing EHR usage data.



Inbox Management

EHR inbox tasks have consumed an increasing amount of clinician time, including messages from patients (e.g., patient portal messages), internal messages (e.g., within-care-team messages), and responses to laboratory result or medication refill inquiries, with a compounding effect observed since the coronavirus disease 2019 (COVID-19) pandemic. While the inbox is a defined EHR component, tasks that begin or end in the inbox often require work away from the inbox screen to resolve, which is not currently captured as part of existing inbox time metrics ([Table 1]).

Barriers

We identified four key barriers related to measurement of inbox-related EHR work: (1) agreement on different inbox components, their prioritization, and translation into measures; (2) inaccuracy of current inbox metrics due to failure to capture relationships between inbox work and other types of clinician work (both EHR and non-EHR based); (3) inconsistent inbox metrics across vendors and EHR products; (4) lack of accurate capture of team-based support for inbox work.



Solutions

The foundational opportunity proposed was to define a typology for inbox work, including the different message categories and adjacent tasks (e.g., connecting other EHR tasks that resulted from an inbox message). For example, a patient message about imaging results might require review of the imaging, its results, consultant documentation, and pertinent laboratory results on separate screens, all prior to responding to the patient's message. This could be extended to also measure: (1) anticipated cognitive burden associated with inbox work and (2) proportion of inbox messages that can be resolved without leaving the inbox.



Next Steps

Two early use-case examples were offered as near-term goals: messages handled entirely within the inbox and messages resolved without necessitating physician involvement. Next steps identified to begin solution development were to: (1) develop message categories by degree of work (e.g., those handled in one click or resolved without needing a clinician) and by content or complexity (which may involve communication and action within the care team and/or with the patient or caregiver) and (2) develop new measures scaled across vendors, building upon complexity ranking, such as the percentage of messages resolved within one screen.
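As a rough sketch of how the "within one screen" measure might be computed, the code below assumes that message "episodes" (the ordered actions between opening and resolving a message) have already been constructed from audit logs; constructing those episodes, for example by session and timestamp, is itself one of the open standardization problems noted above. Screen names and fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    message_id: str
    screen: str  # e.g., "INBOX", "CHART_REVIEW", "ORDERS" (illustrative)

def percent_resolved_in_one_screen(episodes: dict[str, list[Action]]) -> float:
    """Percentage of message episodes whose every action stayed on the
    inbox screen (a proxy for 'handled entirely within the inbox').

    `episodes` maps a message id to the ordered actions taken between
    opening and resolving that message.
    """
    if not episodes:
        return 0.0
    one_screen = sum(
        all(a.screen == "INBOX" for a in actions)
        for actions in episodes.values()
    )
    return 100.0 * one_screen / len(episodes)
```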



Undivided Attention

Distractions from intrusive alerts, messages, or unrelated tasks can interfere with a clinician's ability to provide safe and efficient care.[16] Yet the clinical environment is often a cacophony of noise and interruptions, in which the physician attempts to listen to and care for the patient while simultaneously reviewing and entering data in the EHR.[17] The lack of process coupling (i.e., display fragmentation, where necessary information is scattered and buried across multiple screens rather than concisely presented),[18] combined with frequent distractions and interruptions, results in cognitive overload and a hazardous care environment.[17] A measure quantifying the outer envelope of time available for undivided attention already exists,[17] but more granular measures are also needed ([Table 1]).

Barriers

Standardized measures of undivided attention do not currently exist and are not included in off-the-shelf vendor metrics. Many features of the environment that impact undivided attention, such as team composition (i.e., team size, skill level, and stability), ambient noise, and cognitive overload are not routinely measured. Some promising technologies, including eye tracking, raise privacy concerns when implemented in routine clinical care.



Solutions

Solutions in this emerging domain focus on defining the tasks and measures that capture lapses of attention or focus. Markers and a common taxonomy of cognitive load and interruptions are needed. Proxies for inattention, or indirect markers of divided attention, could be measured, such as the latency of instant secure messaging responses. For example, if the average latency of a message response were as short as 30 seconds, one might infer that physicians are commonly interrupted mid-thought or mid-task to respond quickly.



Next Steps

We identified three initial measures to approximate undivided attention: (1) frequency of screen switching during a task, (2) frequency of switching from and returning to tasks, and (3) latency of response to interruptions, such as instant messaging.
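A minimal sketch of how these three proxies might be computed from an ordered audit-log stream is shown below; the event fields are assumptions, and the pairing of messages with their replies is presumed to happen upstream.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    timestamp: datetime
    screen: str            # active screen at event time (illustrative)
    chart_id: str | None   # open patient chart, if any

def screen_switches(events: list[Event]) -> int:
    """Measure 1: how often the active screen changes during a task."""
    return sum(a.screen != b.screen for a, b in zip(events, events[1:]))

def chart_switch_and_returns(events: list[Event]) -> int:
    """Measure 2: count returns to a previously opened chart after
    switching away (switch-and-return as a task-interruption proxy)."""
    seen, returns, current = set(), 0, None
    for e in events:
        if e.chart_id and e.chart_id != current:
            if e.chart_id in seen:
                returns += 1  # came back to an earlier chart
            seen.add(e.chart_id)
            current = e.chart_id
    return returns

def response_latencies(msg_times: list[datetime],
                       reply_times: list[datetime]) -> list[float]:
    """Measure 3: seconds from each message arrival to its reply
    (message-to-reply pairing is assumed to be done upstream)."""
    return [(r - m).total_seconds() for m, r in zip(msg_times, reply_times)]
```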



Discussion

Through a virtual workshop series with health system informatics leaders, EHR vendor representatives, and researchers, we achieved consensus on potential design solutions to consider when implementing four core categories of EHR use metrics in ambulatory care ([Table 1]). These solutions may be useful for health system leaders, researchers, and EHR vendors to consider when designing EHR metrics. We identified common themes across groups, including an urgent need for standardized definitions of EHR use measure elements, which aligns with the existing literature.[5] [6] For example, participants prioritized defining what a clinical day, shift, or clinical role includes; operationalizing metrics to evaluate active EHR time; determining the scope and complexity of inbox management tasks; and establishing units for time latency, task, and attention, including an ontology of terms to standardize future work. Understanding the interactions between team members, and the interdisciplinary aspects of care, which are not fully captured in current metrics, was also offered as an opportunity for action when designing and implementing EHR metrics.[16] As interdisciplinary care increases, clarifying the contributions of different clinical team members (and patients) was recognized as a key next step.

Another common theme was capturing clinician effort both within and between EHR tasks, as well as capturing tasks, activities, and effort that are not currently quantifiable via EHR use metrics. Concerns were raised about privacy, about the feasibility of implementation and use outside of research settings, and about interpreting activities captured outside the EHR without context; nevertheless, the need to account for this currently uncaptured work was universal. EHR metric design and implementation solutions were anticipated to be complex, involving a wide range of multidisciplinary stakeholders, including academic researchers, members of the vendor community, and those at the forefront of health system implementation. This workshop series opened and fostered a dialogue among researchers, designers of EHR metrics from the vendor community, and those using and implementing the metrics in health systems, examining how barriers align or differ across the United States. To overcome the implementation barriers identified in this workshop series: (1) policymakers should consider mandating data definitions and standards in EHR use measurement to allow reliable measurement across groups and at scale, and (2) organizational leaders and EHR vendor representatives should continue to engage with the research community to ensure measures perform as intended. Indeed, regular meetings of the assembled workgroup or similar stakeholders would catalyze and accelerate future EHR use measurement efforts. Including only participants from the United States limits the generalizability of the findings to international EHRs and clinical settings.

Guiding principles included optimizing the clinician EHR experience, recognizing that no single outcome of interest measures EHR use, and developing common measures and perspectives that capture the range of practice styles and the unique roles, responsibilities, workflows, and documentation practices of clinicians. Next steps are highlighted in [Table 1], with continued collaboration across stakeholder groups identified as a critical element. Once implemented, EHR use metrics could be applied to evaluate interventions seeking to mitigate EHR burden, such as team-based interventions or specific changes to the practice environment.



Conclusion

We developed actionable solutions to address each of four categories of EHR metric implementation challenges during a two-session virtual workshop series. Common themes across domains included: (1) maintaining broad stakeholder engagement, (2) reaching agreement on standardized measure definitions across vendors, (3) integrating clinician perspectives, and (4) addressing cognitive and EHR burden. Building upon the momentum of this workshop's outputs offers promise for overcoming barriers to implementing core EHR use metrics.



Clinical Relevance Statement

EHR burden is challenging to measure, and EHR audit log data have been used to create metrics that measure clinician activities. Identifying solutions to implementation barriers will benefit multiple stakeholders, bridging researchers, health system implementation leaders, and EHR vendor representatives. Stakeholders participating in the workshop series agreed on the need to reach agreement on standardized measure definitions across vendors, integrate clinician perspectives, and address cognitive and EHR burden.



Multiple-Choice Questions

  1. EHR metrics use log data collected from clinician actions and tasks. Regarding the design and implementation of metrics from log data:

    a. The design is the same for all EHR vendors and products

    b. EHR metrics can be easily calculated from EHR log data

    c. There is a notable lack of standardization of metrics using log data

    d. The process is similar from research and EHR vendor perspectives

    Correct Answer: The correct answer is option c. There is a notable lack of standardization in how metrics are developed or designed from the underlying log data, a point on which workshop stakeholders from the EHR vendor, research, and health system implementation communities agreed. Because each vendor develops its own metrics based on its own formulas, it is challenging to compare the same metric concept across vendor products. Workshop attendees agreed on the need to standardize both the definitions underlying each metric (e.g., how a clinical shift is defined when the schedule is not contained within the EHR, or how active time in the EHR is determined) and the metric's design from the audit log data.

  2. Definitions to standardize how metrics are designed are needed for:

    a. Validity of EHR use

    b. Determining a clinical shift or block in clinic

    c. Measuring changes in attention during EHR work

    d. Inbox message complexity and handling

    e. All of the above

    Correct Answer: The correct answer is option e. Terms such as "active EHR time" or "clinical shift" are not standardized. Researchers working with audit logs may calculate metrics differently than the vendor community, which can lead to confusion when the metrics are compared and contribute to a lack of generalizability.



Conflict of Interest

C.A.S. is employed by the American Medical Association. The opinions expressed in this article are those of the authors and should not be interpreted as American Medical Association policy. J.S.P. reports employment by Oracle Corporation. The remaining authors have no conflicts of interest related to this work.

Acknowledgments

We acknowledge the contributions of our workshop series stakeholder participants.

Note

The contents of this manuscript represent the view of the authors and do not necessarily reflect the position or policy of the U.S. Department of Veterans Affairs or the United States Government.


Protection of Human and Animal Subjects

No human subjects were involved in this work.


  • References

  • 1 Sinsky CA, Rule A, Cohen G. et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (04) 639-643
  • 2 Rule A, Melnick ER, Apathy NC. Using event logs to observe interactions with electronic health records: an updated scoping review shows increasing use of vendor-derived measures. J Am Med Inform Assoc 2022; 30 (01) 144-154
  • 3 Levy DR, Sloss EASA, Chartash D. et al. Reflections on the Documentation Burden Reduction AMIA Plenary Session through the Lens of 25 × 5. Appl Clin Inform 2023; 14 (01) 11-15
  • 4 Melnick ER, Ong SY, Fong A. et al. Characterizing physician EHR use with vendor derived data: a feasibility study and cross-sectional analysis. J Am Med Inform Assoc 2021; 28 (07) 1383-1392
  • 5 Melnick ER, Sinsky CA, Krumholz HM. Implementing measurement science for electronic health record use. JAMA 2021; 325 (21) 2149-2150
  • 6 Kannampallil T, Adler-Milstein J. Using electronic health record audit log data for research: insights from early efforts. J Am Med Inform Assoc 2022; 30 (01) 167-171
  • 7 Tran B, Lenhart A, Ross R, Dorr DA. Burnout and EHR use among academic primary care physicians with varied clinical workloads. AMIA Jt Summits Transl Sci Proc 2019; 2019: 136-144
  • 8 Arndt BG, Micek MA, Rule A, Shafer CM, Baltus JJ, Sinsky CA. Refining vendor-defined measures to accurately quantify EHR workload outside time scheduled with patients. Ann Fam Med 2023; 21 (03) 264-268
  • 9 Melnick ER, Nielson JA, Finnell JT. et al. Delphi consensus on the feasibility of translating the ACEP clinical policies into computerized clinical decision support. Ann Emerg Med 2010; 56 (04) 317-320
  • 10 Nasa P, Jain R, Juneja D. Delphi methodology in healthcare research: how to decide its appropriateness. World J Methodol 2021; 11 (04) 116-129
  • 11 Rossetti SC, Rosenbloom S, Levy DR. et al. 25 × 5 symposium drives ongoing efforts to reduce documentation burden on U.S. clinicians: final summary report 2021. Accessed December 5, 2021 at: https://www.dbmi.columbia.edu/wp-content/uploads/2021/12/25×5-Summary-Report.pdf or https://brand.amia.org/m/dbde97860f393e1/original/25×5-Summary-Report.pdf
  • 12 Hobensack M, Levy DR, Cato K. et al. 25 × 5 Symposium to reduce documentation burden: report-out and call for action. Appl Clin Inform 2022; 13 (02) 439-446
  • 13 Baxter SL, Apathy NC, Cross DA, Sinsky C, Hribar MR. Measures of electronic health record use in outpatient settings across vendors. J Am Med Inform Assoc 2021; 28 (05) 955-959
  • 14 Overhage JM, McCallie Jr D. Physician time spent using the electronic health record during outpatient encounters: a descriptive study. Ann Intern Med 2020; 172 (03) 169-174
  • 15 Holmgren AJ, Downing NL, Bates DW. et al. Assessment of electronic health record use between US and non-US health systems. JAMA Intern Med 2021; 181 (02) 251-259
  • 16 Moy AJ, Aaron L, Cato KD. et al. Characterizing multitasking and workflow fragmentation in electronic health records among emergency department clinicians: using time-motion data to understand documentation burden. Appl Clin Inform 2021; 12 (05) 1002-1013
  • 17 Chen Y, Adler-Milstein J, Sinsky CA. Measuring and maximizing undivided attention in the context of electronic health records. Appl Clin Inform 2022; 13 (04) 774-777
  • 18 Senathirajah Y, Kaufman DR, Cato KD, Borycki EM, Fawcett JA, Kushniruk AW. Characterizing and visualizing display and task fragmentation in the electronic health record: mixed methods design. JMIR Human Factors 2020; 7 (04) e18484

Address for correspondence

Deborah R. Levy, MD, MPH
PRIME Center, VA Connecticut Healthcare System
950 Campbell Avenue, West Haven, CT 06516
United States   

Publication History

Received: 19 July 2023

Accepted: 30 September 2023

Accepted Manuscript online:
06 October 2023

Article published online:
29 November 2023

© 2023. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
