DOI: 10.1055/s-0041-1731679
Using Log Data to Measure Provider EHR Activity at a Cancer Center during Rapid Telemedicine Deployment
Abstract
Objectives Accurate metrics of provider activity within the electronic health record (EHR) are critical to understanding workflow efficiency and targeting optimization initiatives. We applied newly described, log-based core metrics at a tertiary cancer center during the rapid escalation of telemedicine driven by the onset of social distancing restrictions at our medical center during the initial coronavirus disease 2019 (COVID-19) peak (the "COVID-19 peak"). These metrics evaluate total EHR time, work outside of work, time on documentation, time on prescriptions, inbox time, teamwork for orders, and the undivided attention patients receive during an encounter. Our study aims were to evaluate the feasibility of implementing these metrics as an efficient tool to optimize provider workflow and to track the impact on workflow for various provider groups, including physicians, advanced practice providers (APPs), and different medical divisions, during times of significant policy change in the treatment landscape.
Methods Data compilation and analysis were performed retrospectively in Tableau, using user and schedule data obtained from Cerner Millennium PowerChart and our internal scheduling software. We analyzed three distinct time periods: the 3 months prior to the initial COVID-19 peak, the 3 months during the peak, and the 3 months immediately post-peak.
Results Application of early COVID-19 restrictions led to a significant increase in telemedicine encounters, from a baseline of <1% to 29.2% of all patient encounters. During the initial peak period, there was a significant increase in total EHR time, work outside of work, time on documentation, and inbox time for providers. Overall, APPs spent significantly more time in the EHR compared with physicians. All of the metrics returned to near baseline after the initial COVID-19 peak in our area.
Conclusion Our analysis showed that implementation of these core metrics is feasible and can provide an accurate representation of provider EHR workflow adjustments during periods of change, while providing a basis for cross-vendor and cross-institutional analysis.
Background and Significance
When evaluating provider efficiency within the electronic health record (EHR), clinical informaticists face numerous barriers. A significant and well-documented barrier stems from the inherent diversity in medical practice patterns throughout the United States.[1] [2] [3] With practices ranging from small rural clinics to large multiregional health centers, it is often difficult to identify a true peer practice to use as a benchmark and to set goals for optimization, even within the same organization.[4] [5] Compounding this is the complexity of the hundreds of Office of the National Coordinator for Health Information Technology (ONC)-certified health information technology developers, whose diverse EHR products usually offer only vendor-specific metrics for informaticists to analyze and use in optimization efforts.[6] [7] This can be particularly true in larger subspecialty health system settings and in areas with limited peer groups within the vendor-supplied metrics.[8] Several efforts have been undertaken in recent years to standardize metrics so that they are vendor-neutral, with broad-sweeping implications for research data.[9] [10]
The H. Lee Moffitt Cancer Center and Research Institute is an NCI-Designated Cancer Center whose clinical services include one primary campus, providing both ambulatory and inpatient services, as well as two satellite locations providing ambulatory, infusion, and surgical oncology services. The Clinical Informatics Department provides operational clinical informatics support to all clinical sites and helps manage the operational informatics needs of over 550 physicians and advanced practice providers (APPs). The primary EHR vendor is Cerner Millennium PowerChart, and end-user workflow optimization has relied on analytics provided by the vendor through its proprietary platforms. These vendor-based analytics are useful for internal reference and for comparison of pre- and postimplementation workflow data, but they are restricted to the time analyses set by the vendor and often do not match the workflow intervals that the organization is interested in tracking or optimizing. At best, many of the available metrics serve as proxies for what stakeholders really want to measure. In an effort to utilize more targeted, practice-based, and standardized analytics, we created an analytics tool applying the seven core metrics proposed by Sinsky et al, which were developed by a collaboration of researchers and experts in working with EHR log data.[9] These metrics are outlined in [Table 1] as described in the original publication and seek to give a true picture of time spent in the EHR based on analyzed log data. The collaboration proposed these metrics to ultimately improve the patient experience by achieving insight into the practice environment, the effectiveness of teams, and the influence of policies and regulations on physician workflows.[9]
| Measure | Abbreviation | Definition and example |
| --- | --- | --- |
| Total EHR time | EHR-Time8 | Total time on EHR (during and outside of clinic sessions) per 8 hours of scheduled patient time |
| Work outside of work | WOW8 | Time on EHR outside of scheduled patient hours per 8 hours of scheduled patient time |
| Time on encounter note documentation | Note-Time8 | Hours on documentation (note writing) per 8 hours of scheduled patient time |
| Time on prescriptions | Script-Time8 | Total time on prescriptions per 8 hours of scheduled patient time |
| Time on inbox | IB-Time8 | Total time on inbox per 8 hours of scheduled patient time |
| Teamwork for orders | TWORD | The percentage of orders with team contribution |
| Undivided attention | ATTN | The amount of undivided attention patients receive from their physician, approximated by [(total time per session) minus (EHR time per session)] / (total time per session) |
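To make the scoring arithmetic concrete, the short sketch below (our illustration, not the authors' or the vendor's implementation; all names are hypothetical) shows the per-8-hour normalization shared by the first five metrics and the ATTN approximation from the table.

```python
# Illustrative sketch of the score arithmetic defined in Table 1; function and
# variable names are hypothetical, not part of the original analysis.

def per_8_hours(activity_hours: float, scheduled_patient_hours: float) -> float:
    """Normalize raw activity time to hours per 8 hours of scheduled patient time."""
    return activity_hours / scheduled_patient_hours * 8

def attn_score(session_hours: float, ehr_hours_in_session: float) -> float:
    """Approximate undivided attention: the fraction of session time not spent in the EHR."""
    return (session_hours - ehr_hours_in_session) / session_hours

# Example: 45 hours of total EHR time against 36 scheduled patient hours
print(per_8_hours(45.0, 36.0))  # 10.0 -- the same scale as the EHR-Time8 scores reported below
print(attn_score(0.5, 0.15))    # 0.7 -- 70% of a 30-minute session free of EHR use
```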
While our initial study aim was to validate these novel core measures as an efficient tool for our clinical informaticists to optimize provider workflow, our study evolved to first assess the feasibility of implementing these metrics, with a secondary goal of utilizing them to track the impact on workflow during times of significant policy change in the treatment landscape. One of the most significant drivers of workflow policy change since the inception of the EHR has been the coronavirus disease 2019 (COVID-19) pandemic.[11] [12] [13] Practices and regulatory bodies have required rapid change to allow for escalation in telemedicine to meet the needs of social distancing and protecting at-risk populations.[14] [15] Hospital systems around the world were charged with rapidly adapting to these challenges to help control the spread of COVID-19 by limiting unnecessary in-person patient encounters, thus shifting from traditional patient care workflows to this novel format.[16] [17] Within this aim, we ensured that the data could be sorted and analyzed to determine whether the metrics highlighted differences in impact across provider groups, including physicians versus APPs, and across all divisions within our hospital system.
As a hospital system charged with caring for patients with cancer, Moffitt Cancer Center had already taken significant precautions regarding infectious diseases, as so many of our patients are immunocompromised. COVID-19 further elevated these concerns for our patient population due to the unknown, potentially significant complications of COVID-19 in cancer patients. To meet this challenge, our institution implemented a rapid escalation of existing telemedicine services to keep our patients safe while continuing to ensure that they received the timely healthcare they needed. Using our existing patient schedules, in-person visits were converted as needed to telemedicine video visits conducted via the Zoom platform, while maintaining identical appointment durations.[18] As with centers across the country, we saw a significant increase in these visits in a very short period of time.[19] [20] Within 4 weeks of implementation, we saw an increase in the volume of telemedicine encounters of over 5,000%.[21] While this was an extremely rapid change in our providers' workflows, it presented an opportunity to evaluate the impact of shifting to increased telemedicine utilization through the scope of the core standardized metrics proposed by Sinsky et al.[9]
Methods
Specifications for creating the seven metrics and scores were gathered from the Sinsky et al publication and were harmonized with data points from our EHR by our vendor-based data analyst. User and schedule data were obtained from Millennium PowerChart and Moffitt Cancer Center's internal scheduling software, respectively. Data compilation and analysis were designed to be performed by a single data analyst using Tableau (version 2020.3.1).[22] Work effort for the creation of analytic formulas, initial data analysis, and the ongoing maintenance model was tracked for total time investment and resource utilization. We defined the time point at which full implementation of significant social distancing restrictions occurred at our medical centers as the "COVID-19 peak." To fully evaluate the impact of these COVID-19-induced restrictions on our providers' EHR efficiency, we analyzed three distinct time periods: the 3 months prior to the COVID-19 peak period (December 2019–February 2020), the 3 months during peak COVID-19 impact (March 2020–May 2020), and the 3 months immediately post-peak, as providers recovered and adjusted (June 2020–August 2020). Our analysis was further delineated by a breakdown of physicians versus APPs, a comparison of all divisions within our hospital system, and telemedicine virtual visits versus in-person patient encounters. The analysis contained both ambulatory and inpatient encounters to capture all provider activity within the organization. Although many of our providers have both inpatient and outpatient care responsibilities, our institution implemented telemedicine only in the ambulatory setting. All measures are expressed as unitless scores, with a lower score indicating better efficiency for a provider. The exception is the undivided attention (ATTN) metric, which can be interpreted as the percentage of time that a provider gives the patient their undivided attention; a higher score therefore implies a better patient experience. In the original paper by Sinsky et al, the ATTN metric was noted to be aspirational because the difference between total session time and total EHR time may not accurately portray a provider's undivided attention during an encounter.
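As a rough illustration of the period bucketing and grouping described above, the pandas sketch below shows how such a pipeline could look; the actual analysis was built in Tableau, and the column names (activity_date, provider_type, ehr_hours, scheduled_hours) are our assumptions, not the real data model.

```python
# Hypothetical pandas sketch of the three-period, per-group aggregation; the
# real pipeline used Tableau, and all column names here are assumptions.
import pandas as pd

PERIODS = {
    "pre-peak": ("2019-12-01", "2020-02-29"),
    "peak": ("2020-03-01", "2020-05-31"),
    "post-peak": ("2020-06-01", "2020-08-31"),
}

def label_period(date):
    """Map an activity date to one of the three study periods (or None)."""
    for name, (start, end) in PERIODS.items():
        if pd.Timestamp(start) <= date <= pd.Timestamp(end):
            return name
    return None

def ehr_time8_by_group(log: pd.DataFrame) -> pd.DataFrame:
    """Aggregate a per-provider, per-day log into EHR-Time8 scores by period and provider type."""
    log = log.assign(period=pd.to_datetime(log["activity_date"]).map(label_period))
    totals = (
        log.dropna(subset=["period"])
           .groupby(["period", "provider_type"])[["ehr_hours", "scheduled_hours"]]
           .sum()
    )
    totals["EHR_Time8"] = totals["ehr_hours"] / totals["scheduled_hours"] * 8
    return totals
```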
Results
Overall work effort for the project included 120 hours of data analysis and stayed within the budgeted allocation of resources for the project. Initial creation of the formulas to analyze the data and produce the scores took >90% of the time effort, while adjustments to the final analysis process and development of an ongoing maintenance model were much more efficient. The analysis reflects significant workflow changes in our patient care patterns during the COVID-19 peak, with in-person appointments decreasing from nearly 100% to 70.8% of all encounters and telemedicine appointments increasing from <1% to 29.2% ([Fig. 1]). This shift was amplified in clinical divisions that could rapidly accommodate virtual visits in their workflow, or that, as part of the early implementation group, could rapidly "scale up" their utilization, such as Endocrine Tumor, Supportive Care, and Survivorship. Many divisions depended on continued in-person visits to provide the appropriate level of care. Following the peak, we continued to have an elevated proportion of telemedicine visits compared with the prior baseline, with a newly established baseline average of approximately 11%.
Results obtained for total EHR time (EHR-Time8), work outside of work (WOW8), time on documentation (Note-Time8), time on prescriptions (Script-Time8), inbox time (IB-Time8), teamwork for orders (TWORD), and the undivided attention (ATTN) patients receive during an encounter were analyzed for the three periods of pre-, during, and post-COVID-19 peak. Barriers were encountered while attempting to assess two measures, Script-Time8 and TWORD, due to limitations in our vendor-generated analytics. The scores were further delineated to compare physician versus APP workflow differences, as well as to compare various provider groups. The average scores for all analyzed measures for these groups are highlighted in [Table 2]. Upon analysis, we noted differences in overall provider efficiency during the peak of COVID-19-induced restrictions in our area. Significant increases from average baseline scores were observed in EHR-Time8 (8.03–10, p < 0.001), WOW8 (6.53–8.3, p < 0.001), Note-Time8 (3.27–3.8, p < 0.01), and IB-Time8 (0.3–0.6, p < 0.01). Most notably, providers had a 25% increase in overall time spent in the EHR and a 27% increase in time spent on work outside of work. The ATTN score was unaffected during the time periods analyzed. All of the metrics returned to baseline after the initial COVID-19 peak in our area; a summary of these shifts for all providers is found in [Fig. 2].
Abbreviations: APP, advanced practice provider; ATTN, undivided attention; BMT, bone marrow transplant; EHR, electronic health record; IB-Time8, inbox time; WOW, work outside of work.
While analyzing the overall comparison between physician and APP workflows, we detected significant differences in all five implemented core metrics between the two groups (p < 0.002), with APPs having higher scores in all areas except ATTN and the largest differences noted in EHR-Time8, WOW8, and Note-Time8 ([Fig. 3]). Further analysis revealed that these differences were consistent across the three analyzed time periods, and the deviations in scores noted during the peak COVID-19 period were observed equally in physicians and APPs throughout the organization.
Discussion
There are numerous ways to enhance the patient experience in health care, and one of the most significant is improving the efficiency with which clinicians, including providers, use the EHR. The analysis of EHR log data has been identified as an increasingly useful approach for understanding provider efficiency, allowing the field of clinical informatics to easily analyze these data and deliver optimization efforts to providers directly.[9] [10] Enhanced provider efficiency has been associated with both improved patient safety and decreased physician burnout.[3] [23] To accomplish these goals of enhanced patient safety and provider efficiency in a rapidly expanding certified EHR market, there is a critical need for standardized provider efficiency metrics that can be universally implemented across all EHRs and care sites. In our analysis, we were able to apply five of the seven core metrics described by Sinsky et al in relatively rapid fashion with minimal resources. In previous evaluations, vendor-supplied metrics were the only ones that were easily accessible, and these often compare data to anonymous baseline groups or use defined metrics outside the scope of practice at our facility. A key example was that a provider was considered to be working "after hours" based on a hard stop of work at 5:00 p.m., regardless of the provider's actual schedule. Being able to better classify provider work efficiency, as well as standardize metrics, allows for larger cross-vendor and cross-institutional studies using provider workflow analysis.
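To illustrate the difference between the fixed-cutoff and schedule-based definitions, a minimal sketch follows (ours; both rules and all names are illustrative, not vendor or author code):

```python
# Contrast between a fixed 5:00 p.m. cutoff and a schedule-based definition of
# after-hours EHR work; a hypothetical sketch, not the vendor's or authors' code.
from datetime import datetime, time

def after_hours_fixed(event: datetime, cutoff: time = time(17, 0)) -> bool:
    """Vendor-style rule: any EHR activity after a hard 5:00 p.m. stop is 'after hours'."""
    return event.time() >= cutoff

def after_hours_scheduled(event: datetime, session_start: datetime, session_end: datetime) -> bool:
    """Schedule-based rule (the WOW8 idea): activity outside the provider's scheduled patient hours."""
    return not (session_start <= event <= session_end)

# A 6:30 p.m. note edit by a provider scheduled until 7:00 p.m. is misclassified
# by the fixed rule but correctly counted as in-session work by the schedule rule.
evt = datetime(2020, 3, 10, 18, 30)
print(after_hours_fixed(evt))  # True
print(after_hours_scheduled(evt, datetime(2020, 3, 10, 13, 0), datetime(2020, 3, 10, 19, 0)))  # False
```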
Our data highlighted that during a time of rapid telemedicine expansion, our providers' overall time in the EHR and hours spent working after hours greatly increased. As our health care system adjusted to the workflow shifts, however, these metrics normalized back to near baseline, underscoring the adaptability of providers faced with large increases in telemedicine utilization as the new normal. During the rapid expansion of telemedicine visits, the clinical informatics teams provided enhanced, incremental, and at-the-elbow support to our clinicians to guide them through this process as it became a larger component of their workload. Through expedited governance review and reprioritization of efforts, less critical Clinical Informatics work was deferred so that staff could focus on supporting providers during this time. This included enhanced resources for our provider-direct Clinical Informatics support telephone line, as well as more resources to troubleshoot and rapidly validate break-fix solutions when problems arose from the new workflows. Similar adjustments to governance to match accelerated response teams have been shown to be effective at facilities adapting to the COVID-19 pandemic.[24] [25] While not a direct goal of this study, this observation of metric normalization after major workflow transitions highlights that providers can return to their benchmarked efficiencies with appropriate clinical informatics support after a period of recovery.
These metrics were additionally able to highlight the disparate EHR utilization by physicians and APPs at our institution. Our institution has a robust and highly qualified cohort of specialized APPs whose skills are widely utilized in the care of our patients. Although small or subtle differences in scores between these groups could be explained by training or efficiency in using the EHR, the differences observed in nearly every category are quite large, highlighted by an EHR-Time8 score six times higher for APPs than for physicians. These data have helped quantify a true gap in EHR use burden in a setting where an attending staffs a patient with an APP. Previous studies have shown similar trends of increased time in the EHR for APPs compared with physicians.[26] Increased total time spent within an EHR has been shown to be directly related to increased clinician burnout,[3] and not surprisingly this concept applies to APP burnout as well.[27] While both groups in our study showed similar changes during the three observation periods, utilizing these data to understand the differing workflows of physicians and APPs can help target specific workflow gaps for future optimizations. These efforts can range from ensuring APPs are key stakeholders in design optimizations to ensuring robust Clinical Informatics support for APPs, with regular analysis of their EHR use and direct at-the-elbow follow-up to guide more efficient use of the system.
The implementation of these metrics has allowed for an ongoing review of all provider workflows as we continue to work through this health crisis and the ever-changing digital health care landscape. As with many analytics and optimization efforts in clinical informatics, resource constraints can be a significant barrier to implementation. Our institution was able to create these metrics with the support of a vendor-based analyst and approximately 120 hours of work. The bulk of the time investment came at the initiation of the project, in creating the formulas to calculate the scores. Once the formulas were created, capturing and analyzing the data on an ongoing basis required minimal effort. Despite the initial time burden of implementing the analytics tools, the ease of future use made the implementation not only feasible but also exceedingly valuable to our institution. These reports can now be reviewed by informaticists, allowing for improved ongoing workflow analysis without the need for resource expansion. Establishment of these metrics at our institution has also created broad-reaching opportunities for future research, with easily obtained and analyzed data that can be compared across institutions and vendors.
Our analysis was limited by our inability to collect the data needed to implement all seven metrics given our current system setup. Limitations regarding Script-Time8 arose from an inability to accurately define time triggers for prescriptions. In our EHR, time spent on different provider activities is delineated by Response Time Measurement System (RTMS) timers. Capturing these distinct time periods allows time spent on activities such as documentation and chart review to be separated. Our EHR is limited in that no RTMS timer trigger exists to capture and delineate time spent on prescriptions, making calculation of this metric impossible until an improved RTMS timer trigger can be designed. Regarding TWORD, our system encountered difficulties related to the calculation of the computerized provider order entry (CPOE) percentage per user. While the system can capture the order action and whether a co-signature is required, it cannot capture single CPOE orders previously signed by a provider and placed in a "future" status, nor does it account for orders from more complex order sets that providers previously signed. As these order types are major parts of the standard workflow at our institution, an accurate calculation of the metric could not be performed within the scope of the project. These barriers may be overcome in future iterations of the project as the EHR analytics evolve, but they could be observed at other institutions with similar workflow and RTMS limitations. There are also limitations that may be specific to a large oncology center, where providers may see patients in both ambulatory and inpatient settings on the same day. This can potentially skew some of the scores, such as the higher total EHR time scores we observed in groups such as Bone Marrow Transplant and Infectious Disease. We recognize that there are limitations to the interpretation of our data, including those imposed by the nature of clinical workflows; for example, assessing true scheduling data is complex when physicians and APPs can and do see patients not on their assigned schedule in the EHR. Our study was also conducted at a tertiary cancer center with large ambulatory volumes, which lent itself well to analyzing the effect of rapidly expanding virtual health visits. Due to social distancing restrictions during the COVID-19 peak, this study could not be conducted alongside an in-person validation method, such as a time-motion study. Despite these barriers, we feel that the remaining five metrics provided key insight into the current state, as well as into changes that take place during significant workflow shifts. While this study showed the feasibility of implementing these new metrics, future studies could incorporate in-person validation, as well as direct comparison with standard vendor-supplied metrics, to more robustly analyze the value of the new core metrics in workflow analysis.
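As a sketch of why a missing timer makes a metric incomputable, the example below (the event shape is our assumption; RTMS specifics vary by vendor configuration) rolls timer start/stop events into per-activity hours. An activity with no dedicated timer, such as prescriptions in our case, simply never accumulates time, so its per-8-hour score cannot be derived from the log.

```python
# Hypothetical roll-up of RTMS-style timer events into per-activity hours.
# The event fields ("timer", "start", "stop") are assumptions for illustration.
from collections import defaultdict
from datetime import datetime

def activity_hours(events):
    """Sum elapsed time per timer name across start/stop event pairs."""
    totals = defaultdict(float)
    for e in events:
        start = datetime.fromisoformat(e["start"])
        stop = datetime.fromisoformat(e["stop"])
        totals[e["timer"]] += (stop - start).total_seconds() / 3600
    return dict(totals)

hours = activity_hours([
    {"timer": "documentation", "start": "2020-03-02T09:00", "stop": "2020-03-02T09:20"},
    {"timer": "chart_review", "start": "2020-03-02T09:20", "stop": "2020-03-02T09:35"},
])
print(hours)  # {'documentation': 0.33..., 'chart_review': 0.25} -- no prescription
              # timer exists, so a Script-Time8 numerator never appears in the log
```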
Conclusion
This analysis helps illustrate that implementation of these novel core measures is feasible and has the potential to provide a more accurate representation of provider EHR workflow issues that may arise during and after implementations or other workflow-altering events. Barriers were identified in fully incorporating all seven measures, which will need to be addressed not only with our EHR's vendor-generated data but also in other EHRs, to help validate the broad-reaching implications of these metrics. Further multi-institutional implementation of these metrics will help evaluate these issues and further substantiate these core measures as a potential new gold standard in EHR provider workflow analytics, which could lead to advances in data analysis and research in the future.
Clinical Relevance Statement
Accurate metrics of provider activity within the EHR are critical to understanding workflow efficiency and targeting optimization initiatives. Implementing these novel log-based metrics can provide a more accurate and objective view of provider EHR activity and help identify workflow deficiencies and optimization targets. These metrics can be especially helpful for understanding the impact of the shift toward increased telemedicine utilization on provider EHR efficiency.
Multiple Choice Questions
1. What was one of the most significant barriers encountered while attempting to implement these core log-based metrics?

   a. Inability to delineate between provider types
   b. Difficulty obtaining the specific RTMS data needed
   c. Limitations in the analytics software used to calculate the scores
   d. Provider reluctance to participate in the data capture

   Correct Answer: The correct answer is option b, difficulty obtaining the specific RTMS data needed. Explanation: Not all of the core metrics were analyzable during this study. Time spent on prescriptions (Script-Time8) was difficult to delineate because no RTMS time trigger could be identified in the available log data to calculate the metric. This can be a common barrier when trying to accurately track time spent on specific tasks if there are no discrete markers of the start and end of each task.

2. What conclusion can be made regarding the metrics comparing physicians and advanced practice providers (APPs) in this study?

   a. APPs overall spent less time in the EHR compared with physicians
   b. APPs overall spent less time on documentation compared with physicians
   c. Physicians spent less total time in the EHR compared with APPs
   d. Physicians spent more time on documentation compared with APPs

   Correct Answer: The correct answer is option c, physicians spent less total time in the EHR compared with APPs. Explanation: Analyzing metrics based on log data can highlight and quantify significant workflow gaps between groups of providers. In this study, although both physicians and APPs were affected by the changes in workflow due to COVID-19, the overall burden of EHR use was not distributed equally between the groups. Analyzing log data across provider groups can help target specific workflow gaps for future optimizations, including ensuring that the most burdened groups are key stakeholders in design optimizations.
Conflict of Interest
None declared.
Protection of Human and Animal Subjects
Human and/or animal subjects were not included in the project.
References
- 1 Duncan BJ, Zheng L, Furniss SK. et al. In search of vital signs: a comparative study of EHR documentation. AMIA Annu Symp Proc 2018; 2018: 1233-1242
- 2 Murphy DR, Giardina TD, Satterly T, Sittig DF, Singh H. An exploration of barriers, facilitators, and suggestions for improving electronic health record inbox-related usability: a qualitative analysis. JAMA Netw Open 2019; 2 (10) e1912638
- 3 Furlow B. Information overload and unsustainable workloads in the era of electronic health records. Lancet Respir Med 2020; 8 (03) 243-244
- 4 Grando A, Manataki A, Furniss SK. et al. Multi-method study of electronic health records workflows. AMIA Annu Symp Proc 2018; 2018: 498-507
- 5 Huang MZ, Gibson CJ, Terry AL. Measuring electronic health record use in primary care: a scoping review. Appl Clin Inform 2018; 9 (01) 15-33
- 6 Office of the National Coordinator for Health Information Technology. Certified health IT developers and editions reported by health care professionals participating in the Medicare EHR Incentive Program. Published 2017. Accessed October 5, 2020 at: http://dashboard.healthit.gov/quickstats/pages/FIG-Vendors-of-EHRs-to-Participating-Professionals.php
- 7 Nair S, Hsu D, Celi LA. Challenges and opportunities in secondary analyses of electronic health record data. In: MIT Critical Data. Secondary Analysis of Electronic Health Records. Cham: Springer; 2016: 17-26
- 8 D'Amore J, Bouhaddou O, Mitchell S. et al. Interoperability progress and remaining data quality barriers of certified health information technologies. AMIA Annu Symp Proc 2018; 2018: 358-367
- 9 Sinsky CA, Rule A, Cohen G. et al. Metrics for assessing physician activity using electronic health record log data. J Am Med Inform Assoc 2020; 27 (04) 639-643
- 10 DiAngi YT, Lee TC, Sinsky CA, Bohman BD, Sharp CD. Novel metrics for improving professional fulfillment. Ann Intern Med 2017; 167 (10) 740-741
- 11 Rao G, Singh A, Gandhotra P. et al. Paradigm shifts in cardiac care: lessons learned from COVID-19 at a large New York health system. Curr Probl Cardiol 2021; 46 (03) 100675
- 12 Weinberg MS, Patrick RE, Schwab NA. et al. Clinical trials and tribulations in the COVID-19 era. Am J Geriatr Psychiatry 2020; 28 (09) 913-920
- 13 Hron JD, Parsons CR, Williams LA, Harper MB, Bourgeois FC. Rapid implementation of an inpatient telehealth program during the COVID-19 pandemic. Appl Clin Inform 2020; 11 (03) 452-459
- 14 Telehealth.HHS.Gov. Policy changes during the COVID-19 Public Health Emergency. Health Resources and Services Administration. Published 2020. Accessed October 5, 2020 at: https://telehealth.hhs.gov/providers/policy-changes-during-the-covid-19-public-health-emergency/
- 15 Altman RL, Anstett T, Simpson JR, Del Pino-Jones A, Lin CT, Pell J. Ambulatory clinician's guide to inpatient service: an innovative rapid onboarding strategy for the COVID-19 pandemic. Appl Clin Inform 2020; 11 (05) 802-806
- 16 Bokolo AJ. Application of telemedicine and eHealth technology for clinical services in response to COVID-19 pandemic. Health Technol (Berl) 2021; DOI: 10.1007/s12553-020-00516-4.
- 17 Monaghesh E, Hajizadeh A. The role of telehealth during COVID-19 outbreak: a systematic review based on current evidence. BMC Public Health 2020; 20 (01) 1193
- 18 Zoom Video Conferencing Platform [computer program]. 2020
- 19 Mehrotra A, Ray K, Brockmeyer DM, Barnett ML, Bender JA. Rapidly converting to “virtual practices”: outpatient care in the era of Covid-19. NEJM Catal 2020; DOI: 10.1056/CAT.20.0091.
- 20 Mann DM, Chen J, Chunara R, Testa PA, Nov O. COVID-19 transforms health care through telemedicine: evidence from the field. J Am Med Inform Assoc 2020; 27 (07) 1132-1135
- 21 Drees J. Moffitt Cancer Center's virtual visits up 5,000% in response to COVID-19. Published 2020. Accessed October 5, 2020 at: https://www.beckershospitalreview.com/telehealth/moffitt-cancer-center-s-virtual-visits-up-5-000-in-response-to-covid-19.html
- 22 Tableau. Version 2020.3.1. Accessed October 5, 2020 at: https://www.tableau.com/support/releases/desktop/2020.3.1
- 23 Middleton B, Bloomrosen M, Dente MA. et al; American Medical Informatics Association. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc 2013; 20 (e1): e2-e8
- 24 Grange ES, Neil EJ, Stoffel M. et al. Responding to COVID-19: The UW Medicine Information Technology Services experience. Appl Clin Inform 2020; 11 (02) 265-275
- 25 Knighton AJ, Ranade-Kharkar P, Brunisholz KD. et al. Rapid implementation of a complex, multimodal technology response to COVID-19 at an integrated community-based health care system. Appl Clin Inform 2020; 11 (05) 825-838
- 26 McPeek-Hinz E, Boazak M, Sexton JB. et al. Clinician burnout associated with sex, clinician type, work culture, and use of electronic health records. JAMA Netw Open 2021; 4 (04) e215686
- 27 Micek MA, Arndt B, Tuan W-J. et al. Physician burnout and timing of electronic health record use. ACI Open 2020; 04 (01) e1-e8
Publication History
Received: 25 January 2021
Accepted: 29 May 2021
Article published online: 14 July 2021
© 2021. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany