DOI: 10.1055/s-0042-1744388
Design, Usability, and Acceptability of a Needs-Based, Automated Dashboard to Provide Individualized Patient-Care Data to Pediatric Residents
- Abstract
- Background and Significance
- Objectives
- Methods
- Results
- Discussion
- Conclusion
- Clinical Relevance Statement
- Multiple Choice Questions
- References
Abstract
Background and Objectives Pediatric residency programs are required by the Accreditation Council for Graduate Medical Education to provide residents with patient-care and quality metrics to facilitate self-identification of knowledge gaps to prioritize improvement efforts. Trainees are interested in receiving this data, but this is a largely unmet need. Our objectives were to (1) design and implement an automated dashboard providing individualized data to residents, and (2) examine the usability and acceptability of the dashboard among pediatric residents.
Methods We developed a dashboard containing individualized patient-care data for pediatric residents, with emphasis on needs identified by residents and residency leadership. To build the dashboard, we created a connection from a clinical data warehouse to data visualization software. We allocated patients to residents based on note authorship and created individualized reports with masked identities that preserved anonymity. After development, we conducted usability and acceptability testing with 11 resident users using a mixed-methods approach. We conducted interviews and anonymous surveys evaluating the technical features of the application, its ease of use, and users' attitudes toward using the dashboard. Categories and subcategories from usability interviews were identified using a content analysis approach.
Results Our dashboard provides individualized metrics including diagnosis exposure counts, procedure counts, efficiency metrics, and quality metrics. In content analysis of the usability testing interviews, the most frequently mentioned use of the dashboard was to aid a resident's self-directed learning. Residents had few concerns about the dashboard overall. Surveyed residents found the dashboard easy to use and expressed intention to use the dashboard in the future.
Conclusion Automated dashboards may be a solution to the current challenge of providing trainees with individualized patient-care data. Our usability testing revealed that residents found our dashboard to be useful and that they intended to use this tool to facilitate development of self-directed learning plans.
Keywords
data visualization - interface and usability - dashboard - testing and evaluation - graduate medical education - quality improvement
Background and Significance
The Accreditation Council for Graduate Medical Education (ACGME) requires programs to provide residents with patient-care and quality metrics to self-reflect and identify areas for improvement.[1] [2] [3] The inclusion of competency milestones that emphasize iterative improvement is reflective of an increasing emphasis on objective quality metrics by health care organizations worldwide. Increasingly, health care institutions are using quality dashboards which allow providers to track their performance on key quality metrics.[4] These types of dashboards have been shown to improve adherence to quality guidelines and patient outcomes.[4]
Previous work has shown that trainees are interested in receiving patient-care data in the form of individualized case logs and other rotation-specific quality metrics.[5] [6] Despite accrediting body requirements, increasing prevalence of institutional quality dashboards, and trainee desire for personalized performance data, only a few studies exist among procedural and radiological specialties which discuss dashboard development for automated case-logging and tracking.[6] [7] [8] [9] Even fewer studies describe the creation of dashboards that provide quality metrics for trainees.[7] [8] While there are two studies about the use of automated case logs in pediatrics (one for aggregate pediatric residency data and the other for pediatric emergency medicine fellows),[9] [10] to our knowledge there are no studies or descriptions of a dashboard that provides individualized, rotation-specific automated case logs and quality metrics to pediatric residents.
Objectives
We aimed to (1) design and implement a real-time automated dashboard providing meaningful individualized patient-care data to pediatric residents, and (2) examine the usability and acceptability of the dashboard among pediatric residents.
Methods
Study Design
This was a mixed methods study of an educational innovation conducted at a pediatric tertiary care center from February 2020 to April 2021. The educational innovation consisted of the development of a real-time automated dashboard containing individualized patient-care data. After design and development of the dashboard, we conducted preliminary validation followed by formal usability and acceptability testing with resident users. Our institutional review board reviewed and approved this study.
Study Setting and Participants
The newly developed dashboard provides residents with patient-care data from their time on the pediatric hospital medicine (PHM) rotation. The PHM inpatient service is a core requirement of pediatric residency training, and typically provides residents with broad general pediatrics exposure to common inpatient diagnoses (e.g., asthma, pneumonia, bronchiolitis, etc.).[11] [12] Pediatric residents at our institution typically complete three 4-week blocks on the PHM service in their intern year (postgraduate year 1 [PGY1]), as well as one supervisory block during their third year of residency (PGY3). Eleven pediatric residents participated in usability and acceptability testing.
Dashboard Design and Data Sources
The dashboard design team consisted of a database programmer, four pediatric hospitalists with expertise in dashboard design, quality metrics, and the electronic health record (EHR), and one of the pediatric residency program's Associate Program Directors. These team members were involved in all parts of dashboard development, including conceptualization, metric selection, visualization design, and the study of the dashboard after implementation. The project team collaborated regularly with pediatric residency program leadership throughout the development process.
Development of the dashboard was informed by an institutional needs assessment, consisting of an anonymous, voluntary survey distributed to all residents. The survey elicited resident attitudes toward the types of feedback and patient-care data currently provided and asked residents to indicate which types of data would be most meaningful for engaging in critical self-reflection on their patient-care practices. The dashboard metrics residents were most interested in included counts of rotation-specific “core-competency” diagnoses (e.g., asthma, bronchiolitis, pneumonia), procedure counts (e.g., counts of procedures performed by that trainee), basic quality metrics (e.g., adherence to guidelines, length of stay, readmission rates), and efficiency metrics (e.g., count of patient encounters per shift).
To build the dashboard, we queried an enterprise data warehouse (Health Catalyst, Salt Lake City, Utah, United States) populated with data from our EHR (Epic, Epic Systems Corporation, Verona, Wisconsin, United States). We created a real-time connection to visual analytics software QlikSense (Qlik Technologies Inc, King of Prussia, Pennsylvania, United States).[13] We allocated patients to trainees based on note authorship. We included any resident who signed the note as a note author, which means that if both the intern and upper-level resident signed a note, then that patient note would be attributed to both of them. Standard practice at our institution is for residents to sign notes only if they have physically examined the patient and have thus been directly involved in their care. Different metrics referred to different note types based upon authors' consensus on the most clinically relevant operational definition of that metric ([Table 1]). For example, readmission rates were only calculated for patients discharged by each individual resident (e.g., patients for whom that resident signed the discharge summary).
| Metric | Description | Notes used for attribution to resident |
|---|---|---|
| Antibiotic use in bronchiolitis | Patients with a diagnosis of bronchiolitis who have an antibiotic order after an admission order is placed | History and physical note |
| Broad spectrum antibiotic use in community-acquired pneumonia | Distribution of class of antibiotic orders (e.g., penicillins, cephalosporins) after an admission order is placed in patients admitted with uncomplicated community-acquired pneumonia | History and physical note |
| Length of stay | Length of stay by broad diagnostic categories (e.g., gastroenteritis, sepsis) compared with PHM median for each category | Discharge summary |
| Readmission rate | 30-day readmission rate compared with PHM median | Discharge summary |
| Rapid response by diagnosis | Rapid response count by broad diagnostic categories (e.g., gastroenteritis, sepsis) | All notes[a] |
| Rapid response by month | Rapid response count compared with the resident's total count of patient encounters by month | All notes[a] |
Abbreviation: PHM, pediatric hospital medicine.
Note: This table describes the quality metrics and operational definitions that were displayed on the dashboard for each resident. We also listed which notes were used to attribute these metrics to residents, recognizing that no attribution system is perfect.
a The resident may not have been personally involved with the rapid response but we felt if they had written a note on the patient this represented a significant level of involvement in that patient's care.
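To make the attribution logic concrete, below is a minimal sketch of how signed notes could be translated into resident-level attributions with metric-specific note-type filters, as in [Table 1]. This is an illustration only: the table and column names are hypothetical, and the production pipeline runs in the enterprise data warehouse and QlikSense rather than in Python.

```python
import pandas as pd

# Hypothetical extract from the data warehouse: one row per signed note.
# Every co-signer is listed, so an intern and an upper-level resident who
# both sign a note each receive credit for that encounter.
notes = pd.DataFrame({
    "patient_id":   [101, 101, 102, 103],
    "encounter_id": [1, 1, 2, 3],
    "note_type":    ["History and Physical", "Progress Note",
                     "Discharge Summary", "History and Physical"],
    "signer_id":    ["res_A", "res_B", "res_A", "res_B"],
})

# Note types that count toward each metric (mirrors Table 1).
# None means every note type counts (e.g., rapid response metrics).
METRIC_NOTE_TYPES = {
    "antibiotic_use_in_bronchiolitis": {"History and Physical"},
    "readmission_rate": {"Discharge Summary"},
    "rapid_response": None,
}

def attributed_encounters(metric: str) -> pd.DataFrame:
    """Return the (resident, encounter) pairs attributed for a given metric."""
    allowed = METRIC_NOTE_TYPES[metric]
    subset = notes if allowed is None else notes[notes["note_type"].isin(allowed)]
    return subset[["signer_id", "encounter_id"]].drop_duplicates()

# Example: readmission rates are computed only for patients a resident discharged.
print(attributed_encounters("readmission_rate"))
```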
To build the core-competency diagnostic counts identified as a priority in our needs assessment, we used core-competency diagnoses derived from a previously published list of PHM Core Competencies endorsed by the Society of Hospital Medicine and the Academic Pediatric Association.[14] These core competencies were slightly modified to reflect the patient populations cared for by our institution's PHM teams. We created individualized core-competency diagnostic counts for PHM by assigning all relevant International Classification of Diseases, Tenth Revision (ICD-10) codes to each core-competency diagnosis.[15] For example, we identified all ICD-10 codes that referred to pneumonia and mapped them to a “Pneumonia” core competency.
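As an illustration of this mapping step, ICD-10 codes can be grouped under a core-competency label by code prefix. The mapping below is an abbreviated, hypothetical example; the actual mapping used in the dashboard enumerates many more codes and competencies.

```python
from typing import Optional

# Abbreviated, illustrative mapping of ICD-10 prefixes to PHM core competencies.
CORE_COMPETENCY_PREFIXES = {
    "Pneumonia": ("J12", "J13", "J14", "J15", "J18"),
    "Bronchiolitis": ("J21",),
    "Asthma": ("J45",),
}

def core_competency(icd10_code: str) -> Optional[str]:
    """Return the core-competency label for an ICD-10 code, or None if unmapped."""
    for label, prefixes in CORE_COMPETENCY_PREFIXES.items():
        if icd10_code.startswith(prefixes):  # str.startswith accepts a tuple of prefixes
            return label
    return None

assert core_competency("J18.9") == "Pneumonia"      # pneumonia, unspecified organism
assert core_competency("J21.0") == "Bronchiolitis"  # acute bronchiolitis due to RSV
assert core_competency("R50.9") is None             # fever code not in this abbreviated map
```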
The needs assessment additionally identified that pediatric residents were most interested in metrics where they felt that they had sufficient decision-making responsibility. Quality of care metrics were selected based upon previously proposed PHM-related quality indicators, with careful emphasis placed on metrics where pediatric residents would likely feel a sense of ownership.[16] [17]
When planning this dashboard with residency leadership, a key objective was to preserve resident anonymity while still providing individualized data to each resident; without some form of masking, residents would be able to view each other's metrics. To prevent this, we created anonymous fictional character names that were linked to each resident provider. For example, resident Jane Doe would be assigned the character name Frodo Baggins. In the dashboard, all of her data would appear under Frodo Baggins, but only Jane Doe would know that this represents her patient-care data.
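A minimal sketch of how such an alias layer might be maintained is shown below. The names and identifiers are hypothetical; in practice the resident-to-alias lookup is stored separately from the data loaded into the dashboard, and each resident is told only their own alias.

```python
import secrets

# Pool of fictional character names used as on-dashboard identities.
ALIAS_POOL = ["Frodo Baggins", "Hermione Granger", "Atticus Finch", "Leia Organa"]

def assign_aliases(resident_ids: list[str]) -> dict[str, str]:
    """Randomly pair each resident with a unique fictional alias."""
    if len(resident_ids) > len(ALIAS_POOL):
        raise ValueError("Alias pool is smaller than the number of residents")
    shuffled = ALIAS_POOL.copy()
    secrets.SystemRandom().shuffle(shuffled)  # cryptographically strong shuffle
    return dict(zip(resident_ids, shuffled))

# Only the alias ever appears on the dashboard; this mapping stays private.
print(assign_aliases(["jane.doe", "john.roe"]))
```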
After preliminary design of the dashboard, initial alpha testing was conducted by the project team and one outside user with expertise in dashboard development. Dashboard data were validated against manual queries from the enterprise data warehouse. Preliminary data validation was conducted by two authors (J.Y. and J.W.) and included sampling charts from 30 randomly selected residents (10 residents per year for 3 years). For each selected resident, at least three patient records were reviewed, for a total of approximately 90 patient records. Additional data validation was conducted by manually reviewing 50 randomly selected patient charts to ensure that notes and procedures were attributed appropriately to residents and that none were missed. Errors were identified in both the attribution logic (e.g., a type of discharge summary was not included in our initial query) and the diagnostic mapping to core-competency counts (e.g., Streptococcus meningitis mapping to community-acquired pneumonia). These errors were fixed at the time of identification, and data validation then continued as described above.
Usability and Acceptability Testing
Usability and acceptability testing was conducted with a small group of volunteer resident users using a mixed methods approach. Informed consent was obtained from all participants prior to participation. Think-aloud interviews ([Supplementary Appendix A], available in the online version) and anonymous surveys ([Supplementary Appendix B], available in the online version) asked residents about the technical features of the dashboard application, its ease of use, and their attitudes toward and intention of using the dashboard in the future.
In think-aloud interviews, users were instructed to verbalize their experience as they navigated through the dashboard. One author (J.Y.) continuously monitored the screen throughout the user's session. Users' sessions lasted between 30 minutes and 1 hour. The users were assigned three tasks which encompassed three primary functionalities of the dashboard: (1) identifying the user's three least frequent core-competency diagnoses, (2) identifying the user's average number of encounters per day worked, and (3) identifying the user's most commonly prescribed antibiotic for patients they admitted with community-acquired pneumonia (e.g., patients for whom they wrote a History and Physical Note). Once the participants had completed these three tasks, they were asked to summarize their attitudes and perceptions of the dashboard application. This portion of the interview used a guide developed by the research team. Questions assessed users' perceptions of usefulness, ease of use, attitude toward using, and intention to use the dashboard. The interviewer prompted further explanation as needed. Interview times ranged between 30 and 45 minutes. All interviews were conducted by the same team member (J.Y.) to ensure standardization of the interview process. All interviews were audio recorded and transcribed verbatim. Interviews were conducted until thematic saturation was achieved during data analysis.
Participants were additionally asked to complete an anonymous survey based on the Technology Acceptance Model as a framework.[18] Participants were emailed a link to the survey after the interview, and informed that participation in the survey was voluntary. This survey consisted of 15 Likert-scaled items assessing the perceived usefulness, ease of use, attitude toward using, and intention to use the dashboard. Questions were very similar to the semistructured interview but provided participants anonymity to minimize bias.
Analysis
Categories from usability interviews were identified using a content analysis approach. Two authors (J.Y. and L.H.) independently coded participant responses and subdivided responses into categories and subcategories. Disagreement was rare, but when it occurred, the authors referred to the original transcript to clarify the participants' meaning. After initial categorization, the authors confirmed that the selected quotes were most representative of each category. Commonly mentioned suggestions for improvement were identified, and changes were made to the dashboard design. All survey data were analyzed using Microsoft Excel.[19]
Results
Tool Description
In response to resident survey results, we developed a dynamic, automated dashboard that provides individualized, resident-specific patient-care and quality metrics. The dashboard is refreshed with new data nightly. Every visualization in the tool is interactive, so users can manipulate visualizations to explore in further detail. For example, if a resident is interested in which specific ICD-10 codes are captured in a particular diagnostic category as shown in [Fig. 1], they can select that category and a tree-map will filter to show which specific diagnoses are included and their relative frequency.

Our dashboard is made up of four pages: Core-Competency Counts, Demographics, Quality Metrics, and Productivity and Efficiency Metrics. The Core-Competency Counts dashboard page ([Fig. 1]) provides individualized core-competency diagnosis counts compared with the rolling average for pediatric interns over the last three academic years. The Demographics dashboard page ([Fig. 2]) provides a basic overview of the age, ethnicity, language, and home city and nation of all patients a resident has cared for. The Quality Metrics dashboard page ([Fig. 3]) includes the rate of antibiotic prescriptions in patients admitted with a diagnosis of bronchiolitis (a viral illness for which antibiotics are typically not indicated), the most frequently prescribed antibiotics in patients admitted for community-acquired pneumonia, the frequency of rapid response calls (the mechanism for intensive care evaluation and transfer), length of stay by diagnosis, and readmission rates. Finally, the Productivity and Efficiency Metrics dashboard page ([Fig. 4]) includes a resident's total number of patient encounters, unique patient encounters, average patient encounters per day, counts of different note types, and counts of procedure notes, all compared with the average pediatric intern over the last three academic years, as previously described.
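As a rough illustration of the peer-comparison logic described above, the sketch below computes one resident's core-competency counts against the mean count per peer over prior academic years. The column names and data are hypothetical, and the dashboard itself performs this comparison in QlikSense rather than Python.

```python
import pandas as pd

# Hypothetical long-format extract: one row per (resident, academic year, diagnosis).
counts = pd.DataFrame({
    "resident":  ["r1", "r2", "r3", "r1", "me"],
    "acad_year": [2018, 2018, 2019, 2020, 2020],
    "diagnosis": ["Pneumonia", "Pneumonia", "Asthma", "Pneumonia", "Pneumonia"],
    "n":         [6, 4, 3, 5, 2],
})

def peer_comparison(df: pd.DataFrame, resident: str, years: tuple) -> pd.DataFrame:
    """Compare one resident's diagnosis counts with the mean count per peer over given years."""
    peers = df[df["acad_year"].isin(years) & (df["resident"] != resident)]
    peer_mean = (peers.groupby("diagnosis")["n"].sum()
                 / peers["resident"].nunique()).rename("peer_mean")
    mine = df[df["resident"] == resident].groupby("diagnosis")["n"].sum().rename("my_count")
    return pd.concat([mine, peer_mean], axis=1).fillna(0)

print(peer_comparison(counts, "me", (2018, 2019, 2020)))
```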
Usability and Acceptability Testing Results
Eleven resident users were selected to participate on a first-come, first-served basis. Of these, four were pediatric interns (PGY1), four were second-year residents (PGY2), and three were third-year residents (PGY3). Several changes were made based on resident input to improve the ease of use and comprehensibility of the dashboard. First, many residents had trouble navigating between pages of the dashboard. To address this, we emphasized page navigation in a brief introductory video that is linked from the main page of the dashboard. Second, many residents wanted more information about how each metric was calculated. To quickly orient them, we added explanatory text descriptions to most visualizations and added a detailed documentation page containing in-depth explanations of how specific metrics were calculated. Finally, residents repeatedly asked for more peer comparison data on many of the visualizations, so we added this type of comparative data wherever possible (see [Fig. 4] for an example).
Content Analysis Results
In semistructured interviews, the most frequently mentioned proposed usage of the dashboard was its utility for a resident's self-directed learning ([Table 2]). Specifically, 10 out of 11 residents mentioned that they would like to use the core-competency diagnostic counts to review their current diagnostic exposure and seek out learning opportunities for less frequently encountered diagnoses. The most commonly suggested changes were to add more peer comparison data for the productivity and efficiency metrics and to increase the amount of patient-level data that was provided for quality metrics. Residents overall had few concerns or fears about use or implementation of the dashboard, but 3 of 11 residents mentioned that they felt that certain quality metrics are not reflective of decisions made by the individual resident. For example, length of stay may reflect attendings' decisions regarding discharge timing more than the actions of the individual discharging resident. Finally, residents' preferred setting and frequency of dashboard use did vary slightly. Most residents (7 out of 11) indicated that they would likely refer to the dashboard once or twice per PHM rotation. Similarly, most (8 out of 11) felt comfortable reviewing the dashboard with a residency leader or advisor, and most (9 out of 11) would feel comfortable sharing the dashboard results with peers, upper-level residents, or mentors.
Abbreviations: COVID, coronavirus disease; MRN, medical record number; PHM, pediatric hospital medicine.
Note: This table categorizes resident responses in the semistructured interviews (n = 11). The subcategories are sorted from most frequently mentioned to least frequently mentioned. Illustrative quotes for each subcategory are included, which have been edited for brevity and clarity.
Survey Results
All 11 residents who participated in usability and acceptability testing also completed an anonymous survey. Surveyed users overall found the dashboard useful and easy to use, had a positive attitude toward using, and expressed intention to use the dashboard in the future ([Table 3]). Most encouragingly, 100% of users surveyed “strongly agreed” that the dashboard was useful. When asked how likely they were to recommend the dashboard to a coresident, the average response was 96% (on a scale of 0–100%). Comments in the survey closely aligned with those expressed verbally during the semistructured interview process.
Abbreviations: ATU, attitude toward using; IU, intention to use; PEU, perceived ease of use; PU, perceived usefulness.
Note: This table summarizes resident responses to an anonymous survey based on the technology acceptance model. Eleven resident users completed the survey.
4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree.
Discussion
The purpose of this study was to assess the feasibility, acceptability, and usability of an automated dashboard that provides pediatric residents with individualized patient-care and quality metrics. We believe this is the first study to describe the creation of a dashboard that provides pediatric resident users with these types of metrics. Not only is the provision of patient-care and quality metrics required by accrediting bodies, but residents themselves desire more objective data as evidenced by the results of our needs assessment and prior studies.[1] [2] [6] In our semistructured interviews, residents repeatedly mentioned that this type of patient-care data would allow them to critically review their practice patterns and would be helpful in developing their individualized learning plan. Nearly all surveyed residents indicated a desire to use the core-competency diagnostic counts to help prioritize their learning efforts, especially with regards to directing their future patient-care encounters or electives. Several residents mentioned that this is the first objective data they have been provided by the residency program. Interestingly, some residents also commented on finding the metrics overall reassuring and described this type of data as being useful to combat “imposter syndrome.” Imposter syndrome is a common phenomenon among residents where one has a persistent fear of being inadequate, and has been shown to be a major contributor to burnout among physicians and trainees.[20] [21] Provision of this type of objective data may also help overcome the previously well-documented racial and gender bias in performance evaluation in medicine,[22] [23] [24] which was also mentioned by one resident tester.
Regarding usability and acceptability, survey and interview results indicate that residents overall had very positive experiences when using this dashboard. Residents rated the dashboard highly regarding ease-of-use and usefulness, with a positive attitude toward using it. They universally indicated that they would like to use the dashboard regularly in the future and would strongly recommend use of the dashboard to their coresidents. Prior studies have described barriers in developing dashboards for use by trainees, including challenges with patient attribution which can lead trainees to feel that metrics are not as meaningful.[5] [25] [26] [27] [28] In our study, resident users seemed overall to understand the limitations of the dashboard, but similarly reported that some metrics were less meaningful on an individual basis due to patient attribution limitations. Resident comments indicated that they felt some metrics were more reflective of decisions made by the care team rather than an individual, which is consistent with findings of other studies regarding the challenges of creating resident-specific performance metrics.[5] [26] [29] Another study has prioritized a list of resident-specific quality metrics which could mitigate this issue, but these metrics primarily focused on content captured within resident documentation (e.g., work of breathing or response to therapy documentation).[30] While these metrics would be very specific to the work of an individual resident, these data are very challenging to integrate into an automated tool without sophisticated natural language processing, so we were not able to include these metrics in this iteration of our dashboard.
Limitations
There are several limitations of this study. First, this was a pilot study, so our dashboard only provides data for patient encounters during residents' PHM rotation, which accounts for 4 months in a typical 36-month residency. Furthermore, our patient attribution was based exclusively on note writing, which may not perfectly reflect all patients cared for by a resident.[25] [28] A resident may have participated meaningfully in the care of a patient, but if no note was written (for example, if care occurred overnight) then this would not be captured by the dashboard. Additionally, at our institution upper-level residents (PGY2 and above) typically sign fewer notes than interns on this rotation, which makes the results of some parts of the dashboard less relevant as residents advance in their training.
In terms of core-competency diagnoses, we used ICD-10 codes to quantify diagnoses, which may not always accurately reflect a patient's true diagnosis for several reasons: the patient may have many diagnoses, not all of which were entered into the EHR by the care team; no appropriate ICD-10 code may exist; or the patient may have been admitted with generic symptoms (e.g., fever) and the ICD-10 code was not updated after a diagnosis was made.
With regards to usability testing, we conducted this testing with 11 resident users distributed across years of training, but this is subject to sampling bias since we invited interested volunteers to pilot our tool. Additionally, social desirability bias may have impacted user responses during their usability testing and interview since they were being observed by a member of the developer team. The subsequent anonymous survey was administered to attempt to minimize this bias, and survey results were very similar to responses given during usability interviews.
Finally, creating this type of dashboard is labor intensive and requires institutional technical support and significant technical expertise, which may make it challenging to implement a similar tool at a smaller program or a program with fewer resources.
Future Directions
Following the successful implementation of the dashboard for the PHM rotation at our institution, we plan to expand the included resident rotations to capture the broader experience of pediatric residents. The most popularly requested areas for expansion included the emergency room and the intensive care unit. Once additional pediatric residency rotations are included in the dashboard, we would like to incorporate routinely scheduled review of dashboard metrics into residency feedback and mentorship sessions. We believe that this type of patient-care and quality data could not only be used by residents to direct their learning efforts, but in the future could also be utilized by program directors in designing and evaluating their residency structure and by the ACGME for program oversight to ensure that trainees are receiving the breadth and depth of experiences for adequate training. More study is needed to determine whether such dashboards will lead to an impact on resident quality-of-care metrics or breadth of clinical exposure.
Conclusion
We describe a unique solution to currently existing gaps in pediatric residency programs' ability to provide personalized, objective, and readily available patient-care and quality data to residents. By capitalizing on EHR and analytics capabilities, residency programs can develop automated dashboards capable of providing trainees with meaningful data regarding their patient care.
Clinical Relevance Statement
Despite the ACGME's requirements to provide residents with individualized performance data and quality metrics, there is very little research on this topic. Our article describes a way to create and display important individualized patient-care data to pediatric residents in an automated manner. Our results will help guide other residency training programs as they consider the types of data that they wish to provide to pediatric residents.
Multiple Choice Questions
1. Which type of testing is conducted by the development team prior to testing by end users?

   a. Stress testing
   b. Performance testing
   c. Beta testing
   d. Alpha testing

   Correct Answer: The correct answer is option d. Alpha testing occurs when the internal development team tests the product before either usability testing or other testing by end users.

2. What type of testing occurs when you are asking users to try and complete typical tasks of a newly developed tool while observers watch and take notes?

   a. Sanity testing
   b. Usability testing
   c. Integration testing
   d. Acceptance testing

   Correct Answer: The correct answer is option b. Usability testing is described here, where the goal is to have end users walk through typical use scenarios while observers collect information to identify any usability problems prior to full deployment.
Conflict of Interest
None declared.
Acknowledgments
We would like to acknowledge the Texas Children's Hospital Information Services department for their generous support of this project with both technical resources and technician time and guidance, without whom this project would not have been possible.
Protection of Human and Animal Subjects
Our institutional review board reviewed and approved this study.
References
- 1 Accreditation Council for Graduate Medical Education. Common Program Requirements. Accessed January 13, 2021 at: https://www.acgme.org/What-We-Do/Accreditation/Common-Program-Requirements
- 2 Swing SR. The ACGME outcome project: retrospective and prospective. Med Teach 2007; 29 (07) 648-654
- 3 Accreditation Council for Graduate Medical Education. Pediatrics Milestones. Accessed November 23, 2020 at: https://www.acgme.org/portals/0/pdfs/milestones/pediatricsmilestones.pdf
- 4 Dowding D, Randell R, Gardner P. et al. Dashboards for improving patient care: review of the literature. Int J Med Inform 2015; 84 (02) 87-100
- 5 Rosenbluth G, Tong MS, Condor Montes SY, Boscardin C. Trainee and program director perspectives on meaningful patient attribution and clinical outcomes data. J Grad Med Educ 2020; 12 (03) 295-302
- 6 Wright SM, Durbin P, Barker LR. When should learning about hospitalized patients end? Providing housestaff with post-discharge follow-up information. Acad Med 2000; 75 (04) 380-383
- 7 Ehrenfeld JM, McEvoy MD, Furman WR, Snyder D, Sandberg WS. Automated near-real-time clinical performance feedback for anesthesiology residents: one piece of the milestones puzzle. Anesthesiology 2014; 120 (01) 172-184
- 8 Wheeler K, Baxter A, Boet S, Pysyk C, Bryson GL. Performance feedback in anesthesia: a post-implementation survey. Can J Anaesth 2017; 64 (06) 681-682
- 9 Levin JC, Hron J. Automated reporting of trainee metrics using electronic clinical systems. J Grad Med Educ 2017; 9 (03) 361-365
- 10 Bachur RG, Nagler J. Use of an automated electronic case log to assess fellowship training: tracking the pediatric emergency medicine experience. Pediatr Emerg Care 2008; 24 (02) 75-82
- 11 Accreditation Council for Graduate Medical Education (ACGME). ACGME Program Requirements for Graduate Medical Education in Pediatrics. Published online July 1, 2019. Available at: https://www.acgme.org/globalassets/pfassets/programrequirements/320_pediatrics_2021v2.pdf
- 12 Leyenaar JK, Ralston SL, Shieh MS, Pekow PS, Mangione-Smith R, Lindenauer PK. Epidemiology of pediatric hospitalizations at general hospitals and freestanding children's hospitals in the United States. J Hosp Med 2016; 11 (11) 743-749
- 13 Qlik Sense [Computer Software]. Version 3.1. King of Prussia, PA: Qlik; 2020
- 14 Stucky ER, Ottolini MC, Maniscalco J. Pediatric hospital medicine core competencies: development and methodology. J Hosp Med 2010; 5 (06) 339-343
- 15 World Health Organization. ICD-10: International Statistical Classification of Diseases and Related Health Problems: Tenth Revision. World Health Organization; 2004. Accessed February 19, 2021 at: https://apps.who.int/iris/handle/10665/42980
- 16 Shen MW, Percelay J. Quality measures in pediatric hospital medicine: Moneyball or looking for Fabio?. Hosp Pediatr 2012; 2 (03) 121-125
- 17 Parikh K, Hall M, Mittal V. et al. Establishing benchmarks for the hospitalized care of children with asthma, bronchiolitis, and pneumonia. Pediatrics 2014; 134 (03) 555-562
- 18 Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. Manage Inf Syst Q 1989; 13 (03) 319-340
- 19 Microsoft Excel for Mac [Computer Software]. Version 16.57. Redmond, WA: Microsoft Corporation; 2020
- 20 Mullangi S, Jagsi R. Imposter syndrome: treat the cause, not the symptom. JAMA 2019; 322 (05) 403-404
- 21 Gottlieb M, Chung A, Battaglioli N, Sebok-Syer SS, Kalantari A. Impostor syndrome among physicians and physicians in training: a scoping review. Med Educ 2020; 54 (02) 116-124
- 22 Liebschutz JM, Darko GO, Finley EP, Cawse JM, Bharel M, Orlander JD. In the minority: black physicians in residency and their experiences. J Natl Med Assoc 2006; 98 (09) 1441-1448
- 23 Nunez-Smith M, Ciarleglio MM, Sandoval-Schaefer T. et al. Institutional variation in the promotion of racial/ethnic minority faculty at US medical schools. Am J Public Health 2012; 102 (05) 852-858
- 24 Dayal A, O'Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med 2017; 177 (05) 651-657
- 25 Schumacher DJ, Wu DTY, Meganathan K. et al. A feasibility study to attribute patients to primary interns on inpatient ward teams using electronic health record data. Acad Med 2019; 94 (09) 1376-1383
- 26 Smirnova A, Sebok-Syer SS, Chahine S. et al. Defining and adopting clinical performance measures in graduate medical education: where are we now and where are we going?. Acad Med 2019; 94 (05) 671-677
- 27 Epstein JA, Noronha C, Berkenblit G. Smarter screen time: integrating clinical dashboards into graduate medical education. J Grad Med Educ 2020; 12 (01) 19-24
- 28 Mai MV, Orenstein EW, Manning JD, Luberti AA, Dziorny AC. Attributing patients to pediatric residents using electronic health record features augmented with audit logs. Appl Clin Inform 2020; 11 (03) 442-451
- 29 Sebok-Syer SS, Pack R, Shepherd L. et al. Elucidating system-level interdependence in electronic health record data: what are the ramifications for trainee assessment?. Med Educ 2020; 54 (08) 738-747
- 30 Schumacher DJ, Holmboe ES, van der Vleuten C, Busari JO, Carraccio C. Developing resident-sensitive quality measures: a model from pediatric emergency medicine. Acad Med 2018; 93 (07) 1071-1078
Publication History
Received: 03 October 2021
Accepted: 05 February 2022
Article published online:
16 March 2022
© 2022. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany