Keywords
dashboard - guideline - performance
Background and Significance
Quality improvement (QI) targets unwarranted variations in care.[1] National guidelines for bronchiolitis, the fourth most common reason for pediatric hospitalization in the United States, encourage limiting disproven treatments (e.g., antibiotics).[2] [3] QI initiatives for bronchiolitis have proven effective[4] [5] [6] [7] [8] [9] [10] [11] [12] but rely on timely data for audit and feedback.[13] [14] Traditional methods to obtain data may be challenging in some organizations.[15] [16]
At our organization we had several challenges contributing to variation in care for bronchiolitis, including (1) lack of accessible clinical guidelines, (2) minimal data support for QI initiatives, and (3) lack of organizational process for providing feedback to clinicians. In 2015 we joined a national QI initiative, “Stewardship in Improving Bronchiolitis,” from the American Academy of Pediatrics (AAP)[12] which provided tools and coaching to a local bronchiolitis QI workgroup and required close tracking of data to design targeted interventions. Our initial data request through the organization's data warehouse took 6 months to be completed, prompting us to review alternate options for acquiring data.
Visual analytics dashboards are one mechanism to overcome repeated manual electronic health record (EHR) queries in support of QI, integrating data into a user interface that enables tracking, planning, and comparisons with near real-time data from the EHR and other sources.[17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] Users may modify inclusions/exclusions to focus on their population of interest without repeat data requests.[28] [29] [30]
Objective
We aimed to describe the use of a visual analytics dashboard to support a multifaceted QI initiative for patients with bronchiolitis. We also intended to determine whether the dashboard directly impacted progress toward the aims of the multifaceted QI initiative, which were to achieve 20% reductions in the use of chest X-rays, bronchodilators, antibiotics, steroids, and viral testing in patients with bronchiolitis by April 2018.
Methods
The QI initiative took place at Children's Minnesota (CM), a large, independent, not-for-profit, tertiary children's health care organization with approximately 14,000 inpatient and 96,000 emergency department (ED) visits annually. There were five key interventions in the QI initiative. First, we joined a national bronchiolitis QI collaborative,[12] and second, convened a CM bronchiolitis workgroup (October 2015), including representatives from the ED, hospitalists, nursing, respiratory therapy, information technology (IT), pharmacy, and critical care, to determine local QI interventions. Third, clinicians received education (e.g., evidence behind limiting bronchodilators) at staff meetings and CM Grand Rounds in January and October 2016; nurses and respiratory therapists received education via modules and e-mail newsletters. Fourth, in February 2016 we published a local modification of the AAP 2014 bronchiolitis guideline[2] and a companion order-set to our intranet and EHR (Cerner[32]).
Following local QI interventions there were anecdotal improvements, but data delays emerged as a key barrier to success; data requested in March 2016 were not delivered until August 2016. As data turnaround was crucial for targeted interventions, we sought an alternate method to obtain data. Workgroup members had used operational dashboards[33] (implemented in 2011) and felt a clinical dashboard might improve bronchiolitis data procurement.
As a fifth QI intervention, a pediatric hospitalist (the bronchiolitis workgroup lead) and an IT dashboard developer partnered to modify a vendor's analytic dashboard[33] for use in bronchiolitis. They determined the target patient population based upon previous guidelines/studies,[2] [34] categorized clinicians (ED vs. observation/inpatient [hereafter referred to as inpatient]), categorized tests/treatments (e.g., which medications were considered "antibiotics"), determined display metrics and benchmarks,[34] and verified data accuracy over ∼6 months. The hospitalist monitors the dashboard monthly during bronchiolitis season and works with the IT dashboard developer to resolve data accuracy or display issues.
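To make this categorization work concrete, the sketch below illustrates, in Python, the kinds of mappings that had to be agreed upon before the dashboard could compute metrics consistently. The department names, drug list, and benchmark values are hypothetical placeholders, not the site's or vendor's actual definitions.

```python
# Illustrative category definitions for a bronchiolitis dashboard
# (hypothetical names and values; a sketch, not the actual configuration).

# Care-setting categories: ED vs. observation/inpatient ("inpatient").
SETTING_BY_DEPARTMENT = {
    "Emergency Department": "ED",
    "Observation Unit": "inpatient",
    "Medical/Surgical Unit": "inpatient",
}

# Medication groupings: which orders count toward each metric.
MEDICATION_CATEGORIES = {
    "albuterol": "bronchodilator",
    "levalbuterol": "bronchodilator",
    "racemic epinephrine": "bronchodilator",
    "amoxicillin": "antibiotic",
    "azithromycin": "antibiotic",
    "ceftriaxone": "antibiotic",
}

# Benchmarks for "percent of encounters with at least one order" metrics,
# expressed as proportions (placeholder values, not published targets).
BENCHMARKS = {
    ("ED", "bronchodilator"): 0.40,
    ("inpatient", "bronchodilator"): 0.45,
    ("ED", "antibiotic"): 0.10,
}


def categorize_order(department: str, medication: str):
    """Map a raw order to (setting, metric category); None if not tracked."""
    setting = SETTING_BY_DEPARTMENT.get(department)
    category = MEDICATION_CATEGORIES.get(medication.lower())
    if setting is None or category is None:
        return None
    return setting, category
```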
Organizational leaders were granted dashboard access; feedback from early users was collected informally over 1 month and resulted in minor changes. Issues with the accuracy of individual hospitalist data related to resident order entry led us to limit individual log-ins to ED clinicians and organizational leaders.
The primary objective of this case study was to describe use of a bronchiolitis dashboard in the context of a multifaceted QI initiative. The process measure was the percentage of individual ED clinicians who logged in to the dashboard, obtained from IT records. QI outcome measures were percent use of chest radiographs, bronchodilators, antibiotics, steroids, and viral testing.[2] Balancing measures included length of stay (LOS), charge (hospital facility charges and professional fees presented as ratios per CM policy), and 7-day same-cause ED revisits or hospital readmissions.
Patients 2 months to 2 years old seen in ED/inpatient settings at CM with bronchiolitis (International Classification of Diseases, 9th or 10th revision, codes 466.11, 466.19, or J21.x) in the period October 1, 2014 to April 30, 2018 were included. Data from May to September of each year were excluded, as we suspected there may be appropriately higher use of nonrecommended tests/treatments outside of the typical bronchiolitis season and elected to focus QI interventions on peak season. Patients in the intensive care units or with a secondary diagnosis of asthma, pneumonia, or underlying complex chronic condition (including gestational age < 27 weeks)[35] were excluded.
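As a sketch of how these criteria translate into an executable cohort definition, the following Python/pandas filter applies the stated inclusion and exclusion rules. The column names (age_months, icd_codes, encounter_start, unit, secondary_dx) are hypothetical and do not reflect the data warehouse's actual schema.

```python
# A minimal cohort-definition sketch under assumed column names.
import pandas as pd

BRONCHIOLITIS_CODES = {"466.11", "466.19"}  # ICD-9; ICD-10 captured via the J21 prefix


def in_cohort(row: pd.Series) -> bool:
    codes = set(row["icd_codes"])
    has_bronchiolitis = bool(codes & BRONCHIOLITIS_CODES) or any(
        c.startswith("J21") for c in codes
    )
    in_age_range = 2 <= row["age_months"] <= 24
    in_season = row["encounter_start"].month in (10, 11, 12, 1, 2, 3, 4)  # Oct-Apr only
    in_study_window = (
        pd.Timestamp("2014-10-01") <= row["encounter_start"] <= pd.Timestamp("2018-04-30")
    )
    excluded = (
        row["unit"] == "ICU"
        or bool({"asthma", "pneumonia", "complex_chronic"} & set(row["secondary_dx"]))
    )
    return has_bronchiolitis and in_age_range and in_season and in_study_window and not excluded


def build_cohort(encounters: pd.DataFrame) -> pd.DataFrame:
    """Return only encounters meeting the inclusion/exclusion criteria."""
    return encounters[encounters.apply(in_cohort, axis=1)]
```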
We used statistical process control (SPC) p-charts for outcome measures using QI Macros (KnowWare International Inc., version 2018). October 2014 to April 2015 represented the baseline period for calculation of upper and lower control limits; a process change was indicated for the implementation period of October 2015 to April 2018 to calculate new control limits. We determined a priori that ≥20% reductions in the baseline metric mean would be clinically relevant, based upon our preimplementation QI aim. Measures were separated into ED/inpatient based upon the ordering clinician. Resident/fellow orders are assigned to the attending clinician of record at the time of the order; this was not modifiable on the dashboard. Balancing measures were analyzed with chi-squared tests for dichotomous outcomes and independent t-tests for continuous outcomes using STATA version 13.0.[36] p-Values < 0.05 were considered statistically significant.
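For readers unfamiliar with p-charts, the sketch below shows the standard center line and 3-sigma control limit calculation for monthly proportions. It uses NumPy with illustrative counts rather than QI Macros and is not the exact computation performed for our charts.

```python
# Standard p-chart center line and control limits (illustrative inputs).
import numpy as np


def p_chart_limits(events: np.ndarray, totals: np.ndarray):
    """Return (center_line, lcl, ucl) for monthly proportions.

    events: encounters with the target order (e.g., bronchodilator) per month
    totals: eligible encounters per month
    """
    p_bar = events.sum() / totals.sum()            # weighted mean proportion (CL)
    sigma = np.sqrt(p_bar * (1 - p_bar) / totals)  # per-month standard error
    ucl = np.clip(p_bar + 3 * sigma, 0, 1)
    lcl = np.clip(p_bar - 3 * sigma, 0, 1)
    return p_bar, lcl, ucl


# Baseline months set the initial limits; a process change recalculates them
# from implementation-period data only (counts below are made up).
baseline_cl, baseline_lcl, baseline_ucl = p_chart_limits(
    np.array([120, 135, 150, 140, 110, 95, 80]),
    np.array([180, 200, 220, 210, 170, 150, 130]),
)
```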
Clinicians were not required to review personal data compared with peers, and there were no consequences (e.g., financial) tied to their performance. There were no conflicts of interest. This study was deemed QI and exempt from further review by CM's Institutional Review Board.
Results
We implemented a bronchiolitis dashboard in August 2016 ([Fig. 1]). Default settings restrict the dataset to the target patient population; users can add a chart or change inclusions/exclusions to analyze specific populations or units. Green and red colors indicate performance better or worse than the benchmark,[34] respectively. Target metrics can also be displayed by demographic categories, such as primary payer or race. Thirty-five percent (20/57) of ED clinicians logged in to the dashboard at least once.
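The color logic amounts to a simple benchmark comparison; a minimal sketch follows, assuming utilization metrics where lower values are better. The function name and example values are illustrative, not the vendor's implementation.

```python
# Benchmark-based red/green coding for a utilization metric (a sketch).

def performance_color(observed_rate: float, benchmark_rate: float,
                      lower_is_better: bool = True) -> str:
    """Return 'green' when performance beats the benchmark, else 'red'."""
    better = (observed_rate <= benchmark_rate) if lower_is_better else (observed_rate >= benchmark_rate)
    return "green" if better else "red"


print(performance_color(0.44, 0.50))  # 'green': bronchodilator use below benchmark
print(performance_color(0.18, 0.10))  # 'red': antibiotic use above benchmark
```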
Fig. 1 Sample views of the bronchiolitis visual analytic dashboard.
See [Fig. 2] for sample SPC charts for bronchodilator use with annotation of QI interventions; see online supplemental materials for SPC charts and a summary table of percent change for all metrics ([Supplementary Figs. S1] and [S2], [Table S1], available in the online version). There was a ≥20% difference in all target metrics with the exception of ED and inpatient antibiotics and inpatient viral testing. For example, there was a shift in mean bronchodilator use from 66.7 to 43.8% in the ED and 72.1 to 46.4% in inpatients (33 and 36% reductions, respectively).
Fig. 2 (A, B) Statistical process control p-charts for bronchodilator use in patients with bronchiolitis. Baseline period: October 1, 2014 to April 30, 2015; implementation period: October 1, 2015 to April 30, 2018. CL, control limit (mean); LCL, lower control limit; UCL, upper control limit.
See [Table 1] for balancing measures. Comparing baseline with implementation periods, there were improvements in all ED balancing measures with a higher ED discharge rate (70.7 vs. 72.8%, p = 0.05), lower charges (ratio 1:0.86, p < 0.001), shorter LOS (2.9 vs. 2.6 hours, p = 0.001), and lower 7-day revisit rates (15.4 vs. 11.6%, p < 0.001). Inpatient charges increased (ratio 1:1.14, p = 0.01) but LOS and readmissions remained stable.
Table 1 Balancing measures across emergency department (ED) and observation/inpatient (admitted) care settings before (baseline) and after implementation of a multifaceted quality improvement initiative for patients with bronchiolitis

| Measure | Baseline (October 2014–April 2015) | Implementation (October 2015–April 2018) | p-Value |
| --- | --- | --- | --- |
| Discharged home from ED, n (%) | N = 2,035 (70.7) | N = 3,965 (72.8) | 0.05 |
| Mean ED length of stay, hours (95% CI) | 2.9 (2.7–3.1) | 2.6 (2.5–2.6) | <0.001 |
| Mean ED charges, ratio | 1 | 0.86 | <0.001 |
| 7-Day ED revisit, n (%) | 313 (15.4) | 458 (11.6) | <0.001 |
| Admitted, n (%) | N = 843 (29.3) | N = 1,485 (27.2) | 0.05 |
| Mean inpatient length of stay, days (95% CI) | 2.3 (2.2–2.4) | 2.6 (2.4–2.8) | 0.07 |
| Mean inpatient charges, ratio | 1 | 1.14 | 0.01 |
| 7-Day readmission, n (%) | 8 (0.95) | 10 (0.67) | 0.37 |

Abbreviation: CI, confidence interval.
Two targeted interventions resulted from use of the dashboard. First, in response to high inpatient bronchodilator use noted in April 2016 (68%), we developed an EHR bronchodilator order alert (implemented mid-month, October 2016). Second, we exported individual scorecards from the dashboard and sent them to all ED clinicians in November 2017, with personal 2016 to 2017 data compared with peers. This process was repeated in late March 2018 with data from the 2017 to 2018 season. We were unable to determine which ED clinicians reviewed their scorecards.
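A minimal sketch of the scorecard comparison is shown below, assuming a pandas DataFrame with one row per ED encounter and hypothetical columns (clinician, bronchodilator_given). It computes each clinician's rate alongside the peer mean, analogous in spirit to the data distributed in the scorecards, though not the dashboard's export mechanism itself.

```python
# Per-clinician rate versus peer group for one season (a sketch).
import pandas as pd


def scorecards(encounters: pd.DataFrame, metric: str = "bronchodilator_given") -> pd.DataFrame:
    """encounters: one row per ED encounter with columns ['clinician', metric],
    where the metric column is 0/1 for whether the order was placed."""
    per_clinician = (
        encounters.groupby("clinician")[metric]
        .agg(rate="mean", n="count")
        .reset_index()
    )
    per_clinician["peer_rate"] = encounters[metric].mean()
    per_clinician["vs_peers"] = per_clinician["rate"] - per_clinician["peer_rate"]
    return per_clinician.sort_values("vs_peers")
```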
Discussion
Health care dashboards have been used to improve workflow, reduce preventable harms, and track metrics across multiple sites.[17] [18] [20] [21] [22] [23] [24] [26] [27] [31] [37] We used a bronchiolitis clinical dashboard to support a multifaceted QI initiative for bronchiolitis. While improvement shifts were seen in most target metrics, it is unlikely, based upon timing and low individual use, that these improvements were a direct result of the dashboard. The dashboard was viewed by leaders as instrumental in tracking adherence to guideline metrics and was used to inform additional targeted interventions.
The low individual log-in rate (35%) found in our study may reflect low team engagement, a key to QI success,[38] and may have tempered improvements in outcome measures. We suspect that because the dashboard required a separate log-in from the EHR, individual clinicians may have perceived low ease of use[39] [40] or were uninterested in their data.
Our data, as well as other QI studies conducted without a dashboard, suggest that the other bronchiolitis QI interventions, such as the guideline and order-set, caused the observed improvements.[4] [5] [6] [7] [8] [9] [10] [11] [12] However, QI relies on timely data to perform Plan-Do-Study-Act cycles.[41] Our difficulty obtaining data (a 6-month turnaround) drove our decision to develop a dashboard, in which data are loaded within approximately 2 to 4 days of patient discharge, enabling near real-time feedback and targeted interventions. The dashboard allowed us to understand the impact of the other QI interventions.
While we saw significant shifts in all metrics aside from antibiotic use and inpatient viral testing, there was relatively greater improvement in both outcome and balancing measures in the ED than in the inpatient setting. Interventions targeted to ED clinicians alone (individual dashboard log-ins, individual scorecards) could have contributed to this difference; however, we suspect they did not, as the timing of ED improvements relative to the inpatient setting suggests that other factors may have been more important, such as shifts in patient severity evidenced by a higher rate of ED discharge during the implementation period. There may also have been differences in guideline uptake, a lack of individual hospitalist feedback, or difficulty in changing culture.[6] [42] [43] [44] [45] Future studies should examine reasons for bronchiolitis guideline nonadherence.
There were no clinically relevant changes in antibiotic use in either setting (14% decrease in the ED, 14% increase in inpatients). Using the dashboard to add an option to exclude patients with otitis media, we found the antibiotic rate fell under 5%, indicating alternate appropriate indications for antibiotic use that were not captured by the original data definitions.
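As an illustration of this filter-based re-analysis, the sketch below computes the antibiotic rate with and without encounters carrying a concurrent otitis media diagnosis; the column names (antibiotic_given, has_otitis_media) are hypothetical.

```python
# Antibiotic rate before and after excluding otitis media encounters (a sketch).
import pandas as pd


def antibiotic_rates(encounters: pd.DataFrame) -> tuple:
    """Return (overall rate, rate excluding otitis media) for 0/1 columns."""
    overall = encounters["antibiotic_given"].mean()
    without_om = encounters.loc[~encounters["has_otitis_media"], "antibiotic_given"].mean()
    return overall, without_om
```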
Balancing measure improvements included shorter ED LOS and lower ED charges in the implementation period. This is consistent with a previous study that found higher adherence to a bronchiolitis guideline was associated with lower LOS and costs.[46] Inpatient LOS was unchanged and inpatient charges were slightly higher in our study. Our QI initiative did not target hospital LOS, which largely correlates with charges/costs in the inpatient setting, so these findings were not surprising.
Future directions for this QI initiative include increasing clinician engagement and feedback and using the dashboard to design and track new group goals (e.g., reducing inpatient viral testing). Future initiatives may apply heuristic evaluation to improve dashboard visualization[47] and explore barriers to use.
Challenges included attributing orders to the appropriate clinicians in a teaching environment and a moderate learning curve. We found it critical to partner a clinician with the IT dashboard developer to determine default settings and understand workflow limitations. Due to low individual use, it is difficult to determine the impact of individual performance review or the direct impact of the dashboard on outcomes. A final limitation of this study is that the cost of the QI initiative, including the dashboard (which is subject to vendor negotiation), was not directly measured.
Conclusion
We described use of a visual analytics dashboard in a multifaceted bronchiolitis QI initiative. Subsequent to multiple QI interventions we reduced use of most nonrecommended tests and treatments in patients with bronchiolitis and improved ED balancing measures. However, timing of improvements and low individual clinician use suggest that the dashboard did not directly impact outcomes. The dashboard was helpful in overcoming organizational barriers to QI data procurement and in tracking the impact of other QI interventions and may be considered as a tool for other organizations with similar challenges.
Clinical Relevance Statement
This case report has relevance for clinicians, IT specialists, and QI specialists as it describes the use of a visual analytics dashboard to inform QI initiatives and improve clinical care.
Multiple Choice Questions
1. Which of the following may limit the ability to accurately attribute individual orders to individual attending clinicians on a visual analytics dashboard?

   a. Clinician license type (e.g., MD vs. NP).

   b. Attribution of resident orders.

   c. Time of clinician order.

   d. Type of order (e.g., medication vs. laboratory test).
Correct Answer: The correct answer is option b, attribution of resident orders. One challenge in our QI initiative was how to handle resident orders. The organization's EHR assigns resident orders to the attending clinician of record at the time the order was placed. We found this to be accurate in the ED setting but not in the inpatient (e.g., hospitalist) setting, where the attending provider may change multiple times in a day.
2. Who might be the best partner for an IT dashboard developer when creating a visual analytics dashboard to support a clinical QI initiative?

   a. A resident on a 1-month QI elective.

   b. Another software programmer within the IT department.

   c. A front-line clinical leader such as an ED clinician or hospitalist with QI expertise.

   d. The chief financial officer.
Correct Answer: The correct answer is c, a front-line clinical leader such as an ED clinician or hospitalist with QI expertise. When bringing IT to the forefront of clinical work it is important to partner IT experts with front-line users and content experts. A lesson learned in our QI initiative was that it was crucial to have the IT developer partner with the QI clinical leader (a hospitalist) to review the dashboard, modify visual displays, and review data for validation.