CC BY 4.0 · ACI Open 2020; 04(01): e35-e43
DOI: 10.1055/s-0040-1702213
Original Article
Georg Thieme Verlag KG Stuttgart · New York

Visualization of Electronic Health Record Data for Decision-Making in Diabetes and Congestive Heart Failure

Shira H. Fischer
1   RAND Corporation, Boston, Massachusetts, United States
2   Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States
3   Division of General Internal Medicine, Brigham & Women's Hospital, Boston, Massachusetts, United States
,
Charles Safran
4   Division of Clinical Informatics, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States
,
Krzysztof Z. Gajos
5   Harvard Paulson School of Engineering and Applied Sciences, Cambridge, Massachusetts, United States
,
Adam Wright
6   Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, United States
Funding: This work was supported by NIH training grant T15LM007092.

Address for correspondence

Shira H. Fischer, MD, PhD
RAND Corporation
20 Park Plaza, Suite 920, Boston, MA 02116
United States   

Publication History

Received: 06 June 2019
Accepted: 18 December 2019
Publication Date: 25 March 2020 (online)

Abstract

Objective The aim of this study was to assess the impact of graphical representation of health record data on physician decision-making, in order to inform the design of health information technology.

Materials and Methods We conducted a within-participants crossover study using a simulated electronic health record (EHR), presenting cases with and without visualized data designed to highlight important clinical trends or relationships, and then assessing the impact on decision-making about next steps for patients with chronic diseases. We then asked whether participants had noticed the trends, assessed usability and satisfaction with validated usability questions, and posed open-ended questions. Time to answer each question was also recorded.

Results Twenty-one primary care providers participated: five in a pilot round and sixteen in the full study. Questions about clinical assessment or next actions were answered correctly 55% of the time. Regarding objective trends in the data, participants reported noticing the trends 85% of the time. Differences in noticing trends and in the rated difficulty of questions were not statistically significant. Satisfaction with the tool was high, and participants agreed strongly that it helped them make better decisions without adding to the time required.

Discussion The simulation allowed us to test the impact of a visualization on clinician practice in a realistic setting. Designers of EHRs should consider the ways information presentation can affect decision-making.

Conclusion Testing visualization tools can be done in a clinically realistic context. Providers desire visualizations and believe that they help them make better and faster decisions.



Background and Significance

The purpose of the medical record is more than just documenting what has happened. S.J. Reiser wrote in 1991 that the purpose of the clinical record is, “to recall observations, to inform others, to instruct students, to gain knowledge, to monitor performance, and to justify interventions.”[1]

We are at a critical point in the transition of medical records from written to electronic, for reasons both technological and political; the American Recovery and Reinvestment Act and meaningful use[2] legislation, among other factors, have contributed to a great increase in the use of electronic records by physicians and hospitals in the United States.[3] [4] [5] This increase in use, however, has not been accompanied by a commensurate effort to make these electronic health records (EHRs) helpful to clinicians in the care of their patients.[6] Until recently, the EHR has consisted largely of digitized text with limited visualization, often presenting laboratory data only in table form. Some EHRs have visualization capabilities of differing design, but the impact of these visualizations on clinical decision-making has not been tested.

Prior work shows that the presentation of data affects decision-making in medicine. One study conducted before EHRs were prevalent showed that data display affected physician investigators' decisions regarding hypothetical clinical trials,[7] although those results drew some controversy[8]; another suggested that physicians overidentified outliers in tables relative to charts.[9] Research has examined the best type of graph for different situations, but not in a medical context,[10] [11] [12] [13] and shows that usability is generally measured first for new visualizations, with clinical impact examined later.[14] Another temporal-view visualization, identified in the literature after our design was complete, addressed some of the same limitations of current views, but that mock-up was not tested on clinicians.[15] Work on visualization of medical information has measured the impact of visualizing laboratory data on providers' identification of trends, time to decision, and preference, and found that trends are often more easily seen in graph form, but these studies were conducted without clinical context, patient history, or an EHR.[16] [17] Recent work in medical software design has also shown the importance of involving providers and considering workflow as part of software design.[18] However, most research thus far has only asked providers to interpret trends and outliers from small sets of medical data; it has not taken the next step of asking about the resulting clinical decisions, nor has it simulated a medical context with a full medical record.

Studies of information visualization have historically had four areas of focus: design, usability, controlled experiments, and case studies in a realistic context. Case studies are the least common but very important for “demonstrating feasibility and in-context usefulness.”[19]

We aimed to study the potential for data visualization to bring trends and relationships to the attention of clinicians in a realistic context.



Materials and Methods

This experimental study examined the effect of graphical representation of data, otherwise presented as text and numbers, on clinical decision-making. The study was conducted at the HealthCare Associates practice at Beth Israel Deaconess Medical Center, a 621-bed tertiary care center in Boston, MA, and a principal teaching hospital of Harvard Medical School. This research was reviewed and deemed exempt by the hospital's institutional review board.

To test the impact of graphical representation on decision-making, we had physicians make decisions as they usually do, with a case history and supplemental information from laboratory results, history, and physiologic data in the EHR. We considered the three components of graph comprehension described by many authors: extracting data, finding relationships, and moving beyond the data to decisions.[20] Our design aimed to measure the graph's ability to support all three. Using a within-subjects design, each participant saw two diabetes cases and then two congestive heart failure (CHF) cases. For each disease, one case had a visualization and one did not. The order of the visualizations was randomized within that framework, such that everyone saw two cases with visualization and two without, one of each per disease.

Thus, the experiment required the following steps: (1) building a visualization to represent, but in no way add to, the data already presented; (2) creating a simulated EHR as similar as possible to the one the providers use in their daily work, so as to mimic the real setting, but with a narrowed focus highlighting the factors that contribute to the clinical decision; (3) designing realistic cases in which decision-making hinges on perceiving trends or relationships in clinical data; and (4) testing the impact of the visualization, both by measuring decision-making objectively and by measuring usability and satisfaction. Each of these steps is discussed below.

Visualization Development

After a review of the key literature on the design and presentation of scientific data, as well as the influential work of designers such as Tufte[21] [22] [23] and Few,[24] [25] [26] we developed a prototype designed to emphasize the relationships among medications, laboratory results, and weight, the key data points in our two chronic diseases. While an increasing number of existing EHRs allow graphing of single or even multiple measures such as laboratory results, the novelty of our approach was designing views that combined trends in laboratory results with vital signs, hospitalizations, and medication dates and doses.

As illustrated in [Fig. 1A–C], numerical values were available via hovering, in line with the recommendation that “visualizations should emphasize trends and relationships among variables while also providing access to individual numerical values.”[16] Medications were listed below the laboratory-value and weight graphs on the same time scale, so that medication starts and stops could be easily correlated with physiologic responses. Dose changes were represented with both text and color. Hovering over medications gave exact date information. All the original data on which the graphs were based remained available to view. Iterative design and feedback from clinicians produced the final model tested in this study.

Fig. 1 (A) Face sheet with visualization. Face sheet for the online medical record of a patient with heart failure. Pink text indicates a link. The green button inside the red circle was added to the EHR; otherwise, this is similar to the current EHR in use. Patient name and data are fictitious. (B) Data tables: standard presentation of historical data. (C) Visualization for case 4: new display of the same data. This visualization appears when the green button is clicked, as well as on pages with historical data for these elements. It displays multiple indicators for heart failure patients on the same time scale: weight with medications and doses, as well as hospitalizations. Beginning and ending numbers are printed; individual data points are indicated by small circles, and values can be seen by hovering over the circles. Exact dates for medications are also visible when hovering. EHR, electronic health record.

The visualization was created using Google Chart Application Programming Interface (API)[27] and integrated into the HTML-based EHR simulation.
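The study's charts were built with Google Charts; as a rough illustration only, the following minimal sketch reproduces the layout idea (a measure trended over time above aligned medication bars, with a hospitalization marked) in Python with matplotlib. All patient data, medication names, and dates here are fictitious assumptions, not the study's cases.

```python
# Minimal sketch of the combined-timeline layout (not the study's Google
# Charts code): a weight trend, medication courses, and a hospitalization
# marker share one time axis. All data below are fictitious.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import date

weight_dates = [date(2019, m, 1) for m in range(1, 13)]
weight_kg = [82, 82, 83, 85, 87, 88, 90, 91, 91, 92, 94, 95]
meds = [  # (name/dose label, start, stop): hypothetical courses
    ("furosemide 40 mg", date(2019, 1, 1), date(2019, 6, 1)),
    ("pioglitazone 30 mg", date(2019, 3, 15), date(2019, 12, 1)),
]
hospitalizations = [date(2019, 8, 10)]

fig, (ax_wt, ax_med) = plt.subplots(
    2, 1, sharex=True, figsize=(8, 4),
    gridspec_kw={"height_ratios": [3, 1]})

# Top panel: weight trend with a small circle per data point
ax_wt.plot(weight_dates, weight_kg, marker="o", markersize=4)
ax_wt.set_ylabel("Weight (kg)")
for h in hospitalizations:            # dashed line marks a hospitalization
    ax_wt.axvline(h, linestyle="--", color="red")

# Bottom panel: one horizontal bar per medication course, same time scale
for i, (label, start, stop) in enumerate(meds):
    ax_med.barh(i, (stop - start).days, left=start, height=0.5)
    ax_med.text(start, i, label, va="center", fontsize=8)
ax_med.set_yticks([])
ax_med.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))

plt.tight_layout()
plt.show()
```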



Simulation

We created a simulated EHR that looked identical to the one these providers use daily, preserving formatting and interface. However, the real EHR has numerous tabs and voluminous data. To focus the study, navigation was constrained so that participants could explore only the areas of interest: medications, relevant laboratory results, weight, and notes, depending on the case. The control cases displayed only our simulated EHR, with these relevant pages available as in usual clinical practice. The intervention cases displayed a manipulated and enhanced version of the EHR, identical in all ways except for the addition of a visualization.



Case Design

We developed cases in which decision-making relied on perceiving trends in laboratory data that might not otherwise be noticed, or relationships between medications and those changes that, again, might be missed in the standard EHR, where medications appear on a different page from laboratory results and where often only the most recent result is visible. Such cases are common in medical practice.

We selected diabetes and congestive heart failure as the two main diagnoses for our cases. These are two very common chronic diseases in the primary care setting, and patients with these diagnoses are often complex, with multiple laboratory measurements and many medications.

In close consultation with a diabetologist, a primary care doctor, and a hospitalist, we developed four cases of approximately equal complexity, regarding diabetes or heart failure [Supplementary Appendix A] (available in the online version).



Survey

To test the impact of the visualization, we constructed questions with right and wrong answers based on objective criteria. The goal was questions that relied on seeing the trends and relationships that the visualization would theoretically make easier to see.

After the right/wrong questions were completed, at the end of each case, participants were also asked to indicate whether or not they had noticed the designed trend or relationship. These questions, unlike the first set, were yes/no rather than right/wrong; participants could honestly answer no. In cases where the objective questions were answered incorrectly, these questions could indicate whether the trend was perceived but some other barrier impeded the decision, or whether the trend was missed altogether, thus measuring a step along the path from data processing to decision-making. For that reason, we thought it important to support our scoring of an answer's correctness with evidence that the reasoning behind the decision was correct as well.

In summary, we designed three sets of questions: multiple-choice questions with objectively right answers, intended to assess the effectiveness of the graphical representation in generating a correct interpretation; a second set intended to elicit perceptions about trends; and a final set intended to assess acceptance and satisfaction. Lastly, we asked a series of standardized questions about the usability of the tool.

The cases and the questions appear in [Supplementary Appendix A] (available in the online version).



Experimental Procedure

Participants were recruited from among primary care providers at a busy hospital-based primary care practice. We aimed for diversity of participants by sex and years of experience in our outreach, but no volunteers were turned away.

The EHR presented to the providers was almost identical to the one they use in their daily work (with some links inactive so that they would focus on the values of interest). Only the intervention cases had a large new green button on the problem-based face sheet labeled “show my data,” which brought up a visualization; this graph also appeared on the other relevant pages (such as weight, creatinine, or medications) when selected. A browser-based timing tool on the testing laptop measured the time taken to enter both the answer and the written comments for each question.

Questions on the associated survey covered clinical decision-making, whether respondents noticed trends, and demographic information about the participants. Every question was followed by a free-text area, and participants were asked to explain the reason for their answer, particularly where they felt there was no single right answer or did not want to select any or only one of the choices (not infrequent). At the end of the survey, we asked for feedback about the visualization, whether it was helpful, and how it could be improved.

Results were parsed in Python,[28] and the resulting answer and time data were imported into R[29] and Stata[30] for analysis. The data were stored and accessed on password-protected computers.
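The published study does not include its parsing code or log format. The sketch below shows, under an assumed column layout and a hypothetical file name, how per-question answers and timings of the kind described above could be read and summarized in Python before statistical analysis.

```python
# Hypothetical parsing sketch: the study's actual log format is not
# published, so the CSV columns and file name here are assumptions.
import csv
from collections import defaultdict

per_subject = defaultdict(list)
with open("responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        per_subject[row["subject_id"]].append({
            "case": int(row["case"]),
            "question": row["question_id"],
            "answer": row["answer"],
            "viz": row["viz_shown"] == "1",
            "seconds": float(row["elapsed_seconds"]),
        })

# Example aggregate: mean answer time, split by visualization condition
for cond, label in ((True, "with viz"), (False, "without viz")):
    times = [r["seconds"] for rows in per_subject.values()
             for r in rows if r["viz"] == cond]
    print(f"{label}: mean {sum(times) / len(times):.1f} s over {len(times)} answers")
```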



Results

Participants

The study was conducted on 16 individuals after a pilot round with five participants, whose feedback was included only in the qualitative and usability analyses. All participants were primary care providers (15 doctors, including attending physicians and internal medicine residents, and one nurse practitioner). English is the language of patient interaction at the clinic.

Subject characteristics are described in [Table 1].

Table 1

Study subjects

Subjects (n = 16, not including pilot; not all questions answered by every individual)

| Characteristic | n | Percentage |
|---|---|---|
| Sex: Female | 7 | 44% |
| Sex: Male | 9 | 56% |
| First language: Chinese | 1 | 7% |
| First language: English | 14 | 93% |
| Specialty: Medicine (15 MD, 1 nurse practitioner) | 16 | 100% |
| Years in practice (since MD degree): 0–5 years | 7 | 47% |
| Years in practice: 6–10 years | 1 | 7% |
| Years in practice: 11–15 years | 1 | 7% |
| Years in practice: 16–20 years | 2 | 13% |
| Years in practice: 21–25 years | 3 | 20% |
| Years in practice: 31–35 years | 1 | 7% |
| Age: mean (SD) | 39.4 years (11.8) | n = 15 |

Abbreviations: MD, doctor of medicine; SD, standard deviation.


Three outcomes were examined: the impact of the visualization on correct decision-making as determined by the experts consulted in creating the cases and questions; the time taken to arrive at answers; and satisfaction with the tool as measured by the final part of the questionnaire. Each will be addressed below.

There was minimal missing data. When an answer was missing, scores were calculated using the average of the answered questions.
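A minimal sketch of this scoring rule (our notation, not the study's code): a participant's score is the mean over only the questions they answered.

```python
# Score = mean over answered questions; None marks a missing answer and is
# skipped rather than counted as incorrect.
def score(answers):
    """answers: list of 1 (correct), 0 (incorrect), or None (unanswered)."""
    answered = [a for a in answers if a is not None]
    return sum(answered) / len(answered) if answered else None

print(score([1, 0, None, 1]))  # 0.666...: the missing item does not count against the score
```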

Frequency tables for answers, by case and by presence of the visualization, are presented in [Table 2].

Table 2

Accuracy

Correct answers:

| | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|
| % Correct | 49% | 55% | 52% | 72% |
| Viz absent | 52% | 61% | 25% | 76% |
| Viz present | 43% | 53% | 79% | 67% |
| p value | 0.32 | 0.52 | 0.03 | 0.54 |

Noticed trends:

| | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|
| % Noticed | 81% | 75% | 93% | 97% |
| Viz absent | 75% | 78% | 94% | 93% |
| Viz present | 92% | 73% | 93% | 100% |
| p value | 0.52 | 0.68 | 1.00 | 0.35 |

Note: Total correct calculated over all questions; when divided by visualization condition, percentages calculated over individuals.

Bold indicates the higher rate of correct answers; differences between groups were not statistically significant except for % correct in case 3 (p = 0.03, uncorrected Mann–Whitney–Wilcoxon test).


Overall accuracy varied by question and by individual. Accuracy increased over the course of the study, with more participants answering the questions for case 4 correctly than the questions for case 1; this may reflect growing familiarity with the format of the tool. We anticipated this, and for this reason we randomized the placement of visualizations within the cases. Because scores were not normally distributed, we used the Mann–Whitney–Wilcoxon test, a nonparametric test that allows paired comparisons within individuals and comparisons between items.
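For illustration, a minimal sketch of this test with SciPy, on hypothetical per-subject accuracy scores (the study used R and Stata; the numbers below are invented):

```python
# Mann-Whitney-Wilcoxon rank-sum test on invented accuracy scores.
from scipy.stats import mannwhitneyu, wilcoxon

viz_scores = [0.8, 0.6, 1.0, 0.4, 0.8]     # hypothetical, visualization shown
noviz_scores = [0.4, 0.2, 0.6, 0.4, 0.2]   # hypothetical, visualization absent
u, p = mannwhitneyu(viz_scores, noviz_scores, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")

# For paired within-subject contrasts, the signed-rank variant applies:
# stat, p = wilcoxon(paired_with_viz, paired_without_viz)
```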

In one case, case 3, respondents who saw the visualization answered the questions correctly a significantly higher percentage of the time; in the other three cases, the differences were not significant ([Table 2]). By question, the percentage of respondents who noticed the trend was the same or higher with the visualization than without for every question but two, with overall high rates of noticing trends in both conditions, but these between-group differences were not statistically significant.

The questionnaire also solicited a difficulty rating for each question. Ratings were relatively consistent across the four cases ([Table 3]). In each case, the average difficulty was rated higher by participants who did not see a visualization than by those who did, meaning the questions were perceived as harder without the visualization, but this difference did not reach statistical significance. Neither sex nor years of experience was significantly associated with either outcome (correct answer or noticing trends). Using generalized estimating equations to account for repeated measures within individuals, the model did not show significant differences between responses with and without visualization, either for the multiple-choice questions or for the questions about noticing trends.
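As a sketch of such a model (the study's exact specification is not published; the column names and data below are assumptions), a binomial GEE with an exchangeable working correlation within subjects can be fit in Python with statsmodels:

```python
# Binomial GEE accounting for repeated measures within each subject.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({  # hypothetical long-format toy data: one row per answer
    "subject": [1] * 4 + [2] * 4 + [3] * 4 + [4] * 4,
    "viz":     [1, 1, 0, 0] * 4,
    "correct": [1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0],
})

model = smf.gee("correct ~ viz", groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```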

Table 3

Question difficulty, correctness, and trend noticing by case and by visualization

| Case | % Correct | % Noticed trends | Mean difficulty (1–7) | Mean difficulty, viz absent | Mean difficulty, viz present |
|---|---|---|---|---|---|
| Case 1 | 49% | 81% | 3.38 | 3.60 | 3.00 |
| Case 2 | 55% | 75% | 3.50 | 3.67 | 3.40 |
| Case 3 | 52% | 93% | 4.07 | 4.63 | 3.43 |
| Case 4 | 72% | 97% | 3.47 | 3.57 | 3.38 |

Note: Bold indicates easier (closer to 1); differences were not statistically significant.




Time

The average survey time was 22 minutes (range: 14–42 minutes; SD: 7.5 minutes). The total time for each case was lower in three of four cases with visualizations than without, suggesting that the visualizations may have helped people make decisions faster, but these differences were not statistically significant and therefore are not reported in detail here.



Satisfaction and Acceptance

Responses from the five pilot participants were included in the subjective feedback analysis, since the visualizations did not change between the pilot and the main experiment and the pilot feedback was also helpful; however, because of slight corrections and changes after the pilot round, these initial data could not be included in the full quantitative analyses.

After participants completed the four cases, we administered a 19-question usability survey based on the standardized Computer System Usability Questionnaire,[31] with each study participant providing answers ranging from “strongly disagree” to “strongly agree” (a 7-point Likert scale). The questions covered satisfaction, ease of use, finding information, helpfulness of the system, interface, and completeness. The questions and responses are presented in [Fig. 2] and [Table 4].

Fig. 2 Usability and satisfaction. Questionnaire responses. Questions 9 to 11 were not relevant to our tool.
Table 4

Usability and satisfaction

| Question (based on standard CSUQ tool) | n (rest skipped/do not apply) | Mean response (1 = strongly disagree, 7 = strongly agree) | SD |
|---|---|---|---|
| Qual 1. Overall, I am satisfied with how easy it is to use this system | 20 | 6.15 | 0.81 |
| Qual 2. It was simple to use this system | 20 | 6.15 | 1.04 |
| Qual 3. I can effectively complete my work using this system | 20 | 6.00 | 1.30 |
| Qual 4. I am able to complete my work quickly using this system | 20 | 6.15 | 1.18 |
| Qual 5. I am able to efficiently complete my work using this system | 20 | 6.15 | 0.99 |
| Qual 6. I feel comfortable using this system | 20 | 6.05 | 1.10 |
| Qual 7. It was easy to learn to use this system | 20 | 6.35 | 0.93 |
| Qual 8. I believe I became productive quickly using this system | 20 | 6.30 | 0.80 |
| Qual 9. The system gives error messages that clearly tell me how to fix problems | 4 | 4.75 | 2.63 |
| Qual 10. Whenever I make a mistake using the system, I recover easily and quickly | 6 | 5.33 | 2.25 |
| Qual 11. The information (such as online help, on-screen messages, and other documentation) provided with this system is clear | 8 | 5.13 | 1.96 |
| Qual 12. It is easy to find the information I needed | 19 | 6.00 | 1.29 |
| Qual 13. The information provided for the system is easy to understand | 19 | 6.32 | 0.82 |
| Qual 14. The information is effective in helping me complete the tasks and scenarios | 20 | 6.35 | 0.99 |
| Qual 15. The organization of information on the system screens is clear | 20 | 6.35 | 0.93 |
| Qual 16. The interface of this system is pleasant | 20 | 6.20 | 0.95 |
| Qual 17. I like using the interface of this system | 20 | 6.30 | 0.92 |
| Qual 18. This system has all the functions and capabilities I expect it to have | 19 | 5.00 | 1.29 |
| Qual 19. Overall, I am satisfied with this system | 20 | 6.15 | 0.93 |

Abbreviation: CSUQ, computer system usability questionnaire.


The responses to the usability questions were positive, with a mean of 6.0 across all statements. We also asked providers two open-ended questions: “Please comment on the visualization and whether it helped you in any way” and “Do you have suggestions as to how to improve this visualization?”
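Because items 9 to 11 applied to few respondents, per-item means were computed over the responses actually given, so n varies by row in [Table 4]. A minimal sketch of that calculation, with invented responses:

```python
# Per-item means on the 7-point scale; None marks "skipped/does not apply"
# and is excluded, so n varies by item as in Table 4.
responses = {  # hypothetical responses for two items
    "Qual 1 (ease of use)": [7, 6, 6, 5, 7],
    "Qual 9 (error messages)": [4, None, None, 6],
}
for item, vals in responses.items():
    answered = [v for v in vals if v is not None]
    print(f"{item}: n = {len(answered)}, mean = {sum(answered) / len(answered):.2f}")
```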

Providers overwhelmingly agreed that the visualization helped them make better clinical decisions, though we did not define what that meant (mean response = 5.75). They disagreed that the visualization made their decision-making take longer (mean response = 2.45); in other words, in the participants' view, the visualization helped them make decisions faster and make better decisions ([Figs. 3] and [4]).

Fig. 3 Perceived visualization impact on quality of decision-making.
Fig. 4 Perceived visualization impact on time to decision.

Providers said that they liked seeing the information in graph form and that it was helpful. Comments ranged from the straightforward “love it” and “graphs are great” to the more nuanced “implementation of visual representation of data is essential to quick and efficient decision-making” and “it is a balance between too much or too little info; having the labs on it as well would be good, but you cannot put them all there.” The strongest praise was that which suggested providers wanted to see the tool integrated into their system: “nice system would like to see implement(ed)…”

Participants noted that some cases are more amenable to visualization than others. One person wrote, “I think this visualization is useful for any medication regimen with variable dosing over time or for a class of medications where things can change over time (ex: OPAT abx [outpatient antibiotic therapy] course).” Similarly, another wrote, “steroid tapers are the worst thing to try to visualize ever, so this is very helpful. Otherwise antibiotic courses have a similar problem that could be addressed with similar visualizations.”

Constructive criticism included the observation that most patients in real life take more medications than the patients in our cases did; another participant said the limited data available were not “clinically realistic.”

When asked to comment on the visualization and whether it helped in any way, two providers volunteered that it could have prevented an error.

The 19 responses given are provided in full in [Supplementary Appendix B] (available in the online version).



Discussion

This study aimed to examine the impact of visualization on decision-making, using both qualitative and quantitative measurements. Though we did not show statistically significant differences between groups in this small sample, qualitative results strongly indicated that providers liked the visualizations, found them helpful, and thought they saved time. Other findings included the challenge of designing cases for this kind of study; additional factors that affect provider decision-making, such as which tests had already been done or whether the patient was the provider's own versus another PCP's patient they were covering; positive feedback from physicians on utility, usability, and satisfaction; and ways to improve the design for future studies.

The novelty of this experiment was studying the intervention inside a simulated clinical workflow.

Visualization as Clinical Decision Support

Clinical decision support has a broad definition and can include anything from using information about the current clinical context to retrieve online documents, to providing patient-specific, situation-specific alerts, reminders, or order sets, to organizing information in ways that facilitate decision-making and action.[32] Our visualization falls in the third category: organizing already available information in a way that better facilitates understanding of the relationships that matter for making decisions.



Specific Cases

The biggest impact was seen in case 3, where a new medication was started just as weight gain began. This case was perhaps the most successful because the visualization brought together information that is usually hard to see at once in the live EHR: medications with their start and end dates are in one tab, discontinued medications are in another, and weights are in a third, each with dates and no graph. The relationship between the weight gain and the medication start was much easier to see on the graph, with no additional tabs to select. The other cases presented similar situations, but in some, all respondents answered correctly, making the impact of the visualization difficult to assess, while in others the free text suggested that the visualization helped respondents see a relationship but that other clinical factors were at play. For example, in the pilot, one provider said she would not start a new medication on a patient she was cross-covering, even though she thought it was the right thing to do. We changed the case for the full experiment to refer to a transferred patient rather than another provider's patient, but this highlights the challenge of designing cases with an absolute right answer clinically, biologically, and practically.



Positive and Constructive Feedback

Provider feedback was overwhelmingly positive. In the context of information overload and provider burnout, ways to improve provider satisfaction and ease their burden should be considered. The EHR is often considered a source of these problems; improving usability and design and reducing the work needed to find information can directly address some of these pressing issues.

Providers gave many suggestions and ideas for future implementations while demonstrating enthusiasm for the concept. A number suggested it would be helpful to have control over which variables are displayed: “it would be even more powerful if the user could select which pieces of info were incorporated into the visualizations.” Many noted that real patients are on many more medications than the patients in our cases; if all medications were listed, the relationship between medications and results would be obscured. At the same time, selecting only variables already known to have a relationship may overestimate the impact of this visualization compared with one generated dynamically in all cases. A future version of our tool could dynamically select what to display based on algorithms that identify the most important information.
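One plausible heuristic for such dynamic selection, offered purely as a sketch (nothing like it was implemented or tested in this study): score each medication by how differently the displayed measure moves while the medication is active versus inactive, and plot only the top-scoring medications.

```python
# Heuristic sketch: rank a medication by the gap between the measure's mean
# step-to-step change while the medication is active vs. inactive.
from statistics import mean

def association_score(series, is_active):
    """series: chronological list of (time, value); is_active: time -> bool."""
    on, off = [], []
    for (t0, v0), (t1, v1) in zip(series, series[1:]):
        (on if is_active(t0) else off).append(v1 - v0)
    on_change = mean(on) if on else 0.0
    off_change = mean(off) if off else 0.0
    return abs(on_change - off_change)  # larger gap = stronger apparent link

# Hypothetical usage: monthly weights, medication active during months 3-8
weights = [(1, 80), (2, 80), (3, 81), (4, 83), (5, 85), (6, 87),
           (7, 88), (8, 90), (9, 90), (10, 90), (11, 91), (12, 91)]
print(association_score(weights, lambda m: 3 <= m <= 8))  # 1.1 here
# A display layer could show only the top-k medications by this score.
```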

Other helpful suggestions, such as the use of color for normal/abnormal values, the ability to zoom, and the inclusion of other important events in the patient history (such as major diagnoses or a change in the status of a family member), will be considered for inclusion in future work.



Limitations

By studying participants making decisions in a more realistic clinical environment, the testing environment was more realistic but also much noisier, with interruptions due to the clinical setting. And even though we attempted to mimic the real EHR, the simplified version meant there was less information overload, which may have made the questions easier to answer. Additionally, the small sample size was a limitation; we were not powered to detect smaller effects.

Another challenge was case and question design. The goal was to create cases with objectively correct next steps; however, any next step that was too obvious would be answered correctly by all competent primary care providers. Despite pilot testing and consultation with experts, there seemed to be a ceiling effect that made the impact of the visualization hard to see, together with a lower than expected rate of correct responses on the right/wrong questions.

Participants often responded that none of the answers was correct or that none matched what they would have done in a similar clinical situation. This variation is known in the literature[33] [34] [35] and can be attributed to many factors, including experience, place of training, current context, and knowledge.[36] [37] The percentage correct is therefore an imperfect measure. More pilot testing of questions before the next round of this work, as well as think-aloud protocols to examine providers' cognitive processes, could help address these limitations.[38]

Furthermore, choosing the right answer did not correlate with reporting having seen a trend. This supports the importance of asking about trend detection separately from asking for the right answer, but it suggests that the specific question design, or the multiple-choice format itself, did not sufficiently distinguish between those who perceived a trend or relationship and those who did not.



Conclusion

Designers of electronic health records should consider the ways information presentation could affect decision-making. As trends and relationships can be perceived more easily in graphical format, some laboratory values and related data may benefit from visual representation.

We were able to simulate the EHR for a practice and involve more than 20 providers in a study testing cases with and without visualization. Different visualizations could be tested using this method to identify the one that leads to the best clinical decisions. While this study was small and did not show quantitative differences, perhaps because of the method of assessment, the approach should be pursued at a larger scale and in a more integrated way (perhaps even A/B testing in a live EHR) to measure the impact on the speed and quality of information retrieval and processing.

This study highlighted the challenges in the clinical setting, where context and provider preference affect decision-making, and sometimes even experts disagree about the next best step. More participants and more questions will be needed to confidently identify the quantitative impact of visualizations. However, there was broad excitement about and interest in the potential of visualization to display relationships and trends for the medical data of medically complex patients. Some physicians attributed noticing trends to the visualization in their feedback. The best visualization for decision-making is still unknown, but we can continue to work toward the best representation of the data we have for both providers and patients.



Conflict of interest

None declared.

Protection of Human and Animal Subjects

This research was reviewed and deemed exempt by the hospital's institutional review board.



References

• 1 Reiser SJ. The clinical record in medicine. Part 1: Learning from cases. Ann Intern Med 1991; 114 (10): 902-907
• 2 U.S. Department of Health & Human Services, Office of the Secretary. Health information technology: standards, implementation specifications, and certification criteria for electronic health record technology, 2014 edition; revisions to the Permanent Certification Program for Health Information Technology. 45 CFR Part 170, RIN 0991–AB82. 2012. Available at: http://www.gpo.gov/fdsys/pkg/FR-2012-09-04/pdf/2012-20982.pdf. Accessed January 24, 2020
• 3 Jha AK, DesRoches CM, Kralovec PD, Joshi MS. A progress report on electronic health records in U.S. hospitals. Health Aff (Millwood) 2010; 29 (10): 1951-1957
• 4 Jha AK, Ferris TG, Donelan K, et al. How common are electronic health records in the United States? A summary of the evidence. Health Aff (Millwood) 2006; 25 (06): w496-w507
• 5 Hsiao C-J, Hing E. Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001–2013. NCHS Data Brief No. 143, January 2014. Available at: http://www.cdc.gov/nchs/data/databriefs/db143.htm. Accessed April 2014
• 6 Bleich HL, Slack WV. Reflections on electronic medical records: when doctors will use them and when they will not. Int J Med Inform 2010; 79 (01): 1-4
• 7 Elting LS, Martin CG, Cantor SB, Rubenstein EB. Influence of data display formats on physician investigators' decisions to stop clinical trials: prospective trial with repeated measures. BMJ 1999; 318 (7197): 1527-1531
• 8 Walker JM. Influence of data display formats on decisions to stop clinical trials. Paper is misleading, like a sheep dressed in a wolf's clothing. BMJ 1999; 319 (7216): 1070
• 9 Marshall T, Mohammed MA, Rouse A. A randomized controlled trial of league tables and control charts as aids to health service decision-making. Int J Qual Health Care 2004; 16 (04): 309-315
• 10 Tan JKH, Benbasat I. Processing of graphical information: a decomposition taxonomy to match data extraction tasks and graphical representations. Inf Syst Res 1990: 416-439. Available at: http://connection.ebscohost.com/c/articles/4431032/processing-graphical-information-decomposition-taxonomy-match-data-extraction-tasks-graphical-representations
• 11 Kumar N, Benbasat I. The effect of relationship encoding, task type, and complexity on information representation: an empirical evaluation of 2D and 3D line graphs. Manage Inf Syst Q 2004; 28 (02): 255-281
• 12 Kim Y, Heer J. Assessing effects of task and data distribution on the effectiveness of visual encodings. Comput Graph Forum 2018. Available at: https://www.semanticscholar.org/paper/Assessing-Effects-of-Task-and-Data-Distribution-on-Kim-Heer/6979c6e6f385263cfd5dfc34d70e30dddd07778d. Accessed January 24, 2020
• 13 Demiralp Ç, Bernstein MS, Heer J. Learning perceptual kernels for visualization design. IEEE Trans Vis Comput Graph 2014; 20 (12): 1933-1942
• 14 Wu DTY, Chen AT, Manning JD, et al. Evaluating visual analytics for health informatics applications: a systematic review from the American Medical Informatics Association Visual Analytics Working Group Task Force on Evaluation. J Am Med Inform Assoc 2019; 26 (04): 314-323
• 15 Samal L, Wright A, Wong BT, Linder JA, Bates DW. Leveraging electronic health records to support chronic disease management: the need for temporal data views. Inform Prim Care 2011; 19 (02): 65-74
• 16 Bauer DT, Guerlain S, Brown PJ. The design and evaluation of a graphical display for laboratory data. J Am Med Inform Assoc 2010; 17 (04): 416-424
• 17 Torsvik T, Lillebo B, Mikkelsen G. Presentation of clinical laboratory results: an experimental comparison of four visualization techniques. J Am Med Inform Assoc 2013; 20 (02): 325-331
• 18 Mishuris RG, Yoder J, Wilson D, Mann D. Integrating data from an online diabetes prevention program into an electronic health record and clinical workflow, a design phase usability study. BMC Med Inform Decis Mak 2016; 16: 88
• 19 Plaisant C. The challenge of information visualization evaluation. In: Proceedings of the Working Conference on Advanced Visual Interfaces. Gallipoli, Italy: ACM; 2004. Available at: https://dl.acm.org/doi/10.1145/989863.989880. Accessed January 24, 2020
• 20 Friel SN, Curcio FR, Bright GW. Making sense of graphs: critical factors influencing comprehension and instructional implications. J Res Math Educ 2001; 32 (02): 124-158
• 21 Tufte ER. Beautiful Evidence. Cheshire, CT: Graphics Press; 2006
• 22 Tufte ER. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press; 1983
• 23 Tufte ER. Visual Explanations. Cheshire, CT: Graphics Press; 1997
• 24 Few S. Information Dashboard Design. Sebastopol, CA: O'Reilly; 2006
• 25 Few S. Now You See It: Simple Visualization Techniques for Quantitative Analysis. Oakland, CA: Analytics Press; 2009
• 26 Few S. Show Me the Numbers: Designing Tables and Graphs to Enlighten. Oakland, CA: Analytics Press; 2004
• 27 Google Developers. Google Charts. Available at: https://google-developers.appspot.com/chart/interactive/docs/index. Accessed January 24, 2020
• 28 Python Software Foundation. Python 2.7.5: Anaconda 1.8.0 (x86_64). 2013. Available at: http://www.python.org. Accessed January 24, 2020
• 29 R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2013. Available at: http://www.R-project.org/. Accessed January 24, 2020
• 30 StataCorp. Stata Statistical Software: Release 11. College Station, TX: StataCorp LP; 2009
• 31 Lewis JR. IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int J Hum Comput Interact 1995; 7 (01): 57-78
• 32 Shortliffe EH, Cimino JJ. Biomedical Informatics: Computer Applications in Health Care and Biomedicine. 4th ed. London: Springer; 2014
• 33 Krein SL, Hofer TP, Kerr EA, Hayward RA. Whom should we profile? Examining diabetes care practice variation among primary care providers, provider groups, and health care facilities. Health Serv Res 2002; 37 (05): 1159-1180
• 34 Brooks JM, Cook EA, Chapman CG, et al. Geographic variation in statin use for complex acute myocardial infarction patients: evidence of effective care? Med Care 2014; 52 (Suppl. 03): S37-S44
• 35 Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282 (15): 1458-1465
• 36 Elstein AS, Schwartz A. Clinical problem solving and diagnostic decision making: selective review of the cognitive literature. BMJ 2002; 324 (7339): 729-732
• 37 Patel VL, Groen GJ. Knowledge based solution strategies in medical reasoning. Cogn Sci 1986; 10 (01): 91-116
• 38 Thyvalikakath TP, Dziabiak MP, Johnson R, et al. Advancing cognitive engineering methods to support user interface design for electronic health records. Int J Med Inform 2014; 83 (04): 292-302

