Appl Clin Inform 2022; 13(05): 1015-1023
DOI: 10.1055/a-1942-6889
Research Article

Effect of Notes' Access and Complexity on OpenNotes' Utility

Amro Khasawneh (1), Ian Kratzke (2), Karthik Adapa (3), Lawrence Marks (3), Lukasz Mazur (3)

1   Industrial Engineering Department, School of Engineering, Mercer University, Macon, Georgia, United States
2   Department of Surgery, School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States
3   Department of Radiation Oncology, School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States
 

Abstract

Background Health care providers are now required to provide their patients access to their consultation and progress notes. Early research of this concept, known as “OpenNotes,” showed promising results in terms of provider acceptability and patient adoption, yet objective evaluations relating to patients' interactions with the notes are limited.

Objectives To assess the effect of the complexity level of notes and number of accesses (initial read vs. continuous access) on the user's performance, perceived usability, cognitive workload, and satisfaction with the notes.

Methods We used a 2 × 2 mixed-subjects experimental design with two independent variables: (1) note complexity at two levels (simple vs. complex) and (2) number of accesses to the notes at two levels (initial vs. continuous). Fifty-three participants were randomly assigned to receive either a simple or a complex radiation oncology clinical note and were tested on their understanding of the note content, first after an initial read and then with continuous access to the note. Performance was quantified by comparing each participant's answers to those developed by the research team and assigning a score of 0 to 100 based on the participant's understanding of the notes. Usability, cognitive workload, and satisfaction scores were quantified using validated tools.

Results Performance for understanding was significantly better in simple versus complex notes with continuous access (p = 0.001). Continuous access to the notes was also positively associated with satisfaction scores (p = 0.03). The overall perceived usability, cognitive workload, and satisfaction scores were considered low for both simple and complex notes.

Conclusion Simplifying notes can improve understanding of notes for patients/families. However, perceived usability, cognitive workload, and satisfaction with even the simplified notes were still low. To make notes more useful for patients and their families, there is a need for dramatic improvements to the overall usability and content of the notes.



Background and Significance

Electronic health records (EHRs) have now been widely adopted in the United States.[1] One potential advantage of EHRs is that patients can readily access their own health data.[2] Indeed, starting in 2021, government regulations require health care providers to give patients access to all health information in their electronic medical records, including consultation and progress notes[3] (often referred to as "OpenNotes"). This requirement followed many studies suggesting that access to notes leads to patients reporting greater control over their care.[4] In addition, studies showed that patients felt access to consultation and progress notes helped educate them and increased their adherence to medications.[3] [4] [5]

Overall, most studies report that patients wish to continue to have access to their notes, and only a few patients report feeling confused, worried, or offended by the notes.[3] [4] [6] [7] [8] [9] [10] [11] For example, in one survey analysis of 96 oncology clinicians and 3,418 patients with a cancer diagnosis, 70% of clinicians and 98% of patients indicated that OpenNotes is a "good idea,"[6] although 44% of the clinicians indicated that patients would be confused by their notes.[6] [12] [13] In another survey study of 88 patients seen in radiation oncology, 96% of patients reported that accessing notes improved their understanding of their condition, 94% reported an improved understanding of side effects, and 96% felt more confident about their treatment. On the other hand, some patients reported being more worried (11%), more confused (6%), or regretting reading the notes (4%).[8]

Most of the prior studies report clinicians' and patients' subjective opinions about access to notes, with no studies objectively measuring patients' understanding of the notes. In addition, there is limited research on how the complexity of these notes, and the time spent with them, affects patients' performance, perceived usability, cognitive workload, and satisfaction. Thus, the aim of this work was to assess the effect of the complexity level of notes, and of the degree of access to the notes, on users' performance for understanding the note content, perceived usability, cognitive workload, and satisfaction with the notes.



Methods

Participants

This study was completed by healthy volunteers, as surrogates for caregivers. We conducted a pilot analysis with 10 participants to calculate the sample size needed for this study. Our analysis suggested that ≈45 participants would be needed to detect statistical differences between the independent variables at a statistical power of 0.80 and a significance level of 0.05.[14] [15] Fifty-three participants completed this study (55% response rate); age range: 19 to 63 years, mean: 29.5, standard deviation: 9.4. [Table 1] provides a summary of the participants' demographics. Recruitment was done by sending emails to a variety of listservs at two large academic institutions in the United States, and through Research for Me,[16] a platform that provides research participant recruitment services. Participants were compensated with a $10 gift card each upon completion of the study.
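A sample size of this magnitude can be roughly reproduced with the standard normal-approximation formula for a two-sided, two-sample comparison. The sketch below is illustrative only; the assumed effect size (d ≈ 0.85) is our assumption and is not a value reported in the article:

```python
from math import ceil
from statistics import NormalDist

def two_sample_n(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison,
    using n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# An assumed large standardized effect (d ~ 0.85) yields roughly the
# ~45 total participants the authors report (22 per group, 44 total).
per_group = two_sample_n(0.85)
print(per_group, 2 * per_group)  # 22 44
```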

Table 1

Demographic information of the participants

Variable                           Number    %
Gender
  Male                             12        23
  Female                           41        77
Race
  White/Caucasian                  32        60
  Asian                            8         15
  African American                 4         8
  Hispanic/Latino                  4         8
  Multiple                         5         9
Prior experience with OpenNotes
  Yes                              31        58
  No                               22        42



Study Setting

We used a radiation oncology setting since the delivery of radiation therapy is often anxiety-provoking and the notes might thus be particularly reassuring (or worrisome) to patients. For the first independent variable (note complexity), an experienced radiation oncology provider wrote two versions of the same patient's note intended to represent "simple" versus "complex" versions (see [Supplementary Appendix A] [available in the online version] for the notes used in this study). The provider wrote a simple consultation note (the simple level) and then added information, including more technical terminology and detail, to create the complex level. To quantify the complexity of the notes, two human factors engineering researchers without any previous medical training conducted a content analysis (i.e., a qualitative research approach commonly used to categorize qualitative data).[17] [18] [19] The researchers categorized each sentence in the notes as (1) information known to the patient, (2) technical information, or (3) provider recommendations. Each researcher then coded each sentence as simple (if they could readily understand it) or complex (if they could not). The researchers coded the notes independently and resolved any disagreements by consensus.[18] [20] For each note, we calculated a complexity score based on the ratio of simple versus complex sentences. Prior to conducting the study, a few iterations of the notes were generated by the radiation oncology provider to achieve a difference in complexity level that was satisfactory to both researchers and the provider.
The difference was considered satisfactory when the complex:simple ratios for the information known to patients and provider recommendation categories (i.e., nontechnical information categories) in the complex note were more than twice as high as those in the simple note (see [Table 2] for details and summary results of the content analysis).
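The complexity scoring described above reduces to simple sentence-count arithmetic. The sketch below recomputes the complex:simple ratios from the sentence counts reported in Table 2 and checks the "more than twice as high" criterion for the two nontechnical categories:

```python
# Sentence counts (complex, simple) per category, from Table 2 of the article.
counts = {
    "information known by patient": {"simple_note": (10, 38), "complex_note": (30, 51)},
    "technical information":        {"simple_note": (24, 4),  "complex_note": (55, 9)},
    "provider recommendation":      {"simple_note": (3, 14),  "complex_note": (8, 10)},
}

def ratio(complex_n, simple_n):
    """Complex:simple sentence ratio for one category of one note."""
    return complex_n / simple_n

# Satisfaction criterion: in the nontechnical categories, the complex note's
# ratio must be more than twice the simple note's ratio.
for category in ("information known by patient", "provider recommendation"):
    r_simple = ratio(*counts[category]["simple_note"])
    r_complex = ratio(*counts[category]["complex_note"])
    print(category, round(r_simple, 2), round(r_complex, 2), r_complex > 2 * r_simple)
```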

Table 2

Content analysis results of the notes

                                   Simple OpenNotes              Complex OpenNotes
Code                               Total   Complex:simple ratio  Total   Complex:simple ratio
Information known by patient       48      0.26                  81      0.6
  Number of complex sentences      10                            30
  Number of simple sentences       38                            51
Technical information              28      6                     64      6.1
  Number of complex sentences      24                            55
  Number of simple sentences       4                             9
Provider recommendation            17      0.21                  28      0.8
  Number of complex sentences      3                             8
  Number of simple sentences       14                            10

For the second independent variable (number of accesses to notes), we asked the participants to answer questions about their understanding of the notes twice: once after an initial, single reading of the note, and again with continuous access to the note. This helped us determine whether low performance merely reflected participants reading the note once (and not being able to recall information) versus not being able to understand the information even with unlimited access to the note.



Experimental Design

We used a 2 × 2 mixed-subjects experimental design with two independent variables: (1) note complexity at two levels (simple vs. complex) and (2) number of accesses to the notes at two levels (initial vs. continuous). While controlling for prior experience with reading clinical notes, participants were randomly assigned to one of two conditions: simple versus complex notes. Participants were instructed to read the assigned notes as if they were a patient's family member (caregiver). The participants first read the note once (initial) and answered "performance" questions assessing their understanding of the note without being able to go back and read it again. Then, the participants were given continuous access to the note and were again asked the same series of performance questions. After completing each performance evaluation (initial vs. continuous), participants completed validated questionnaires measuring perceived usability, cognitive workload, and satisfaction.



Performance

Performance was quantified by comparing each participant's answers to the performance questionnaire (see [Supplementary Appendix B], available in the online version) and scored as the percent of “correct answers” (as developed by the research team; thus in a range of 0 to 100%). Therefore, correctly answering the performance questions would reflect higher understanding of the notes. In addition, we conducted a subanalysis on four of the performance questions that were considered clinically critical (physical exam key findings, recommended treatment options, recommended medications, and specialist referral).
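The percent-correct scoring can be sketched as follows; the question identifiers and answer key below are hypothetical placeholders for illustration, not the study's actual questionnaire items:

```python
def performance_score(responses, answer_key):
    """Percent of questions answered in agreement with the research
    team's key, yielding a score in the range 0-100."""
    correct = sum(responses.get(q) == a for q, a in answer_key.items())
    return 100 * correct / len(answer_key)

# Hypothetical 4-question clinically critical subset (IDs/answers illustrative).
key = {"exam_findings": "b", "treatment": "a", "medications": "c", "referral": "d"}
answers = {"exam_findings": "b", "treatment": "a", "medications": "x", "referral": "d"}
print(performance_score(answers, key))  # 75.0
```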



Perceived Usability

Perceived usability was quantified using the Systems Usability Scale (SUS).[21] [22] [23] The SUS is a valid, reliable, and widely used tool for measuring the usability of patient-facing interfaces.[24] [25] It is a 10-item questionnaire with a five-point rating scale for each item, ranging from strongly disagree to strongly agree. The outcome is a score from 0 [low] to 100 [high] calculated from the user's ratings of the 10 items, with higher scores indicating better perceived usability.[26] [27]
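For reference, a minimal sketch of the standard SUS scoring procedure (odd-numbered items contribute rating − 1, even-numbered items contribute 5 − rating, and the sum is multiplied by 2.5):

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5: odd items contribute
    (r - 1), even items contribute (5 - r); the sum is scaled by 2.5 to 0-100."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd)
                     for i, r in enumerate(responses)]
    return 2.5 * sum(contributions)

print(sus_score([3] * 10))  # neutral answers on every item -> 50.0
```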



Cognitive Workload

Cognitive workload was quantified using the National Aeronautical and Space Administration's Task Load Index (NASA-TLX),[28] a valid and reliable subjective measure. The NASA-TLX questionnaire evaluates cognitive workload using six dimensions (mental demand, physical demand, temporal demand, frustration, effort, and performance). It provides a global cognitive workload score from 0 [low] to 100 [high].
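For reference, a sketch of the weighted NASA-TLX computation, in which each dimension's 0-100 rating is weighted by the number of the 15 pairwise comparisons in which the participant selected that dimension. The article does not state whether the weighted or unweighted ("raw") variant was used, and the ratings and weights below are illustrative values, not study data:

```python
def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX global score: each 0-100 dimension rating is weighted
    by its pairwise-comparison count; the 15 weights sum to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[d] * weights[d] for d in ratings) / 15

# Illustrative single-participant ratings and weights (not data from the study).
ratings = {"mental": 70, "physical": 10, "temporal": 30,
           "performance": 50, "effort": 60, "frustration": 40}
weights = {"mental": 5, "physical": 0, "temporal": 2,
           "performance": 3, "effort": 3, "frustration": 2}
print(nasa_tlx(ratings, weights))
```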



Satisfaction

Satisfaction was quantified using a slightly modified version of a previously developed survey.[29] Participants rated their satisfaction with the information in the notes, the time required to read the notes, language used in the notes, and the overall design of the notes using a 5-point Likert scale (see [Supplementary Appendix C] [available in the online version] for the satisfaction survey used in this study). Total satisfaction scores were calculated by averaging scores of items for satisfaction with the information in the notes, the time required to read the notes, language used in the notes, and the overall design.



Results

Data Analysis

IBM SPSS Statistics 28.0.1.0 was used to analyze the data. We conducted multiple mixed-subjects ANOVAs (analyses of variance) to determine the effect of the independent variables (simple vs. complex note [between subjects] and initial vs. continuous access to the note [within subjects]) on performance, usability, cognitive workload, and satisfaction. The z-scores of the skewness and kurtosis were used to check for normality of the data. Mauchly's test was used to test the assumption of sphericity. Least significant difference adjustments were applied to test simple main effects at a statistical significance of p < 0.05. All simple pairwise comparisons were evaluated at an alpha level of 0.05. We included the demographic variables (e.g., age, education, and previous experience with reading clinical notes) in the analysis as covariates to account for their effect on the dependent variables. However, none of the covariates showed a statistically significant effect. A summary of the findings is provided in [Table 3].
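The skewness half of the z-score normality check can be sketched as follows, assuming the SPSS-style adjusted Fisher-Pearson skewness and its standard error (the kurtosis check is analogous); |z| > 1.96 is the usual rule of thumb for flagging non-normality at p < 0.05:

```python
from math import sqrt

def skewness_z(data):
    """z-score of sample skewness: adjusted Fisher-Pearson skewness (as
    reported by SPSS) divided by its standard error."""
    n = len(data)
    mean = sum(data) / n
    s = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample SD
    m3 = sum((x - mean) ** 3 for x in data) / n             # third central moment
    g1 = (n ** 2 / ((n - 1) * (n - 2))) * m3 / s ** 3       # adjusted skewness
    se = sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    return g1 / se

# A symmetric sample has zero skewness, hence z = 0.
print(skewness_z([1, 2, 3, 4, 5]))  # 0.0
```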

Table 3

Means and SDs for all measures

                        Access to     Simple notes       Complex notes
Measure                 notes         Mean     SD        Mean     SD
Performance             Initial       48.65    3.16      44.70    3.22
                        Continuous    74.88    3.16      53.50    3.22
Usability (SUS)         Initial       49.50    4.04      41.71    4.12
                        Continuous    53.5     4.15      43.21    4.24
Workload (NASA-TLX)     Initial       56.30    4.27      57.06    4.93
                        Continuous    51.78    4.11      55.60    4.75
Satisfaction            Initial       2.99     0.21      2.96     0.22
                        Continuous    3.18     0.24      3.2      0.24

Abbreviations: NASA-TLX, National Aeronautical and Space Administration's Task Load Index; SD, standard deviation; SUS, Systems Usability Scale.




Performance

Participants randomized to the simple notes with continuous access to the notes had better performance when compared to all the other experimental conditions (F (1, 50) = 11.78, p = 0.001, η2  = 0.19; [Fig. 1]). Descriptive statistics are provided in [Table 3]. The same result was seen when the analysis was limited to the four clinically critical questions (F (1, 50) = 6.16, p = 0.017, η2  = 0.11; [Fig. 2]).

Fig. 1 Effect of access to notes and notes' complexity on performance.
Fig. 2 Effect of access to notes and notes' complexity on performance (four items only).


Perceived Usability

There were no significant results associated with perceived usability (SUS scores; p > 0.05; [Table 3]).



Cognitive Workload

There were no significant results associated with cognitive workload (p > 0.05; [Table 3]), including analysis of individual dimensions of the NASA-TLX. Descriptive statistics of the combined data are provided in [Fig. 3] and [Table 4]. The results are combined since there were no significant differences.

Table 4

Means and SDs of cognitive workload scores (NASA-TLX) for combined results

Workload dimension    Mean     SD
Performance           46.70    26.10
Mental demand         64.51    22.19
Physical demand       9.27     15.00
Temporal demand       28.04    22.18
Frustration           43.34    29.13
Effort                55.53    23.63
Overall workload      55.20    18.32

Abbreviations: NASA-TLX, National Aeronautical and Space Administration's Task Load Index; SD, standard deviation.


Fig. 3 Cognitive workload scores (NASA-TLX) for combined results. NASA-TLX, National Aeronautical and Space Administration's Task Load Index.


Satisfaction

Participants indicated higher satisfaction when they had continuous access to the notes (M = 3.2) than after only the initial read (M = 2.9), a significant mean difference of 0.24 (95% confidence interval: 0.06–0.36; p = 0.008); the interaction between access and note complexity was not significant (F (1, 50) = 0.11, p = 0.74, η2 = 0.002; [Table 5]). The analysis of individual satisfaction items showed no statistically significant differences. Descriptive statistics of the combined data are provided in [Fig. 4] and [Table 5]. The results are combined since there were no significant differences.

Fig. 4 Satisfaction scores for combined results.
Table 5

Means and SDs for satisfaction items for combined results

Satisfaction item      Mean    SD
Information            3.06    1.38
Time                   3.60    1.35
Language               2.89    1.41
Design                 2.78    1.37
Overall satisfaction   3.09    1.19

Abbreviation: SD, standard deviation.




Discussion

Overall, our results suggest that performance was better with the simple note and with continuous access. Continuous access to the notes was also positively associated with satisfaction. The overall perceived usability, cognitive workload, and satisfaction scores were low. None of the covariates, including participants' previous experience with reading clinical notes, showed a statistically significant effect. This may be because our means of defining "experience" is imperfect, and, even if participants had experience with reading clinical notes in other fields, this may not have helped them understand a radiation oncology note specifically. In the subsections below, we discuss each dependent variable.

Performance

Participants' performance (i.e., understanding) was best with continuous access to simple notes. This is most likely due to the participants being able to go back to the notes and find answers to the performance questions. Results also suggest that participants were not able to recall even the clinically most critical information, which might be somewhat worrisome.[30] Even with continuous access to simple notes, performance was relatively low (overall ≈75% of questions were answered correctly), and no participants answered all questions correctly. Overall, our findings suggest that concerted efforts are needed to further simplify and improve usability of the notes. This might also suggest that, at least for some patients, it might not be practical to expect complete understanding via the note alone, and might potentially limit the ultimate utility of OpenNotes.[31] [32] [33] [34]



Perceived Usability

There were no statistical differences in perceived usability between simple and complex notes, nor between initial and continuous access. In all groups, the mean perceived usability score was low, i.e., ≈50–54 on the 100-point scale.[35] Thus, simplifying the notes' content in this study was insufficient to improve perceived usability. In the literature, the average usability score is 68,[36] and it is common practice in the human–computer interaction field to consider any score below this average unsatisfactory. Thus, the usability of notes needs further improvement.



Cognitive Workload

Participants perceived that reading the notes imposed a high cognitive workload, i.e., higher than what is considered "optimal" in the literature, especially for tasks with very low physical demands such as reading.[37] This could be due to the heavy use of medical jargon in the notes, which adds burden on the user, even though the simple note was explicitly written to reduce jargon. Thus, techniques to further reduce cognitive workload during interactions with notes are needed.



Satisfaction

Participants were slightly more satisfied when they had continuous access to the notes. However, satisfaction scores were generally low in both experimental conditions. Ideally, satisfaction scores would be 4 or above (i.e., satisfied or extremely satisfied). While multiple studies report that patients were highly satisfied to receive access to their notes,[4] [8] [10] [31] [38] [39] [40] [41] these conclusions were generally based on a broad subjective question rather than a formal tool designed to assess satisfaction. Our findings suggest that participants are not satisfied with the content of the notes (information, time to read, language used, and design). Thus, strategies to further improve satisfaction with the notes are needed.



Limitations of Findings

Several limitations constrain the generalizability of our findings. First, this was a remote study without a moderator present while the participants completed it. We therefore asked participants to complete the study in a setting with limited exposure to distractions and interruptions, though compliance cannot be confirmed. Second, recruitment was done through an online platform, so our sample consisted of people who are familiar and comfortable with online tools, and their perception of reading providers' notes could differ from that of people who do not use technology frequently. Third, participants were asked to assume they were the caregiver of a patient, but they had no background information about the patient. In the real world, caregivers would know the patient to at least some degree and thus be more familiar with their medical history. To mitigate this, we used relatively generic notes and standardized the notes across all participants. Fourth, the comparison between initial and continuous access to the notes is not randomized (as it is not possible to randomize this variable), and the participants had already seen the performance questions by the time they had continuous access to the notes (since they had just completed the assessment after their initial reading). Despite this, performance for understanding was low even in the continuous-access setting. Finally, we used only radiation oncology notes, making our findings perhaps most applicable to this clinical domain. Findings from this study may or may not generalize to other fields.



Conclusion

While participants randomized to the simple notes with continuous access performed better than those randomized to the complex notes, their understanding, perceived usability, and satisfaction with the notes were still low, and their cognitive workload remained high. While patients and their families have a strong interest in accessing their clinical notes, and the majority of patients expect meaningful benefits from reading them, our results suggest that making the notes useful will require dramatic improvements to their usability and content. Doing so might also allow clinical notes to serve as a means of providing instructions and resources for patients (and families).



Clinical Reference Statement

This study suggests that the current way of writing clinical notes does not meet patients' needs. The current version of the clinical notes was associated with low performance (understanding of the notes), low usability, high workload, and low satisfaction. This could potentially lead patients to misinterpret the information in the notes and make poor decisions regarding their health. To improve patient engagement and decision making by providing patients access to their clinical notes, the usability and content of the notes must be improved to better address patients' needs.



Multiple choice questions

  1. When patients read clinical notes, which of the following has an impact on performance (understanding the content of the notes)?

    a. Notes' complexity (simple vs. complex)

    b. Access to notes (initial vs. continuous)

    c. Both notes' complexity and access to notes

    d. None

    Correct Answer: The correct answer is option c. Participants' performance (i.e., understanding) was best with continuous access to simple notes.

  2. When patients read clinical notes, which of the following has an impact on their cognitive workload?

    a. Notes' complexity (simple vs. complex)

    b. Access to notes (initial vs. continuous)

    c. Both notes' complexity and access to notes

    d. None

    Correct Answer: The correct answer is option d. Neither variable had a significant effect on participants' cognitive workload. All participants reported high cognitive workload regardless of condition.



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

Participation was voluntary and did not pose undue risk. All participants read and signed an informed consent form and were given all needed information when deciding whether to participate. This was a remote, unmoderated study fully implemented in Qualtrics.[42] The study protocol was reviewed and approved by the University of North Carolina at Chapel Hill Institutional Review Board (IRB) under reference ID: 338124.


Supplementary Material


Address for correspondence

Amro Khasawneh, PhD
Industrial Engineering Department, School of Engineering, Mercer University
Macon, Georgia 31207
United States   

Publication History

Received: 09 June 2022

Accepted: 11 September 2022

Accepted Manuscript online:
14 September 2022

Article published online:
26 October 2022

© 2022. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

