DOI: 10.1055/a-2008-4036
A RE-AIM Evaluation of a Visualization-Based Electronic Patient-Reported Outcome System
Abstract
Objectives Health care systems are primarily collecting patient-reported outcomes (PROs) for research and clinical care using proprietary, institution- and disease-specific tools for remote assessment. The purpose of this study was to conduct a Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) evaluation of a scalable electronic PRO (ePRO) reporting and visualization system in a single-arm study.
Methods The “mi.symptoms” ePRO system was designed using gerontechnological design principles to ensure high usability among older adults. The system enables longitudinal reporting of disease-agnostic ePROs and includes patient-facing PRO visualizations. We conducted an evaluation of the implementation of the system guided by the RE-AIM framework. Quantitative data were analyzed using basic descriptive statistics, and qualitative data were analyzed using directed content analysis.
Results Reach—the total reach of the study was 70 participants (median age: 69, 31% female, 17% Black or African American, 27% reported not having enough financial resources). Effectiveness—half (51%) of participants completed the 2-week follow-up survey and 36% completed all follow-up surveys. Adoption—the desire for increased self-knowledge, the value of tracking symptoms, and altruism motivated participants to adopt the tool. Implementation—the predisposing factor was access to, and comfort with, computers. Three enabling factors were incorporation into routines, multimodal nudges, and ease of use. Maintenance—reinforcing factors were perceived usefulness of viewing symptom reports with the tool and understanding the value of sustained symptom tracking in general.
Conclusion Challenges in ePRO reporting, particularly sustained patient engagement, remain. Nonetheless, freely available, scalable, disease-agnostic systems may pave the road toward inclusion of a more diverse range of health systems and patients in ePRO collection and use.
Background and Significance
Health care systems and professionals need efficient, patient-centered tools to support the collection of patient-reported outcomes (PROs) for research and clinical care. The Meaningful Use federal financial incentive legislation now requires collection of PROs as quality measures.[1] [2] Clinicians are increasingly supportive of incorporating the measurement of PROs into their clinical practice[3] because they improve the ability to detect and measure changes in patient symptoms and quality of life.[4] [5] Patients are also supportive; they report that PRO data provide them with an indicator of their current health status, such as symptoms, functioning, and quality of life, which can be tracked over time.[1] [6]
Understanding changes in PROs, such as gradually worsening fatigue, can raise patient awareness of insidious changes in health status that are otherwise difficult to quantify, and can facilitate patient–provider communication to guide treatment decisions aligned with patients' goals of care.[1] [6] This is particularly pertinent for patients with advanced cardiac conditions, such as heart failure (HF), in whom changes in symptoms need to be responded to immediately to effectively guide clinical management; in fact, for HF patients, self-management has been demonstrated to reduce the odds of hospitalization by up to 20%.[7] The ability to capture PROs electronically (electronic PROs [ePROs]) offers added capabilities including more streamlined data collection, aggregation, visualization, and sharing with parties involved in the patient's care compared with paper-based reporting.[8] [9] [10]
Most ePRO systems were developed by individual institutions or research groups for specific clinical use cases and are customized to the specifications of their technical infrastructure.[11] For example, “eSyM,” an electronic health record (EHR)-integrated PRO platform launched across six health systems, supports PRO reporting in cancer.[12] Additionally, several vendors offer generic ePRO platforms that can be customized to specific institutions and clinical domains. For example, Epic's patient portal, MyChart, enables practices to collect specific PROs from their patients.[13] However, regardless of the platform, building, customizing, and maintaining ePRO systems requires significant time, money, and resources. While larger academic health systems may be able to devote the required time and financial resources to developing customized ePRO solutions, most smaller hospitals and practices cannot, precluding many from electronically collecting PROs at all.
Additionally, most of the health systems that support ePRO collection do not return PRO data to patients in meaningful ways. When longitudinal PRO data are returned to patients, they are usually presented in formats that do not take into consideration low population graph literacy (40% in the United States)[14] and numeracy skills, especially among hospitalized older adults.[15] Most PRO scales are complicated to score and interpret, and may require knowledge of statistical concepts, such as T-scores, that many patients do not have. Therefore, patients need additional support when interpreting PRO data. Studies have demonstrated that data visualizations improve interpretation and contextualization of information for patients.[16] [17]
A scalable, customizable technical framework for the remote assessment of ePROs, which includes data sharing with patients in comprehensible formats, has the potential to democratize the collection and use of ePROs in health care to include smaller health systems and practices, and a wider range of patients. We developed an ePRO reporting and visualization system with the potential to scale in future work using Research Electronic Data Capture (REDCap) survey administration and database management software. This system is unique because it leverages the scalability of REDCap, which is used by thousands of institutions worldwide,[18] together with customized functionality enabled through Application Programming Interfaces (APIs) and built-in functions and modules.
Objectives
The objective of this study was to report on the implementation of an ePRO system with 70 older adults with HF. The evaluation was guided by a widely used implementation science framework: the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework.
Methods
Description of the System
The novel “mi.symptoms” ePRO system was developed based on gerontechnological design principles and modified across several iterations through usability testing with patients, described in [Supplementary Material S1] (available in the online version). Gerontechnological design principles aim to make technology more usable specifically for older adults by leveraging research on the biological, psychological, social, and medical aspects of aging and incorporating them into design considerations.[19] Prior to implementing the system in this study, we also conducted a heuristic evaluation of the ePRO system with three expert evaluators following established methods,[20] and modified “mi.symptoms” based on heuristic violations ([Supplementary Material S2], available in the online version). The ePRO system has two main components to support longitudinal PRO reporting and PRO visualization ([Fig. 1]).
Patient-Reported Outcome Reporting
PRO reporting refers to participants self-reporting their health status via PROs to clinicians and researchers. Specifically, the system collects Patient-Reported Outcomes Measurement Information System (PROMIS) measures, a standardized, disease-agnostic set of PROs measuring several dimensions of physical, mental, and social health in adults and children. When a patient completes the PRO surveys for the first time, a new record is generated in REDCap and assigned a patient-specific identifier, which is used for longitudinal data collection. After the patient completes the initial set of surveys, REDCap automatically sends invitations to complete a new set of surveys at specified intervals for a specified period. Each new set of surveys is logged under the same patient's record to ensure longitudinal data are connected.
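To make the mechanics concrete, the following minimal sketch illustrates how longitudinal records of this kind can be exported through REDCap's API; the endpoint URL, API token, and event names are placeholders rather than the actual “mi.symptoms” configuration.

```python
# Minimal sketch: export longitudinal survey records from a REDCap project.
# The URL, token, and event names below are illustrative placeholders,
# not the actual "mi.symptoms" project configuration.
import requests

REDCAP_URL = "https://redcap.example-institution.edu/api/"
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"

payload = {
    "token": API_TOKEN,
    "content": "record",            # export patient records
    "format": "json",
    "type": "flat",                 # one row per record per event
    "events[0]": "baseline_arm_1",  # hypothetical event names for the
    "events[1]": "week_2_arm_1",    # longitudinal follow-up schedule
}

response = requests.post(REDCAP_URL, data=payload, timeout=30)
response.raise_for_status()

# Each exported row carries the record identifier plus the event name,
# so all follow-up surveys remain linked to the same patient record.
for row in response.json():
    print(row["record_id"], row["redcap_event_name"])
```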
Patients may select whether they prefer to receive follow-up surveys via email or SMS text message. We intentionally gave participants this choice because reports suggest that over 90% of U.S. adults have an email address[21] and 97% have a cellphone[22]; thus, we hoped to capture the majority of potentially eligible participants with one modality or the other. Participants may enter the email address or cell phone number of a caregiver if preferred. Emails are sent directly by REDCap. SMS text messages are sent using Twilio, a secure communication service that connects to REDCap through an API. For security and privacy reasons (e.g., compliance with the Health Insurance Portability and Accountability Act [HIPAA]), participants' phone numbers and text messages are not permanently logged on Twilio's servers but remain securely in REDCap. Automated reminders are sent to patients who do not complete the survey within 2 days. The content of the emails and text messages can be customized to a specific project and translated into different languages as needed.
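REDCap's built-in Twilio integration handles survey invitations natively; purely as an illustration of the underlying mechanism, the sketch below sends a survey invitation by SMS using Twilio's Python library. Credentials, phone numbers, and the survey link are placeholders.

```python
# Illustrative sketch of the SMS mechanism that REDCap's Twilio
# integration provides natively; credentials and numbers are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID_PLACEHOLDER", "AUTH_TOKEN_PLACEHOLDER")

message = client.messages.create(
    to="+15551234567",     # participant's (or caregiver's) number
    from_="+15557654321",  # project's Twilio number
    body=(
        "It's time to complete your symptom surveys. "
        "Tap here to begin: https://redcap.example-institution.edu/surveys/?s=XXXXXX"
    ),
)
print(message.sid)  # Twilio message identifier, useful for delivery logging
```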
Patient-Reported Outcome Visualization
Immediately upon completion of the series of PRO surveys, patients are automatically redirected to a webpage external to REDCap displaying a report containing the results of the PRO surveys just completed, which are compared with the previous results from the same surveys. A snapshot of the report is shown in [Fig. 2] and a complete mock report is shown in [Supplementary Material S1] (available in the online version). A PDF copy of the report is automatically sent to the patient's email address on file for their records. The report uses visual analogies to display symptoms over time which we have previously evaluated and validated with patients.[15] Participants can jump to view specific symptoms using a menu, quickly compare symptom severity using a colored bar graph, and share a PDF of the report with caregivers or their care team via email. Patients may click an information icon next to each symptom, which takes them to web-based educational resources about the symptoms developed by the Mayo Clinic, the American Heart Association, and the American Psychological Association ([Supplementary Material S1], available in the online version).
The report was developed in close collaboration with developers from our institution's REDCap team within our Clinical and Translational Sciences Center. The team used visualizations created by a professional graphic designer and a responsive web design to accommodate the different screen sizes and orientations on which patients may view the report. They were also provided a file mapping PROMIS T-scores to base-10 scores and corresponding levels of symptom severity (high, medium, low), which was developed based on minimally important differences for this patient population as previously reported.[15] The report does not include any patient identifiers and no data are stored on the page itself; rather, it serves as a shell displaying data that are stored in the REDCap database. This design is intended to maintain patient privacy and allow patients to access the report without a login, because logins have been shown to reduce patient adoption and sustained engagement with technology.[23]
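As one concrete example of the mapping file described above, the sketch below converts a PROMIS T-score into a base-10 score and severity band. The cut points and rescaling shown are hypothetical placeholders; the study's actual thresholds were derived from minimally important differences reported elsewhere.[15]

```python
# Hypothetical sketch of mapping a PROMIS T-score (mean 50, SD 10 in the
# reference population) to a base-10 score and severity band. The cut
# points below are illustrative placeholders, not the study's actual
# thresholds, which were based on minimally important differences.
def map_t_score(t_score: float) -> tuple[int, str]:
    # Rescale a plausible T-score range (roughly 30-80) onto 0-10.
    base10 = round(min(max((t_score - 30) / 5, 0), 10))
    if t_score < 55:
        severity = "low"
    elif t_score < 65:
        severity = "medium"
    else:
        severity = "high"
    return base10, severity

print(map_t_score(62.0))  # -> (6, 'medium') for a moderately elevated symptom
```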
Study Design
Overall design: we implemented an ePRO system from January 2020 through June 2021 with adults with HF recruited from NewYork-Presbyterian (NYP) hospital. We then conducted a mixed-methods evaluation study following the RE-AIM framework, which guides evaluation of intervention implementations.[24]
Eligibility criteria: patients were eligible if they had a diagnosis of HF confirmed clinically by an HF cardiologist based on clinical exam, laboratory parameters, and diagnostic testing including echocardiography; were able to read and speak English; were aged 21 years or older; were reachable by telephone, email, or text message; and were willing and able to provide informed consent. Patients were excluded if they had severe cognitive impairment, a major psychiatric illness, or a concomitant terminal illness that would preclude participation. The study protocol was approved by the Weill Cornell Medicine Institutional Review Board.
RE-AIM data sources: data sources included quantitative survey data from the 70 patients who participated in the study and qualitative data from a stratified sub-sample of 22 patients collected through semi-structured interviews. “Reach” was measured by examining the characteristics and symptoms of the patients who completed the study. “Effectiveness” was measured by examining the ability of patients to engage with the tool over time, defined as the number of weeks patients reported data during the 8-week implementation period. We evaluated “adoption,” “implementation,” and “maintenance” through semi-structured interviews guided by an interview guide with questions specifically targeting each construct.
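For clarity, a minimal sketch of how such an engagement metric might be computed from per-time-point completion flags is shown below; the data frame layout, column names, and example values are assumptions for illustration, not the study's actual analysis code.

```python
# Sketch: compute the "effectiveness" engagement metric (number of
# follow-up time points with reported data) from completion flags.
# Column names and example data are hypothetical.
import pandas as pd

completions = pd.DataFrame(
    {
        "record_id": [1, 2, 3],
        "week_2": [1, 1, 0],  # 1 = survey completed at that time point
        "week_4": [1, 0, 0],
        "week_6": [1, 0, 0],
        "week_8": [1, 0, 0],
    }
)

follow_up_cols = ["week_2", "week_4", "week_6", "week_8"]
completions["weeks_reported"] = completions[follow_up_cols].sum(axis=1)

# Share of participants completing each follow-up survey, and the
# distribution of the per-participant engagement metric.
print(completions[follow_up_cols].mean())
print(completions["weeks_reported"].value_counts())
```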
Procedures: eligible patients were identified via the institutional EHRs and approached during hospitalizations in inpatient cardiology units and following visits at outpatient HF practices at NYP in close collaboration with physicians in these locations. Patients who agreed to participate in the study signed an e-Consent document in REDCap and completed baseline questions about demographic characteristics, technology experience, health literacy measured using the brief health literacy screener,[25] and PROMIS measures. Patients specified their preferred method of follow-up (email or text message) and provided email addresses or phone numbers.
After enrollment in the study, automated email or text message alerts were sent 2, 4, 6, and 8 weeks after enrollment inviting participants to complete follow-up surveys. Automated reminder emails or text messages were sent 2 days after the initial alerts if participants did not complete a survey. Recognizing that automated emails occasionally filtered into spam folders in patient inboxes, a research coordinator followed up with participants who did not complete follow-up surveys after 1 week, first through a personalized email and then by phone. The research coordinator provided technical assistance and offered to complete surveys by phone. At study completion, a stratified sample of patients was invited to participate in 30-minute qualitative exit interviews by phone to discuss their experiences using the ePRO system. We intentionally stratified the recruitment of participants based on their completion rates to collect a range of perspectives. All participants received a $25 gift card for their time and effort.
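The escalation logic described above (automated reminder at 2 days, coordinator follow-up by personalized email and then phone after 1 week) can be expressed as a simple rule; the sketch below is an illustrative reconstruction under those stated timings, not the study's actual tooling.

```python
# Illustrative reconstruction of the reminder escalation rule: automated
# reminder 2 days after the initial alert, coordinator follow-up (email,
# then phone) after 1 week. Not the study's actual tooling.
from datetime import date

def next_contact(alert_sent: date, completed: bool, today: date) -> str:
    if completed:
        return "none"
    days_open = (today - alert_sent).days
    if days_open >= 7:
        return "coordinator follow-up (personalized email, then phone)"
    if days_open >= 2:
        return "automated email/SMS reminder"
    return "wait"

print(next_contact(date(2020, 3, 1), completed=False, today=date(2020, 3, 9)))
```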
All quantitative data were summarized using standard descriptive statistics (frequencies and measures of central tendency) in R statistical software. Qualitative data were coded in Dedoose using directed content analysis[26] by two independent coders, both PhD-prepared researchers with cardiac nursing expertise. Directed content analysis is a qualitative method that uses a predetermined framework, such as RE-AIM, to guide the analysis.[27] We undertook multiple steps to ensure rigor, including collaborative coding and maintaining audit trails.[28] Following this approach, we created a preliminary codebook based on the adoption, implementation, and maintenance constructs. Two coders (M.R.T. and S.M.) met three times for 1-hour sessions to collaboratively code transcripts using the codebook, adding new sub-themes as they emerged. They then completed all remaining analyses independently and met weekly to compare emerging results and resolve coding discrepancies through discussion. Memos of the evolving analysis were maintained throughout the process. Memos and evolving themes were shared with the study principal investigator (R.M.C.) and discussed until consensus on the final set of results was reached.
Results
Reach
Among the 70 participants, the median age was 69 years; one-third identified as female, 17% as Black or African American, and 11% as Hispanic/Latino, and nearly one-third reported not having enough financial resources ([Table 1]). More than half self-identified as having a disability of any kind (including but not limited to vision, hearing, or mobility), and nearly half had inadequate health literacy. At baseline, participants reported the greatest burden of low physical function, fatigue, pain, and anxiety ([Supplementary Material S3], available in the online version).
Table 1 Characteristics of study participants and exit interview participants

| Characteristic | Study participants (n = 70) | Exit interview participants (n = 22) |
|---|---|---|
| Age, median (IQR) | 69.0 (56.5–75.5) | 72.0 (56.3–75.0) |
| Female gender | 22 (31%) | 6 (27%) |
| Race | | |
| White | 50 (71%) | 15 (68%) |
| Black or African American | 12 (17%) | 4 (17%) |
| Unsure/prefer not to answer | 4 (6%) | 0 (0%) |
| Asian | 2 (4%) | 1 (5%) |
| Native American | 1 (1%) | 1 (5%) |
| Mixed | 1 (1%) | 1 (5%) |
| Ethnicity | | |
| Hispanic/Latino | 8 (11%) | 3 (14%) |
| Not Hispanic/Latino | 55 (79%) | 17 (76%) |
| Unsure/prefer not to answer | 7 (10%) | 2 (10%) |
| Financial resources | | |
| More than enough | 14 (20%) | 5 (23%) |
| Enough | 37 (53%) | 12 (54%) |
| Not enough | 19 (27%) | 5 (23%) |
| Education | | |
| High school or less | 14 (20%) | 1 (5%) |
| College | 35 (50%) | 10 (45%) |
| Graduate | 21 (30%) | 11 (50%) |
| Self-reported disability[a] | 38 (54%) | 12 (54%) |
| Adequate health literacy[b] | 39 (56%) | 16 (73%) |
| Possess email address | 63 (90%) | 22 (100%) |
| Hold Medicaid insurance | 10 (14%) | 3 (14%) |

Abbreviation: IQR, interquartile range.
a Includes any disability self-identified by the participants, including mobility, hearing, vision, or other type of disability.
b Measured using the brief health literacy screener.[25]
Effectiveness
Approximately half (51%) of participants completed the 2-week follow-up survey, 43% completed the 4-week surveys, 40% the 6-week surveys, and 36% the 8-week surveys ([Table 2]). We examined additional characteristics of survey completion to gain a more in-depth understanding of engagement with the tool. At baseline, 50 (71%) participants opted for email notifications to complete follow-up surveys and 20 (29%) opted for text-message notifications. A larger share of participants who received email notifications completed surveys over the follow-up period compared with participants receiving text messages: of those completing follow-up surveys at 2, 4, 6, and 8 weeks, 30 of 36 (83%), 25 of 30 (83%), 24 of 28 (86%), and 22 of 25 (88%), respectively, had received notifications via email rather than text message. The median age of participants completing follow-up surveys was lower than the median age of all participants enrolled in the study. Participants did not differ by other demographic characteristics or health literacy.
[Table 2, summarizing survey completion at each time point, is not reproduced here. Note: percentages were calculated based on participants who completed surveys at each time point; IQR, interquartile range.]
Most participants completed surveys within 1 day of receiving notifications at the beginning of the follow-up period, although responses slowed slightly toward the end of the follow-up period. The median survey completion time increased throughout the study, from 6 minutes at the 2-week follow-up to 12 minutes at the 8-week follow-up. Throughout the study, most participants completed surveys in the afternoon, between 12:00 and 6:00 p.m.
Adoption
The adoption, implementation, and maintenance constructs were measured through qualitative, semi-structured interviews with a stratified sample of 22 participants after 8 weeks of PRO symptom reporting ([Table 1]). The median age of interview participants was 72 years, and a higher proportion of interview participants held graduate degrees (50%) compared with the overall sample of study participants (30%). Although we attempted to sample a range of participants based on survey completion, the majority of participants whom we were able to contact and who agreed to an interview were higher completers. Specifically, 20 (91%) completed 2-week surveys, and 18 (82%) completed surveys at 4, 6, and 8 weeks. Demographic characteristics were otherwise similar to the larger study sample except for education. [Supplementary Material S3] (available in the online version) contains additional illustrative quotes for each of the themes described below.
Participants listed several reasons for adopting the intervention. A few participants described a desire for increased self-knowledge, to “be more aware of what's going on in my health (participant 32).” Some participants also recognized the value in tracking symptoms. By far the greatest motivation for adoption was altruistic; several participants expressed a desire to contribute data that might improve knowledge for other patients like them in the future: “I appreciate giving the opportunity to help in research…to see if I can be of help to anyone else with the symptoms that I have; the condition that I have (participant 40).”
Implementation
Factors supporting implementation were categorized as either predisposing or enabling implementation. The predisposing factor was access to, and comfort with, computers. Most participants reported preferring to use the system on the computer versus a smartphone, and some participants expressed resistance to using smartphones in general: “No, I like the computer. I use the cellphone only if there is no other way to do things (participant 15).”
Three enabling factors were incorporation into routines, multimodal nudges, and ease of use. Participants reported that it was easiest to complete surveys when they were incorporated into their existing routines, for example at times that they would typically be on their computer anyway: “I think I did not have a specified time when I did them. It was kind of when I was on my laptop checking emails or I had a spare minute I would complete it then (participant 15).”
Many participants reported that the biggest barrier to completing surveys was simply forgetting to complete them. Therefore, nudges utilizing multiple approaches such as email reminders, text message reminders, and reminders from family and caregivers (multimodal nudges) were reportedly most effective in reminding them to complete surveys. One participant stated: “Well, a phone call to remind me to do it might be good. There was sometimes when I saw it in the email and went on to do something else and would almost forget. My wife is here. She reminds me (participant 40).” Multimodal nudges were also helpful because several participants noted issues with the email reminder notifications; some noted difficulty distinguishing emails for different follow-up time points while others suspected that emails may have been filtered into spam folders, which would have required effort to identify: “I guess it was spam emails. Depending how I feel, I don't go through my emails every day (participant 32).”
Ease of use was another important enabling factor. Almost no patients reported technical issues with the reporting system itself: “It was pretty straightforward. It's almost sort of idiot proof, okay. It's not really hard to do. Yes, you sit there and the instructions are fairly clear with the questions, and the answering process is not terribly complicated (participant 34).” However, a few participants reported that the length of the surveys was challenging, which created barriers to completion: “If it were simpler and shorter, it would always be better (participant 42).”
Maintenance
We identified two reinforcing factors that supported maintenance of the intervention: perceived usefulness of viewing symptom reports with the tool and understanding the value of sustained symptom tracking in general. The majority of patients found the symptom reports helpful and wanted to keep them as records of their current health status. One participant stated: “I usually keep copies of everything, and preferably a hard copy or PDF copy, would be beneficial....every so often, I go back and look at these things just to see, you know, what was it back then? (participant 39).” Some participants reported openness to sharing reports when there was a clear clinical rationale.
Moreover, participants reported that the visualizations concretized abstract symptoms into an image that reflected their lived experience at a specific point in time: “Something tangible that I can look at (participant 21).” For many, the visualizations provided feedback they used to either validate how they were feeling or encourage positive behavior change. However, others noted discordance between their symptom perceptions and the report summaries, which caused distrust. This was particularly pronounced when participants viewed mental health symptom summaries: “Well, I answered truthfully but I was a little bit down. I feel like…they made it seem like it was much higher than I felt. I don't get down that much, like once a month maybe. I mean, like anxious, I only get anxious if I'm not feeling well, and that's very rare too. I felt like this wasn't me for an accurate description. You know, I'm in a much better frame of mind since I'm physically better (participant 22).”
Whether or not participants saw value in tracking symptoms in general, versus following more objective indicators such as weight, was another reinforcing factor. Many participants felt they were able to use their own self-perceptions of symptoms as a substitute for formal reporting and tracking: “I'm sort of lukewarm about it. I would do it as part of a study. But if the study was over, I probably wouldn't do it. I would continue to follow my weight, and my own sense of how I was feeling (participant 19).” One participant noted the need to more explicitly state the reason that symptom tracking is beneficial in addition to other self-management behaviors such as tracking weight and taking medications: “I think you probably need to do a little bit better job of explaining to the participant… the purpose of the survey, the takeaway. In other words, what you, as an institution, hope to gain from this, and also, what you hope to provide to the patient in terms of feedback and end results. And I'm not quite sure how clear you made that (participant 34).”
Discussion
Scalable and customizable ePRO systems have the potential to meaningfully contribute patients' perspectives on their own health to both clinical care and research. In this study, we describe an ePRO system that addresses key challenges related to ePRO reporting, and report on the implementation of the system by older adults with HF. The system, built using REDCap software, uses standardized, disease-agnostic PROMIS measures, which supports use across multiple patient populations, including patients with multiple comorbid conditions, in contrast to disease-specific ePRO systems. While REDCap is used at over 6,000 institutions worldwide, it may not be freely available at all small and local health care institutions. Nonetheless, the concept implemented in this study, in which a website displaying a de-identified symptom report is built on top of HIPAA-compliant database management and survey software, may be more broadly scalable across settings.
One drawback to many academic and commercial applications is the lack of patient-facing visualizations which are understandable to patients with low literacy. The opportunity to include custom visualizations that have higher comprehension than standard line graphs was a major driver of our decision to build the system using REDCap versus other platforms. The incorporation of these visualizations as well as other gerontechnological design principles (simple navigation, large buttons, verbose error messages) was intended to improve accessibility to a range of patients, including those with limited technology experience, health literacy, numeracy, or graph literacy. This may also explain why participants reported almost no technical issues using the system, although future work will need to formally evaluate usability in a larger and more diverse sample using objective measures.
This work also highlights several ongoing challenges and areas of future inquiry, particularly the perpetual challenge of sustaining patient engagement over time, which has been well described.[29] Prior studies of PRO reporting suggest that attrition may be reduced by returning data to patients in ways they can understand and find useful—closing the loop of data collection perpetuates further data collection.[30] [31] [32] Therefore, we anticipated that the inclusion of a visualization-based PRO report would bolster engagement. In fact, we did receive positive feedback about the symptom reports in qualitative interviews, consistent with other prior work on PRO visualizations.[15] [33] [34] However, in this study we found low engagement even 2 weeks postrecruitment, which continued to decline over time. Because most participants who completed the first follow-up survey (2 weeks) went on to complete all remaining surveys, it is possible that investing time and resources to aid patients in overcoming the initial hurdle of beginning to independently report PROs could lead to sustained engagement over time.
Additionally, our study showed that some participants questioned PRO scores that did not align with their mental models of their symptom experiences, and others questioned the value of formal symptom reporting compared with tracking objective metrics such as weight. Educating patients on the value of self-reporting symptom changes, which may be insidious and therefore difficult to detect yet highly correlated with impending exacerbations, may also improve engagement.[7] Moreover, in employing user-centered design for engagement, there is a constant tension between broad usability and customization to one user subgroup's needs. In this study, we aimed for inclusive design, or designing for those who may have the lowest literacy, numeracy, or technology comfort,[35] which may have resulted in a tool that was less engaging for those looking for more sophisticated solutions.
Multimodal reminder strategies including email, text messaging, and phone calls, or combinations of these strategies, could also help improve engagement and would address the primary barriers reported in this study: simply forgetting to complete surveys or losing track of an email. Our findings show that more participants who received reminders via email completed follow-up surveys compared with text messages, and in qualitative interviews many participants reported preferring to complete surveys on computers. This suggests that, despite the proliferation of smartphones enabling mobile data reporting, computer-based reporting may still be more effective for many participants. In fact, all of the participants who completed follow-up surveys possessed an email address (even if they opted to receive alerts via text message). In addition, clear labeling in the reminder notifications about which surveys to complete, and for which time point, could also improve completion rates. Other studies have also reported success in asking patients to electronically complete PROs as part of the check-in process for clinical visits.[29] A final strategy for optimizing engagement could be to reduce response burden through computer adaptive testing (CAT). The ePRO system reported here does not utilize CAT because the summary report requires specific data points for the variables that are visualized, which may be omitted through CAT. However, CAT PROMIS measures and other CAT surveys are widely available in REDCap. Thus, an area for future exploration is the ability to use CAT measures to reduce participant reporting burden while preserving the ability to visualize the data in summary reports.
Another ongoing challenge is integration of PROs into clinical care. Data sharing among patients, clinicians, and institutional warehouses is complicated by issues surrounding patient access, data governance, and legal/regulatory compliance.[36] [37] EHR integration is challenging, but possible with adequate stakeholder buy-in and consideration of the complex workflows and data management strategies needed to accommodate such data.[29] [38] Although EHR integration was not implemented in the current version of the ePRO system, the inclusion of standardized measures such as PROMIS aids EHR integration efforts. An important unanswered question is how clinicians prefer to visualize and use PRO data when providing clinical care, as studies have shown that PROs are not being widely used to inform practice.[39] [40]
Finally, there is also a philosophical question about the degree of EHR integration that is desired by patients, and more broadly, whether patients should be in control of data sharing across recipients and health systems. The ePRO system reported here places the patient in control, but these questions remain a rich area of future bioethics inquiry.
Study limitations include that the study was conducted at a single academic medical center with a predominantly white, male, non-Hispanic, highly educated patient sample. Participants were also slightly younger than the median age of HF patients, which is estimated to be in the mid-70s,[41] and most participants had an email address, one indicator of technology experience. Implementation evaluation in larger and more diverse samples is needed to ensure accessibility to all patients who may benefit from using this system. Moreover, interview participants were highly engaged compared with the rest of the sample, limiting our insights related to sustained engagement.
Conclusion
Transforming the landscape of ePRO systems from numerous proprietary, institution- and disease-specific tools to freely available, scalable, disease-agnostic systems has the potential to democratize PRO reporting and sharing. Currently, patients who may benefit from reporting, sharing, and maintaining records of PROs are unable to do so if the health systems and research enterprises in which they seek care and participate in research do not widely offer ePRO systems. Systems that attempt to address these barriers, such as the one described here, may pave the road toward inclusion of a wider, more diverse range of health systems and patients in ePRO collection and use.
Clinical Relevance Statement
Patient-reported outcomes (PROs) can raise patient awareness of insidious changes in health status that are otherwise difficult to quantify and can facilitate patient–provider communication to guide treatment decisions aligned with patients' goals of care. The ability to capture PROs electronically (ePROs) offers added capabilities including more streamlined data collection, aggregation, visualization, and sharing with parties involved in the patient's care compared with paper-based reporting. Transforming the landscape of ePRO systems from numerous proprietary, institution- and disease-specific tools to freely available, scalable, disease-agnostic systems has the potential to democratize PRO reporting and sharing.
Multiple-Choice Questions
1. It is estimated that what proportion of the United States population has low graph literacy?
   a. 5%
   b. 20%
   c. 40%
   d. 75%

   Correct Answer: The correct answer is option c. Prior studies estimate that 40% of the United States population cannot accurately read a graph.

2. When implementing patient-reported outcome monitoring systems, what is one of the biggest challenges to anticipate among patients?
   a. Declining engagement over time
   b. Confusion answering PRO surveys
   c. Inability to use simple survey software
   d. Unwillingness to share health data with their care team

   Correct Answer: The correct answer is option a. While the other options may present challenges, this study and several others confirm that sustained patient engagement remains a top issue in collecting PROs and other patient-generated data.

3. One proven strategy to improve patient comprehension of their own health data is:
   a. Withholding some or all data from patients whom care teams think will struggle with comprehension.
   b. Data visualizations developed using user-centered design with patients.
   c. Sophisticated data analytics techniques using advanced computational methods.
   d. Using medical definitions to define unknown terms.

   Correct Answer: The correct answer is option b. This study and several others have demonstrated that data visualizations improve objective comprehension beyond text alone. Options c and d may increase confusion because they are not comprehensible for patients with limited health literacy, numeracy, and graph literacy. Option a may be patronizing and introduce implicit bias because perceptions of health literacy and willingness to engage with one's own health data may be linked to harmful stereotypes.
Conflict of Interest
M.R.T. is a consultant for Boston Scientific Corp. and has equity in Iris OB Health Inc (New York). The remaining authors have no disclosures.
Protection of Human and Animal Subjects
This study was reviewed and approved by the Weill Cornell Medicine Institutional Review Board.
References
- 1 Lavallee DC, Chenok KE, Love RM. et al. Incorporating patient-reported outcomes into health care to engage patients and enhance care. Health Aff (Millwood) 2016; 35 (04) 575-582
- 2 National Quality Forum. Patient-reported outcomes. Accessed January 22, 2023 at: https://www.qualityforum.org/Publications/2012/12/Patient-Reported_Outcomes_in_Performance_Measurement.aspx
- 3 Warsame R, D'Souza A. Patient reported outcomes have arrived: a practical overview for clinicians in using patient reported outcomes in oncology. Mayo Clin Proc 2019; 94 (11) 2291-2301
- 4 Basch E. Patient-reported outcomes - harnessing patients' voices to improve clinical care. N Engl J Med 2017; 376 (02) 105-108
- 5 Burns DJP, Arora J, Okunade O. et al. International Consortium for Health Outcomes Measurement (ICHOM): standardized patient-centered outcomes measurement set for heart failure patients. JACC Heart Fail 2020; 8 (03) 212-222
- 6 Field J, Holmes MM, Newell D. PROMs data: can it be used to make decisions for individual patients? A narrative review. Patient Relat Outcome Meas 2019; 10: 233-241
- 7 Jonkman NH, Westland H, Groenwold RHH. et al. Do self-management interventions work in patients with heart failure? An individual patient data meta-analysis. Circulation 2016; 133 (12) 1189-1198
- 8 Vodicka E, Kim K, Devine EB, Gnanasakthy A, Scoggins JF, Patrick DL. Inclusion of patient-reported outcome measures in registered clinical trials: evidence from ClinicalTrials.gov (2007-2013). Contemp Clin Trials 2015; 43: 1-9
- 9 Scoggins JF, Patrick DL. The use of patient-reported outcomes instruments in registered clinical trials: evidence from ClinicalTrials.gov. Contemp Clin Trials 2009; 30 (04) 289-292
- 10 Schwartzberg L. Electronic patient-reported outcomes: the time is ripe for integration into patient care and clinical research. Am Soc Clin Oncol Educ Book 2016; 35: e89-e96
- 11 Aiyegbusi OL, Nair D, Peipert JD, Schick-Makaroff K, Mucsi I. A narrative review of current evidence supporting the implementation of electronic patient-reported outcome measures in the management of chronic diseases. Ther Adv Chronic Dis 2021; 12: 20406223211015958
- 12 Hassett MJ, Cronin C, Tsou TC. et al. eSyM: an electronic health record-integrated patient-reported outcomes-based cancer symptom management program used by six diverse health systems. JCO Clin Cancer Inform 2022; 6 (06) e2100137
- 13 Zylla DM, Gilmore GE, Steele GL. et al. Collection of electronic patient-reported symptoms in patients with advanced cancer using Epic MyChart surveys. Support Care Cancer 2020; 28 (07) 3153-3163
- 14 Galesic M, Garcia-Retamero R. Graph literacy: a cross-cultural comparison. Med Decis Making 2011; 31 (03) 444-457
- 15 Reading Turchioe M, Grossman LV, Myers AC, Baik D, Goyal P, Masterson Creber RM. Visual analogies, not graphs, increase patients' comprehension of changes in their health status. J Am Med Inform Assoc 2020; 27 (05) 677-689
- 16 Hawley ST, Zikmund-Fisher B, Ubel P, Jancovic A, Lucas T, Fagerlin A. The impact of the format of graphical presentation on health-related knowledge and treatment choices. Patient Educ Couns 2008; 73 (03) 448-455
- 17 Zikmund-Fisher BJ, Scherer AM, Witteman HO. et al. Graphics help patients distinguish between urgent and non-urgent deviations in laboratory test results. J Am Med Inform Assoc 2017; 24 (03) 520-528
- 18 REDCap (Research Electronic Data Capture). Accessed January 22, 2023 at: https://www.project-redcap.org/
- 19 Masterson Creber RM, Hickey KT, Maurer MS. Gerontechnologies for older patients with heart failure: what is the role of smartphones, tablets, and remote monitoring devices in improving symptom monitoring and self-care management?. Curr Cardiovasc Risk Rep 2016; 10 (10) 30
- 20 Zhang J, Johnson TR, Patel VL, Paige DL, Kubose T. Using usability heuristics to evaluate patient safety of medical devices. J Biomed Inform 2003; 36 (1–2): 23-30
- 21 E-mail usage in the United States - statistics & facts. Statista. Accessed November 18, 2022 at: https://www.statista.com/topics/4295/e-mail-usage-in-the-united-states/
- 22 Mobile fact sheet. Pew Research Center: Internet, Science & Tech. Published April 7, 2021. Accessed November 18, 2022 at: https://www.pewresearch.org/internet/fact-sheet/mobile/
- 23 Baldwin JL, Singh H, Sittig DF, Giardina TD. Patient portals and health apps: pitfalls, promises, and what one might learn from the other. Healthc (Amst) 2017; 5 (03) 81-85
- 24 Glasgow RE, Harden SM, Gaglio B. et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health 2019; 7: 64
- 25 Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med 2004; 36 (08) 588-594
- 26 Assarroudi A, Heshmati Nabavi F, Armat MR, Ebadi A, Vaismoradi M. Directed qualitative content analysis: the description and elaboration of its underpinning methods and data analysis process. J Res Nurs 2018; 23 (01) 42-55
- 27 Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005; 15 (09) 1277-1288
- 28 Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today 2004; 24 (02) 105-112
- 29 Rosett HA, Herring K, Ratliff W, Koontz BF, Zafar Y, LeBlanc TW. Integration of electronic patient-reported outcomes into clinical workflows within the Epic electronic medical record. J Clin Oncol 2019; 37 (31, suppl): 102
- 30 Reading Turchioe M, Burgermaster M, Mitchell EG, Desai PM, Mamykina L. Adapting the stage-based model of personal informatics for low-resource communities in the context of type 2 diabetes. J Biomed Inform 2020; 110: 103572
- 31 Reading M, Baik D, Beauchemin M, Hickey KT, Merrill JA. Factors influencing sustained engagement with ECG self-monitoring: perspectives from patients and health care providers. Appl Clin Inform 2018; 9 (04) 772-781
- 32 Miyamoto SW, Henderson S, Young HM, Pande A, Han JJ. Tracking health data is not enough: a qualitative exploration of the role of healthcare partnerships and mhealth technology to promote physical activity and to sustain behavior change. JMIR Mhealth Uhealth 2016; 4 (01) e5
- 33 Snyder LE, Phan DF, Williams KC. et al. Comprehension, utility, and preferences of prostate cancer survivors for visual timelines of patient-reported outcomes co-designed for limited graph literacy: meters and emojis over comics. J Am Med Inform Assoc 2022; 29 (11) 1838-1846
- 34 Stonbraker S, Porras T, Schnall R. Patient preferences for visualization of longitudinal patient-reported outcomes data. J Am Med Inform Assoc 2020; 27 (02) 212-224
- 35 Benda NC, Montague E, Valdez RS. Chapter 15 - Design for inclusivity. In: Sethumadhavan A, Sasangohar F, eds. Design for Health. Cambridge, MA: Academic Press; 2020: 305-322
- 36 Platt J, Spector-Bagdady K, Platt T. et al. Ethical, legal, and social implications of learning health systems. Learn Health Syst 2018; 2 (01) e10051
- 37 Thorpe JH, Cartwright-Smith L, Gray E, Mongeon M. Legal and ethical architecture for PCOR data. George Washington University; 2017. Accessed January 22, 2023 at: https://www.healthit.gov/sites/default/files/page/2018-06/PCOR%20Architecture%20%28MERGE%29%20updated%20Appendix%20B.pdf
- 38 Gensheimer SG, Wu AW, Snyder CF. PRO-EHR Users' Guide Steering Group, PRO-EHR Users' Guide Working Group. Oh, the places we'll go: patient-reported outcomes and electronic health records. Patient 2018; 11 (06) 591-598
- 39 Brundage MD, Wu AW, Rivera YM, Snyder C. Promoting effective use of patient-reported outcomes in clinical practice: themes from a “Methods Tool kit” paper series. J Clin Epidemiol 2020; 122: 153-159
- 40 Skovlund PC, Ravn S, Seibaek L, Thaysen HV, Lomborg K, Nielsen BK. The development of PROmunication: a training-tool for clinicians using patient-reported outcomes to promote patient-centred communication in clinical cancer settings. J Patient Rep Outcomes 2020; 4 (01) 10
- 41 Christiansen MN, Køber L, Weeke P. et al. Age-specific trends in incidence, mortality, and comorbidities of heart failure in Denmark, 1995 to 2012. Circulation 2017; 135 (13) 1214-1223
Publication History
Received: 08 August 2022
Accepted: 04 January 2023
Accepted Manuscript online: 05 January 2023
Article published online: 22 March 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany