CC BY-NC-ND 4.0 · Appl Clin Inform 2024; 15(01): 145-154
DOI: 10.1055/a-2235-9557
Case Report

Seamless Integration of Computer-Adaptive Patient Reported Outcomes into an Electronic Health Record

1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
2   Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
Zeeshan Butt
3   Phreesia, Inc, Clinical Content, Wilmington, Delaware, United States
4   Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
5   Department of Nursing Quality, Stanford Health Care, Stanford, California, United States
,
1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
6   Department of General Internal Medicine, Feinberg School of Medicine, Northwestern University and Northwestern Memorial HealthCare, Chicago, Illinois, United States
,
Ryan Chmiel
7   Department of Information Services, Northwestern Memorial HealthCare, Chicago, Illinois, United States
,
Federico Almaraz
7   Department of Information Services, Northwestern Memorial HealthCare, Chicago, Illinois, United States
,
Michael Schachter
7   Department of Information Services, Northwestern Memorial HealthCare, Chicago, Illinois, United States
,
8   Clinical and Translational Sciences Institute, Northwestern University, Chicago, Illinois, United States
,
Michelle Langer
1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
,
Justin Starren
1   Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
2   Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, United States
8   Clinical and Translational Sciences Institute, Northwestern University, Chicago, Illinois, United States
Funding The initial development of NMPRO was funded by the Vice Dean for Scientific Affairs and Graduate Education at the Feinberg School of Medicine. NMPRO and J.S. were funded by UL1TR000150 and UL1TR001422 from the National Center for Advancing Translational Sciences (NCATS). Evaluation and lessons-learned development was funded by U01TR001806 from NCATS. K.N. was supported by NIH/NCI training grant CA193193. In-kind implementation support was provided by Northwestern Memorial HealthCare.
 

Abstract

Background Patient-reported outcome (PRO) measures have become an essential component of quality measurement, quality improvement, and capturing the voice of the patient in clinical care. In 2004, the National Institutes of Health endorsed the importance of PROs by initiating the Patient-Reported Outcomes Measurement Information System (PROMIS), which leverages computer-adaptive tests (CATs) to reduce patient burden while maintaining measurement precision. Historically, PROMIS CATs have been used in a large number of research studies outside the electronic health record (EHR), but growing demand for clinical use of PROs requires creative information technology solutions for integration into the EHR.

Objectives This paper describes the tight integration of PROMIS CATs into the Epic Systems EHR at a large academic medical center; we describe the process of creating a secure, automatic connection between Epic and the application programming interface (API) that selects and scores CAT items.

Methods The overarching strategy was to make CATs appear indistinguishable from conventional measures to clinical users, patients, and the EHR software itself. We implemented CATs in Epic without compromising patient data security by creating custom middleware within the organization's existing middleware framework. This software brokered communication between the Assessment Center API, which handled item selection and scoring, and Epic, which handled item presentation and results. The middleware seamlessly administered CATs alongside conventional, fixed-length PROs while maintaining the display characteristics and functions of other Epic measures, including automatic display of PROMIS scores in the patient's chart. Pilot implementation revealed differing workflows among clinicians using the software.

Results The middleware software was adopted in 27 clinics across the hospital system. In the first 2 years of hospital-wide implementation, 793 providers collected 70,446 PROs from patients using this system.

Conclusion This project demonstrated the importance of regular communication across interdisciplinary teams in the design and development of clinical software. It also demonstrated that implementation relies on buy-in from clinical partners as they integrate new tools into their existing clinical workflow.



Background and Significance

Patient-reported outcome (PRO) measures are an essential component of quality measurement, quality improvement, and capturing the voice of the patient in clinical care and research.[1] [2] In 2004, the National Institutes of Health (NIH) endorsed the importance of PROs by initiating the Patient-Reported Outcomes Measurement Information System (PROMIS). The primary goal of PROMIS is to standardize the measurement of common symptoms, functions, and other aspects of self-reported health to enable efficient and interpretable clinical trial and clinical practice PRO applications.[3] [4] Each PROMIS measure addresses a specific symptom such as fatigue, physical function, dyspnea, social function, and so on. Currently, PROMIS includes over 300 measures of physical, mental, and social function in both adult and pediatric populations.[5]

A critical component of PROMIS has been the development of computer-adaptive tests (CATs).[6] CATs were developed using item response theory: they administer maximally informative items (questions) selected from a large bank of items addressing a specific symptom. In short, the items in each CAT are tailored to the respondent based on their responses to prior items. [Fig. 1] shows a schematic representation of CAT administration. Each successive item is selected by a probabilistic model that takes into account the statistical properties of the items themselves as well as the respondent's prior responses. This statistical approach maximizes measurement precision while minimizing measure length.[6] The reduced length yields precise scores without undue respondent burden.[7] Response burden is associated with less accurate PRO completion, especially for patients with low literacy or clinics with high patient volume, as patients may run out of time before completing a measure or quickly fill in answers to reach the end of longer measures.[8] Fixed-length PRO measures typically contain more items than CATs and take longer to administer.[9]

Fig. 1 Computer-adaptive testing event loop. The survey begins with an assumption of an average T-score of 50, the general population norm. Based on the patient's response to the first question, the next item is selected to give maximal additional information. The cycle is repeated until the confidence in the result is sufficiently high (in other words, the standard error is sufficiently low) or the maximum number of questions is reached.
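
To make the event loop in [Fig. 1] concrete, the sketch below implements a deliberately simplified adaptive loop in Python. It assumes a dichotomous one-parameter (Rasch-type) model with a small hypothetical item bank, estimates the latent score by expected a posteriori (EAP) integration over a grid, and stops when the standard error falls below a threshold or a maximum item count is reached. Operational PROMIS CATs use polytomous graded-response items and the Assessment Center scoring engine; the item names, difficulty values, and thresholds here are illustrative only.

```python
import math

# Hypothetical item difficulties on the theta (logit) scale; not PROMIS parameters.
ITEM_BANK = {
    "fatigue_01": -1.0, "fatigue_02": -0.5, "fatigue_03": 0.0,
    "fatigue_04": 0.5, "fatigue_05": 1.0, "fatigue_06": 1.5,
}

GRID = [i / 10 for i in range(-40, 41)]           # theta grid from -4 to 4
PRIOR = [math.exp(-t * t / 2) for t in GRID]      # standard normal prior (unnormalized)

def p_endorse(theta, difficulty):
    """Probability of endorsing an item under a one-parameter logistic model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def eap(responses):
    """Expected a posteriori theta estimate and its standard error."""
    posterior = []
    for t, prior in zip(GRID, PRIOR):
        like = prior
        for difficulty, answer in responses:
            p = p_endorse(t, difficulty)
            like *= p if answer else (1.0 - p)
        posterior.append(like)
    total = sum(posterior)
    mean = sum(t * w for t, w in zip(GRID, posterior)) / total
    var = sum((t - mean) ** 2 * w for t, w in zip(GRID, posterior)) / total
    return mean, math.sqrt(var)

def next_item(theta, administered):
    """Pick the unadministered item with maximal Fisher information at theta."""
    candidates = {k: v for k, v in ITEM_BANK.items() if k not in administered}
    info = lambda d: p_endorse(theta, d) * (1.0 - p_endorse(theta, d))
    return max(candidates, key=lambda k: info(candidates[k]))

def run_cat(get_answer, se_stop=0.4, max_items=4):
    theta, se, responses, administered = 0.0, float("inf"), [], set()
    while se > se_stop and len(administered) < max_items:
        item = next_item(theta, administered)
        administered.add(item)
        responses.append((ITEM_BANK[item], get_answer(item)))
        theta, se = eap(responses)
    t_score = 50 + 10 * theta     # report on a PROMIS-style T-score metric
    return t_score, se, sorted(administered)

if __name__ == "__main__":
    print(run_cat(lambda item: True))   # a respondent endorsing every item asked
```

The final conversion of 50 + 10 × theta reflects the T-score metric referenced in [Fig. 1], in which 50 represents the general population mean.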

PROMIS CATs addressing pain, fatigue, physical function, social function, and affect have each demonstrated clinical validity across a wide range of health conditions.[10]

PROMIS CATs were initially accessed and administered via a web-based application called Assessment Center (https://www.assessmentcenter.net/).[4] [11] However, Assessment Center was not originally designed for clinical use and thus had no features for integrating with an electronic health record (EHR). Consequently, early methods of integrating PROMIS CATs with EHRs relied on a multistep, manual process. For example, patients could first be sent a weblink in a secure patient-messaging portal. The patient would then log into Assessment Center and complete the assigned CATs. Next, their provider would log onto their own Assessment Center account to access the patient's scores. Finally, the provider would enter the scores into the patient's notes within the EHR. This multistep, manual (“loose”) coupling of CATs and the EHR has several disadvantages: (1) it requires patients to deal with multiple systems, each with a different look-and-feel; (2) it increases both complexity and privacy risk by maintaining patient identifiers and survey schedules in multiple systems; (3) it often results in solutions that are not easily leveraged across different contexts (e.g., a medical assistant manually transferring scores from an Assessment Center report into the EHR); and (4) it creates a delay between PRO administration and entry into the EHR, which limits PRO use in clinical encounters. For EHR software that supports PRO collection, such as Epic (Epic Systems, Verona, WI), this loose coupling also means that PROMIS CAT scores are not displayed alongside other PRO results and cannot take advantage of Epic's data visualization features.

To address the problems of PROMIS CAT integration into EHRs across multiple health systems, the EHR Access to Seamless Integration of Patient-Reported Outcomes (EASIPRO) consortium, funded by the NIH National Center for Advancing Translational Sciences (NCATS), pooled the efforts of nine institutions. This case report describes the technical approach of one specific institution, Northwestern Medicine (NM). NM was the first hospital system to fully integrate PROMIS CATs into Epic. The NM project to implement PROMIS CATs in Epic was titled Northwestern Medicine Patient-Reported Outcomes (NMPRO).



Objectives

For clinicians to utilize PROMIS CAT scores in clinical encounters, CATs must be integrated seamlessly and automatically into the EHR workflow.[12] However, CATs require real-time score computation, which is generally housed outside of the EHR. Thus, the main objective of NMPRO was to develop custom software that tightly integrated Assessment Center functionality with the Epic EHR for seamless, automatic administration and display of PROMIS CAT scores in clinical encounters.



Methods

For the design and software development phase that launched NMPRO, an interdisciplinary project team of 24 members was formed. The team included representatives from the health system's clinical information technology (IT) department who specialized in research-focused programming, research informatics staff, academic specialists from PROMIS, hospital quality assurance, clinic management, and clinician champions from two departments: Orthopedic Surgery and the Cancer Center. These members were organized into two groups: (1) the working group, the IT and research informatics members who would build the NMPRO software, which met biweekly; and (2) the steering committee, the remaining academic and clinical partners whose expertise would inform development, which met with the working group quarterly. The joint project team collaborated on all aspects of the project, including determination of design criteria, workflow modeling, software architecture, middleware design and development, testing, pilot implementation, and monitoring of hospital-wide implementation. Bringing all stakeholders into one team significantly reduced development time because clinical and academic concerns could be addressed in early design phases rather than after software development.

Design Criteria

To begin, the project team identified design criteria (see [Table 1]). The clinical members of the project team, including providers and clinical staff, first specified their initial needs, which were then refined through iterative discussions among all team members. Although patients were not directly represented in the design phase, academic partners from the PROMIS team drew on their previous patient-centered implementation experience, as well as published literature on the patient experience, to ensure that patient needs were addressed (e.g., in step 6 of [Table 1]).

Table 1

Northwestern Medicine Patient-Reported Outcomes (NMPRO) design criteria

For each process step, the system should support:

1. Ordering
   a. Computer-adaptive tests (CATs) orderable by clinicians using the normal Epic electronic health record (EHR) ordering process
   b. CATs triggerable by several kinds of events, such as a clinic visit or a surgical procedure
2. Scheduling
   a. Creation of preset timed series of CATs that are ordered or triggered once but delivered to the patient at specified intervals
3. Bundling
   a. Multiple patient-reported outcome (PRO) instruments bundled into a single orderable unit
   b. Ability to combine CATs and conventional, fixed-length PROs in a single bundle
4. Monitoring
   a. Clinician able to monitor completion status of CATs
5. Notification
   a. Patient notified that a CAT is due
   b. Clinician notified when a CAT is complete and results are ready for review
   c. Selected results identified and pushed to designated staff for immediate attention (e.g., a severe Depression score referred to social work)
   d. Results routed to different staff for different CAT domains or score ranges
6. Completion
   a. Patient able to complete a CAT through the EHR portal with a familiar look and feel
   b. Each CAT item has a vertical layout of response options, as recommended by psychometricians
   c. Patient able to complete a CAT in the waiting area upon arrival at an appointment if it was not completed in the EHR portal
   d. Patient able to stop and restart without loss of prior answers
   e. Patient able to complete a CAT in clinic even if they have not activated an EHR portal account
7. Result delivery
   a. Results delivered to the clinician faster than the patient can walk from the waiting area to the examination room
8. Result storage and display
   a. Results stored with other survey-type data in the EHR, not in a generic, catch-all result type
   b. Results displayed using EHR-internal display routines
   c. Graphing and trending of results over time
   d. Results transferred to the hospital's Enterprise Data Warehouse with other EHR results
9. Alerting
   a. Triggering of clinical decision support rules based on results
10. Scaling
   a. System scalable to the entire enterprise



Software Architecture and Middleware Development

Next, the clinical IT team members defined the software architecture necessary to achieve the stated design criteria. Based on the hospital's existing IT infrastructure, they identified three components as needing development: a way to access Assessment Center's CAT administration functions separately from its study management functions; custom middleware, within the hospital's existing middleware framework, to manage the connection between Epic and Assessment Center; and custom code within Epic (see [Fig. 2]). Academic team members from PROMIS were crucial in this phase, as their knowledge of the theoretical background and statistical structure of CATs ensured that CATs could be accurately administered within the software.

Fig. 2 Project design architecture. The main software developed for this project was the custom survey management middleware (top left) housed within an existing Northwestern Memorial Healthcare enterprise service bus (“NMH Framework”). The NMPRO project also developed custom code for multiple aspects of Epic (blue box). NMPRO made use of the newly developed Assessment Center API (green). Patient CAT scores were stored within the NMH Research Database within the NMH Framework for access by the middleware and Epic (bottom). API, application programming interface; NMPRO, Northwestern Medicine Patient-Reported Outcomes; PROMIS, Patient-Reported Outcomes Measurement Information System.

Assessment Center Application Programming Interface

To address the first hurdle, the Assessment Center Application Programming Interface (AC-API) was created to support the administration of individual CATs without requiring use of Assessment Center study management functions.[4] [11] [13]



Northwestern Medicine Patient-Reported Outcomes Middleware

Next was the creation of custom middleware. The NMPRO middleware was housed within the hospital's existing general-purpose middleware framework for integrating external software systems into Epic. The primary role of the NMPRO middleware was to manage and map states between Epic and AC-API. CATs are highly stateful: when a patient responds to a given item, their cumulative score is adjusted, and the adjusted score is then used to determine the next item to display, or whether to stop item administration altogether (see [Fig. 3]).

Fig. 3 Swimlane diagram of the initiation of a PROMIS CAT survey. CAT, computer-adaptive test; EHR, electronic health record; PRO, patient-reported outcome measures.
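
The sketch below illustrates, under assumed interfaces, the state-mapping role just described: the middleware keeps one record per in-progress survey, pairing the EHR's survey instance identifier with the scoring service's session identifier, and relays each answer until the scoring service indicates that the CAT is complete. The `AdaptiveScoringService` protocol, the method names, and the field names are hypothetical stand-ins, not the published AC-API or Epic contracts.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Protocol

class AdaptiveScoringService(Protocol):
    """Stand-in contract for an adaptive scoring service such as the AC-API."""
    def start_session(self, measure_id: str) -> str: ...
    def next_item(self, session_id: str, last_answer: Optional[int]) -> Optional[dict]: ...
    def final_score(self, session_id: str) -> float: ...

@dataclass
class CatSession:
    epic_survey_id: str              # survey instance identifier on the EHR side
    ac_session_id: str               # CAT session identifier on the scoring side
    current_item: Optional[dict] = None
    completed: bool = False
    score: Optional[float] = None

class SurveyMiddleware:
    """Maps EHR survey instances to scoring sessions and relays answers between them."""
    def __init__(self, scoring: AdaptiveScoringService) -> None:
        self.scoring = scoring
        self.sessions: Dict[str, CatSession] = {}

    def start_cat(self, epic_survey_id: str, measure_id: str) -> dict:
        ac_id = self.scoring.start_session(measure_id)
        session = CatSession(epic_survey_id, ac_id)
        session.current_item = self.scoring.next_item(ac_id, None)
        self.sessions[epic_survey_id] = session
        return session.current_item

    def record_answer(self, epic_survey_id: str, answer: int) -> CatSession:
        session = self.sessions[epic_survey_id]
        session.current_item = self.scoring.next_item(session.ac_session_id, answer)
        if session.current_item is None:     # scoring service says the CAT is finished
            session.completed = True
            session.score = self.scoring.final_score(session.ac_session_id)
        return session

if __name__ == "__main__":
    class FakeScoring:
        """Asks a single item, then reports a fixed score."""
        def start_session(self, measure_id): return "sess-1"
        def next_item(self, session_id, last_answer):
            return None if last_answer is not None else {"id": "PF_021", "text": "..."}
        def final_score(self, session_id): return 47.2

    mw = SurveyMiddleware(FakeScoring())
    print(mw.start_cat("epic-survey-123", "PROMIS Physical Function"))
    print(mw.record_answer("epic-survey-123", 3))
```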

The second major role of the middleware was creating a survey object within Epic that could be mapped to the appropriate PROMIS CAT items and to the CAT's final score. We constructed each CAT as a very large questionnaire, with each question corresponding to an item in the CAT item bank (e.g., 173 items for the Physical Function item bank). The AC-API directed the middleware to display the appropriate items according to the statistical model (e.g., 4–12 specific items, as described in [Fig. 1]). The result is a survey that Epic recognizes as having many unanswered items but that the AC-API can still score. The Epic model of a CAT is shown in [Fig. 4]. In addition to the questions representing the item bank, an extra scoring question was created so that the AC-API could report the final score of the CAT.

Fig. 4 Data model for integration of CATs into conventional EHR format. A dummy questionnaire is created in the EHR that duplicates all items in the CAT item bank. The CAT is modeled as a fixed-length PRO with many unanswered questions. CAT, computer-adaptive test; EHR, electronic health record; PRO, patient-reported outcome.
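
The data model in [Fig. 4] can be sketched as follows: the EHR-side record reserves one answer slot for every item in the bank plus an extra slot for the reported T-score, and a completed CAT fills only the handful of slots that were actually administered. The identifiers (such as `PF_004`) and field names below are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class QuestionnaireRecord:
    """EHR-side view of a completed CAT: one slot per bank item, mostly unanswered."""
    measure: str
    answers: Dict[str, Optional[int]]     # item identifier -> response (or None)
    t_score: Optional[float] = None       # the extra "scoring question"

def build_record(measure: str, item_bank: List[str],
                 administered: Dict[str, int], t_score: float) -> QuestionnaireRecord:
    # Every bank item gets a slot; only the 4-12 administered items are filled in.
    answers = {item_id: administered.get(item_id) for item_id in item_bank}
    return QuestionnaireRecord(measure, answers, t_score)

if __name__ == "__main__":
    bank = [f"PF_{i:03d}" for i in range(1, 174)]     # e.g., a 173-item Physical Function bank
    record = build_record("PROMIS Physical Function CAT", bank,
                          {"PF_004": 3, "PF_021": 2, "PF_090": 4, "PF_112": 3}, 47.2)
    answered = sum(v is not None for v in record.answers.values())
    print(f"{answered} of {len(record.answers)} items answered; T-score {record.t_score}")
```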

Finally, the middleware supported combining multiple CATs, or even CATs and other PROs, into a single assessment to be presented to the patient and clinician. For example, in [Fig. 5], four PROMIS CATs are combined into one assessment and scores are displayed simultaneously to a clinician in Epic's user interface.

Fig. 5 Screenshot of PROMIS data displayed by native EHR survey module, as seen by a provider. This display went through several iterations based on feedback from clinicians during software development. EHR, electronic health record; PROMIS, Patient-Reported Outcomes Measurement Information System.

Notably, as seen in [Fig. 3], the data transferred between the framework and the AC-API included only the session identifier, the current question, and the final score. It is therefore possible to host the AC-API in the cloud or on a publicly accessible server with minimal privacy risk. Even so, to maximize data security and eliminate the need for any computing outside our clinical network, we implemented the AC-API on a virtual machine running within our clinical server farm.
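
For illustration, one step of this exchange might look like the following request/response pair. The field names are hypothetical, but the point stands: the payload carries only an opaque session token, item-level data, and score information, with no direct patient identifiers.

```python
# Hypothetical request/response pair for one step of CAT administration.
# The exchange carries a session token, item-level data, and score information
# only; no medical record number, name, or other patient identifier.

request_to_scoring_api = {
    "session": "a1b2c3d4",        # opaque session token
    "item_id": "PF_021",          # item just answered
    "response": 3,
}

response_from_scoring_api = {
    "session": "a1b2c3d4",
    "next_item_id": "PF_090",     # None plus a "t_score" field once the CAT stops
    "standard_error": 0.31,
}

print(request_to_scoring_api, response_from_scoring_api)
```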



Adjustments to Epic

Within Epic, the overall strategy was to “trick” the EHR into treating CATs like other, nonadaptive, fixed-length PROs. This allowed use of Epic's existing ordering, monitoring, and data display functions. Specifically, the survey data structure in Epic was modified to include a flag indicating whether the PRO was conventional (i.e., a fixed-length PRO using Epic's existing survey functions) or a CAT. Epic's outpatient survey administration module was modified to check for this flag. If a CAT was detected, control of survey administration was passed to a new CAT module that referenced the AC-API. Once the CAT was completed, the data were stored in the EHR for immediate viewing by clinicians, and control returned to the EHR survey module.
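
A minimal sketch of this flag-based dispatch is shown below, with hypothetical function and field names standing in for Epic's proprietary internals: the survey module inspects the adaptive flag and routes administration to either the CAT module or the conventional fixed-length path, and both paths return results in the same form.

```python
def administer_survey(survey: dict, run_cat, run_fixed) -> dict:
    """Route a survey to the CAT module or the conventional fixed-length path
    based on the adaptive flag; both paths return a result in the same form."""
    runner = run_cat if survey.get("is_adaptive") else run_fixed
    return runner(survey)

if __name__ == "__main__":
    # Stand-in runners; in the real system these would be the CAT module
    # (backed by the scoring API) and the EHR's native survey handling.
    def run_cat(survey): return {"survey": survey["id"], "t_score": 52.4}
    def run_fixed(survey): return {"survey": survey["id"], "raw_score": 7}

    print(administer_survey({"id": "promis_pf_cat", "is_adaptive": True}, run_cat, run_fixed))
    print(administer_survey({"id": "phq9_fixed", "is_adaptive": False}, run_cat, run_fixed))
```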

The display of CATs within Epic was carefully considered because PROMIS CATs were validated with choices arranged vertically and only one item displayed at a time; we ensured that they were displayed the same way within Epic's existing user interface. This also supports accurate completion on patients' mobile devices and clinic tablets.[14] [Fig. 6] shows the final layout for an individual CAT item after iterative feedback from the PROMIS psychometricians within the steering committee.

Fig. 6 A screenshot of a PROMIS question as seen by a patient. This display went through several iterations based on feedback from psychometricians during software development. PROMIS, Patient-Reported Outcomes Measurement Information System.

During development, we became concerned about potential data analyst input errors when configuring PROMIS CATs within the middleware, given the large number of questions built within Epic. To reduce this risk, we built a data validation tool that compared the Assessment Center data to the Epic data and identified discrepancies. This quality assurance task required deep integration into the Epic EHR and involved modifying standard Epic software code (see [Fig. 2]). We did our best to ensure that our modifications were compatible with newer versions of Epic; still, one Epic update conflicted with our customizations, which prevented patients from completing CATs for a few days while the problem was identified and resolved. Our testing and validation procedures for Epic updates were subsequently modified to include an extra step of checking NMPRO functionality before installation.
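
The validation idea can be sketched as a straightforward configuration diff, assuming both systems' item definitions can be exported as keyed records; the data shapes and item identifiers below are hypothetical.

```python
from typing import Dict, List

def find_discrepancies(item_bank_defs: Dict[str, dict], ehr_defs: Dict[str, dict]) -> List[str]:
    """Compare item definitions exported from the scoring system with the EHR build."""
    problems = []
    for item_id, bank_def in item_bank_defs.items():
        ehr_def = ehr_defs.get(item_id)
        if ehr_def is None:
            problems.append(f"{item_id}: missing from the EHR questionnaire build")
        elif ehr_def != bank_def:
            problems.append(f"{item_id}: text or response options differ between systems")
    for item_id in ehr_defs.keys() - item_bank_defs.keys():
        problems.append(f"{item_id}: present in the EHR build but not in the item bank")
    return problems

if __name__ == "__main__":
    bank = {"PF_001": {"text": "Are you able to run errands and shop?", "options": 5}}
    ehr = {"PF_001": {"text": "Are you able to run errands and shop?", "options": 4},
           "PF_999": {"text": "Stray test item", "options": 5}}
    for problem in find_discrepancies(bank, ehr):
        print(problem)
```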



Implementation and Monitoring

Existing work describes piloting and implementation of NMPRO in two departments: orthopaedics and oncology.[15] [16] These publications include step-by-step instructions and guidance for implementation in other hospital systems. Notably, the two piloting clinics developed different workflows and needs around the same software, depending on their day-to-day operations.[17] For example, orthopaedics ordered PROs to track the success of specific surgical procedures such as total joint replacement; relevant PRO domains such as physical function and pain were therefore ordered for each patient both presurgery and at specified intervals postsurgery. Oncology, on the other hand, used PROs to regularly monitor symptoms before every scheduled visit. Patients who screened positive for tobacco use were automatically referred to a smoking cessation treatment program.[18] Severe PRO scores were set to trigger alerts to members of a patient's care team; for example, social workers received a notice when a patient reported moderate-to-severe depression or anxiety.[19] [20] Similar cancer symptom monitoring programs have since become available through Epic using PROs other than PROMIS CATs.[21]

In terms of project monitoring, an executive steering committee was launched after pilot testing to oversee NMPRO use across the hospital system. The steering committee included 20 members, 4 of whom were involved in the original piloting phase of NMPRO. For the first year of implementation, this group met monthly to actively monitor implementation progress and make iterative changes to software, policy, and educational materials.

The NMPRO software was deployed across 9 specialties and 27 physical locations, supporting over 40 unique questionnaire builds to date. The participating specialties included orthopaedics, oncology, urology, cardiology, dermatology, behavioral health, chronic pain, general surgery, and endocrine surgery. These specialties varied in their patient uptake from 22% (urology) to 57% (endocrine surgery) with an average uptake of 25%. When a clinician requested integration of a new PROMIS measure, IT was able to make it available within three business days. Over 2 years of hospital-wide use, 793 providers collected 70,446 PROs from patients.

Finally, drawing from the work of this project, Epic has developed an application for administering, scoring, and viewing PROMIS CATs.[22] [23] [24] This app makes PROMIS CATs accessible to any hospital using Epic without the need for an existing middleware framework as was required by NMPRO. Other hospital systems have implemented PROMIS CATs in different EHRs through independent software that relies on the AC-API.[13] [25]



Lessons Learned

Two specific themes emerged as lessons learned over the course of NMPRO: first, communication within the interdisciplinary team was essential to the project's success, and second, the implementation of the project into real-world clinical use required immense effort. Finally, we touch on a few future directions for this work.

NMPRO involved collaboration among two different IT teams (clinical IT and research informatics), academic members of the PROMIS team, and the clinicians and clinical staff involved in pilot testing the implementation. We quickly discovered that interdisciplinary teamwork and communication were necessary for the project's success. These groups brought differing experiences and expectations about collaboration, including timelines, definitions of success, and the resources necessary to accomplish software and clinical goals. These differences were resolved through frequent face-to-face meetings with all team members, a committee charter that explicitly stated the project's goals and their completion timeline, and support from the leadership of each group in managing the expectations of their members. Specifically, the IT team regularly and clearly communicated what deliverables to expect and when, which improved relations with clinical and academic partners. In turn, academic and clinical partners prepared educational materials and specific, actionable feedback to guide software development at the appropriate time for each project goal.

When development ended and the project moved into piloting, our next lesson became clear: real-world use of our software required adjustments to clinical workflow and new thinking on the part of clinicians and patients alike. First, introducing PROs to a clinical practice resulted in significant changes in workflow that required active tailoring for optimal clinic function.[17] For example, incomplete PROs were initially administered by medical assistants in the exam room; after feedback from staff who often did not have sufficient time for PROs in the exam room, PROs were instead administered via tablets in the waiting room. Through intensive qualitative work, such as interviews with clinicians and staff in the pilot clinics, we developed planning and change-management tools to help clinics identify potential issues and develop proactive strategies for addressing them; these tools are freely and publicly available.[16] Beyond clinical workflow changes, we had to obtain buy-in from clinicians and patients alike. Use of the software was optional, and clinicians did not universally participate; for example, 386 oncologists opted into the tobacco use screener PRO compared with 274 who opted into regular symptom monitoring alerts. Existing literature discusses strategies for obtaining clinician buy-in.[16] [26] [27] Other research on electronic PRO implementation demonstrates a strong need to address patient engagement,[28] [29] a challenge we faced as evidenced by our 25% patient PRO uptake rate, which, though low, is typical.[30] Within our clinics, medical assistants struggled to communicate the importance of PROs to patients.[17] While strategies for obtaining patient buy-in are covered in the previously cited materials, we note that including patient representatives in our design process might have improved the patient experience and thus patient buy-in. It is also worth noting that patients are more open to completing PROs when their clinician references them during the appointment.[8] [31] The challenges of bringing software into real-world use should not be underestimated, and the authors recommend a thorough review of existing literature on the clinical implementation of electronic PRO systems before attempting a similar implementation.

The project also left many areas for future research and development. In the course of the project, several desirable features were identified but determined to be out of scope for the initial phase. These included (1) a user-friendly graphic of results over time posted to the patient portal (which has since been implemented[32]); (2) improved workflows to support clinician interpretation of CAT scores[33]; (3) the ability for a clinician to leave a note on a CAT score; and (4) integration with the inpatient EHR module. The specific features needed will likely vary by hospital setting. We were able to address many specific needs early in the project because multidisciplinary stakeholders were included in the design phase; however, requests for new features will inevitably emerge after piloting and implementation. In this sense, designing and implementing electronic PRO software for real-world use will always be an iterative process.



Conclusion

NMPRO succeeded in achieving seamless integration of PROMIS CATs into Epic. NMPRO also informed other projects by members of the EASIPRO consortium in integrating PROMIS into other EHRs.[15] [34] [35] NMPRO's software architecture as described here directly informed the development of the Epic PROMIS CAT application, which currently provides all functionality described in this article to any hospital system that uses Epic.

This case report underscores informatics fundamentals: to be successful, clinical projects need to simultaneously address technical details such as data structures as well as sociotechnical issues such as clinical workflow and patient and provider user experience. The inclusion of clinical and academic stakeholders in early design and development improved our process, but developing software for live hospital systems will always involve iteration to some extent.



Clinical Relevance Statement

This work provides practical information on the integration of computer-adaptive PROs into a vendor EHR, guidance which many health IT professionals may value as patient-reported outcomes become more commonly used tools in clinical practice.



Multiple Choice Questions

  1. What is the most notable benefit of using CATs compared with traditional PRO measures?

    a. CATs give detailed information about symptoms.

    b. CATs take less time to complete.

    c. CATs are simpler to integrate with EHR systems.

    d. CATs are more easily interpreted by clinicians.

    The answer is b. Instead of administering an entire set of items, CATs select specific items from an item bank to maximize the information gained. In practice, this usually means that fewer questions are needed to converge upon a patient's true symptom score.

  2. The middleware described in this paper is most important at what stage of the CAT administration process?

    a. Creating the list of surveys for the patient to complete

    b. Displaying the surveys available to the patient

    c. Displaying each specific item

    d. Determining if another item should be administered

    The answer is c. Options a and b are performed by the EHR, and the AC-API is responsible for option d.



Conflict of Interest

The authors do not have any direct financial or personal relationships that conflict with the objectivity of this article's content. However, we wish to mention several indirect relationships. The HealthMeasures/PROMIS team, which has supported K.N., N.E.R., M.B., and M.L., is funded in part by Assessment Center API licensing fees. D.C. served as President of the PROMIS Health Organization in a noncompensated role. Z.B. is an employee of Phreesia, Inc. and receives equity in the company.

Acknowledgments

We thank Northwestern Medicine Analytics, Quality, and Operations for providing updated usage statistics on NMPRO.

Protection of Human and Animal Subjects

NMPRO was a quality improvement effort on behalf of NM. Consequently, it was not considered Human Subjects Research.


  • References

  • 1 Basch E. Patient-reported outcomes - harnessing patients' voices to improve clinical care. N Engl J Med 2017; 376 (02) 105-108
  • 2 Kotronoulas G, Kearney N, Maguire R. et al. What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. J Clin Oncol 2014; 32 (14) 1480-1501
  • 3 Cella D, Hahn EA, Jensen SE. et al. Patient-Reported Outcomes in Performance Measurement. North Carolina, USA: RTI Press; 2015
  • 4 Gershon RC, Rothrock N, Hanrahan R, Bass M, Cella D. The use of PROMIS and assessment center to deliver patient-reported outcome measures in clinical research. J Appl Meas 2010; 11 (03) 304-314
  • 5 HealthMeasures.net. Intro to PROMIS®. Accessed August 17, 2023 at: https://www.healthmeasures.net/explore-measurement-systems/promis/intro-to-promis
  • 6 Fries JF, Bruce B, Cella D. The promise of PROMIS: using item response theory to improve assessment of patient-reported outcomes. Clin Exp Rheumatol 2005; 23 (5 Suppl 39) S53-S57
  • 7 Kane LT, Namdari S, Plummer OR, Beredjiklian P, Vaccaro A, Abboud JA. Use of computerized adaptive testing to develop more concise patient-reported outcome measures. JBJS Open Access 2020; 5 (01) e0052
  • 8 Long C, Beres LK, Wu AW, Giladi AM. Patient-level barriers and facilitators to completion of patient-reported outcomes measures. Qual Life Res 2022; 31 (06) 1711-1718
  • 9 Segawa E, Schalet B, Cella D. A comparison of computer adaptive tests (CATs) and short forms in terms of accuracy and number of items administrated using PROMIS profile. Qual Life Res 2020; 29 (01) 213-221
  • 10 Cook KF, Jensen SE, Schalet BD. et al. PROMIS measures of pain, fatigue, negative affect, physical function, and social function demonstrated clinical validity across a range of chronic conditions. J Clin Epidemiol 2016; 73: 89-102
  • 11 Gershon R, Rothrock NE, Hanrahan RT, Jansky LJ, Harniss M, Riley W. The development of a clinical outcomes survey research application: Assessment Center. Qual Life Res 2010; 19 (05) 677-685
  • 12 Gensheimer SG, Wu AW, Snyder CF. PRO-EHR Users' Guide Steering Group, PRO-EHR Users' Guide Working Group. Oh, the places we'll go: patient-reported outcomes and electronic health records. Patient 2018; 11 (06) 591-598
  • 13 Bass M, Oncken C, McIntyre AW, Dasilva C, Spuhl J, Rothrock NE. Implementing an Application Programming Interface for PROMIS Measures at three medical centers. Appl Clin Inform 2021; 12 (05) 979-983
  • 14 De Bruijne M, Wijnant A. Improving response rates and questionnaire design for mobile web surveys. Public Opin Q 2014; 78 (04) 951-962
  • 15 Biber J, Ose D, Reese J. et al. Patient reported outcomes - experiences with implementation in a University Health Care setting. J Patient Rep Outcomes 2018; 2 (01) 34
  • 16 Nelson TA, Anderson B, Bian J. et al. Planning for patient-reported outcome implementation: Development of decision tools and practical experience across four clinics. J Clin Transl Sci 2020; 4 (06) 498-507
  • 17 Zhang R, Burgess ER, Reddy MC. et al. Provider perspectives on the integration of patient-reported outcomes in an electronic health record. JAMIA Open 2019; 2 (01) 73-80
  • 18 May JR, Klass E, Davis K. et al. Leveraging patient reported outcomes measurement via the electronic health record to connect patients with cancer to smoking cessation treatment. Int J Environ Res Public Health 2020; 17 (14) 5034
  • 19 Cella D, Garcia SF, Cahue S. et al. Implementation and evaluation of an expanded electronic health record-integrated bilingual electronic symptom management program across a multi-site Comprehensive Cancer Center: The NU IMPACT protocol. Contemp Clin Trials 2023; 128: 107171
  • 20 Garcia SF, Wortman K, Cella D. et al. Implementing electronic health record-integrated screening of patient-reported symptoms and supportive care needs in a comprehensive cancer center. Cancer 2019; 125 (22) 4059-4068
  • 21 Hassett MJ, Cronin C, Tsou TC. et al. eSyM: an electronic health record-integrated patient-reported outcomes-based cancer symptom management program used by six diverse health systems. JCO Clin Cancer Inform 2022; 6: e2100137
  • 22 Epic on FHIR. Connection Hub. Accessed August 17, 2023 at: https://fhir.epic.com/ConnectionHub
  • 23 Sayeed R, Gottlieb D, Mandl KD. SMART Markers: collecting patient-generated health data as a standardized property of health information technology. NPJ Digit Med 2020; 3: 9
  • 24 Wesley DB, Blumenthal J, Shah S. et al. A novel application of SMART on FHIR architecture for interoperable and scalable integration of patient-reported outcome data with electronic health records. J Am Med Inform Assoc 2021; 28 (10) 2220-2225
  • 25 Papuga MO, Dasilva C, McIntyre A, Mitten D, Kates S, Baumhauer JF. Large-scale clinical implementation of PROMIS computer adaptive testing with direct incorporation into the electronic medical record. Health Syst (Basingstoke) 2017; 7 (01) 1-12
  • 26 Hyland CJ, Guo R, Dhawan R. et al. Implementing patient-reported outcomes in routine clinical care for diverse and underrepresented patients in the United States. J Patient Rep Outcomes 2022; 6 (01) 20
  • 27 Austin AM, Carmichael D, Berry S. et al. Chronic condition measurement requires engagement, not measurement alone. J Ambul Care Manage 2019; 42 (04) 295-304
  • 28 Stover AM, Tompkins Stricker C, Hammelef K. et al. Using stakeholder engagement to overcome barriers to implementing patient-reported outcomes (PROs) in cancer care delivery: approaches from 3 prospective studies. Med Care 2019; 57 (Suppl 5 Suppl 1) S92-S99
  • 29 Turchioe MR, Mangal S, Goyal P. et al. A RE-AIM evaluation of a visualization-based electronic patient-reported outcome system. Appl Clin Inform 2023; 14 (02) 227-237
  • 30 Chugh R, Liu AW, Idomsky Y. et al. A digital health intervention to improve the clinical care of inflammatory bowel disease patients. Appl Clin Inform 2023; 14 (05) 855-865
  • 31 Rotenstein LS, Agarwal A, O'Neil K. et al. Implementing patient-reported outcome surveys as part of routine care: lessons from an academic radiation oncology department. J Am Med Inform Assoc 2017; 24 (05) 964-968
  • 32 Perry LM, Morken V, Peipert JD. et al. Patient-reported outcome dashboards within the electronic health record to support shared decision-making: protocol for co-design and clinical evaluation with patients with advanced cancer and chronic kidney disease. JMIR Res Protoc 2022; 11 (09) e38461
  • 33 Cimino JJ. Putting the “why” in “EHR”: capturing and coding clinical cognition. J Am Med Inform Assoc 2019; 26 (11) 1379-1384
  • 34 Harle CA, Listhaus A, Covarrubias CM. et al. Overcoming barriers to implementing patient-reported outcomes in an electronic health record: a case report. J Am Med Inform Assoc 2016; 23 (01) 74-79
  • 35 Burton SV, Valenta AL, Starren J. et al. Examining perspectives on the adoption and use of computer-based patient-reported outcomes among clinicians and health professionals: a Q methodology study. J Am Med Inform Assoc 2022; 29 (03) 443-452

Address for correspondence

Kyle Nolla, PhD, MS
Department of Medical Social Sciences, Northwestern University
625 N Michigan Ave, 21st Floor, Chicago, IL 60611
United States   

Publication History

Received: 25 August 2023

Accepted: 06 December 2023

Accepted Manuscript online: 28 December 2023

Article published online: 21 February 2024

© 2024. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

