J Knee Surg 2014; 27(02): 167-168
DOI: 10.1055/s-0034-1371895
Letter to the Editor
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

Response to Letter to the Editor on: Validation Study of an Electronic Method of Condensed Outcomes Tools Reporting in Orthopaedics (J Knee Surg 2013;26:445–452)

Jack Farr
Department of Orthopaedics, Indiana Orthopaedic Hospital, Indianapolis, Indiana

Nikhil Verma
Department of Orthopaedics, Rush University Medical Center, Chicago, Illinois

Brian J. Cole
Department of Orthopaedics, Rush University Medical Center, Chicago, Illinois

Publication History

Publication Date:
10 March 2014 (online)

We appreciate the thoughtful comments from Roos and co-authors, each of whom has made important contributions to this field, because their letter accurately highlights the need for caution in interpreting patient-reported outcome (PRO) instruments.

First, to address the issue of conflict of interest and financial disclosure: the authors have no consulting, royalty, or other financial agreements with Universal Research Solutions, LLC (URS) (Columbia, Missouri), the developer of the OBERD (Outcome-Based Electronic Research Database) System. URS, however, did fund this study. Currently, Dr. Farr does not use OBERD for standard PROs and continues to use SOCRATES (Standardised Orthopaedic Clinical Research and Treatment Evaluation Software, Rozelle, NSW, Australia) to collect PRO data, a program that includes all standard PRO outcome instruments in their entirety (http://www.socratesortho.com/scores). Drs. Cole and Verma and members of their practice (Midwest Orthopedics, Rush University Medical Center, Chicago, Illinois) also use SOCRATES and a custom long-form PRO, but are in the process of transitioning to OBERD for use in many aspects of patient care including patient registration, patient education, and collection of PROs.

The study was initiated because of patient dissatisfaction with the current length and volume of PRO forms, which may take up to 60 minutes to complete and cannot be integrated into the current electronic medical record. The rapidly changing medical environment will require collection and reporting of patient-centric outcome measures to validate the quality of care and interventions by medical providers. The reality is that collection of such data is burdensome for both patients and providers and requires the innovation and evolution of electronic solutions for data procurement, management, and storage. Thus, in recognition of our patients' concerns and our waning success at obtaining complete datasets, we differ with the view of Roos et al that combining forms to reduce patient burden is “unnecessary” when multiple forms are required.

As the investigators often study patients with complex knee problems, numerous PRO tools are needed to capture data: for any particular intervention, multiple PROs may be required to capture different aspects of patient outcomes such as quality of life, joint-specific outcome measures, and return to sport. In addition, the collection of multiple PROs is required to compare current interventions with outcomes reported previously in the literature. To maintain uniformity and to assist in patient flow in high-volume clinical practices, all major PRO instruments are used for each patient. Beyond the time needed to complete the forms, patients often omit questions or have difficulty understanding how a question applies to their current situation (they may be on crutches as dictated by a study protocol, not necessarily because of their pain). Patient attention and diligence are critical to obtaining good data. Willing and meaningful patient cooperation is a prerequisite, and shorter forms with built-in explanations may be one key to improved patient compliance.

Measurement is always subject to random variability, and meaningful use of data requires knowledge of its inherent uncertainties. No scientific data can be understood without some assessment of the accuracy and validation of the data-gathering instrument. Test-retest comparisons are an important way to assess this uncertainty for outcome instruments. Ideally, this information should be used to calculate a minimum detectable change for the instrument, and this would be our preferred starting point for comparisons of administration modes. If paper and electronic results fall within the inherent sensitivity of the instrument, then they cannot be distinguished.
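To make the calculation concrete: the standard formulation is MDC = z × √2 × SEM, where SEM = SD × √(1 − r) and r is the test-retest reliability. The following is a minimal sketch in Python using hypothetical test-retest scores, not data from any of the instruments discussed here:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def minimum_detectable_change(scores_t1, scores_t2, z=1.96):
    """MDC = z * sqrt(2) * SEM, with SEM = SD * sqrt(1 - r).

    The reliability r is estimated here from the test-retest
    correlation; the SD is pooled over both administrations.
    """
    pooled = scores_t1 + scores_t2
    mean = sum(pooled) / len(pooled)
    sd = math.sqrt(sum((s - mean) ** 2 for s in pooled) / (len(pooled) - 1))
    sem = sd * math.sqrt(1 - pearson(scores_t1, scores_t2))
    return z * math.sqrt(2) * sem

# Hypothetical test-retest scores for one instrument (0-100 scale)
mdc = minimum_detectable_change([60, 70, 80, 90], [62, 69, 81, 88])
```

A paper-versus-electronic difference smaller than this MDC would be indistinguishable from the instrument's own test-retest noise.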

However, it is all too common for validation studies to report only the (Pearson) correlation between test and retest data, a much weaker statistic. Indeed, as pointed out in our paper, the correlation was the only comparison between test and retest we could find that was reported for all of the instruments in question; hence it is what we used for our comparison between the original paper forms and the OBERD rendition of the forms. As Roos et al observe, correlation does not prove equality between two administrations, but it has, nonetheless, frequently been accepted in the literature as evidence for repeatability.
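The weakness of correlation as evidence of agreement is easy to demonstrate: a constant offset between two administrations leaves the Pearson coefficient at exactly 1.0 even though no individual score agrees. A minimal sketch with hypothetical scores, not data from our study:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

paper = [40, 55, 70, 85]
electronic = [s + 10 for s in paper]   # every score shifted by 10 points

r = pearson(paper, electronic)         # still a perfect 1.0
mean_diff = sum(e - p for p, e in zip(paper, electronic)) / len(paper)
```

Here r equals 1.0 while the mean difference is 10 points, which is why comparing absolute scores (as in the shoulder study cited below) is a stronger test of equivalence than correlation alone.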

Our paper implicitly assumes that outcome instruments measure underlying “latent traits,” not otherwise accessible for measurement but representing objective facts. The questionnaire is thus an attempt to communicate about a basic concept, and a robust instrument should not be sensitive to the particular choice of words. If it were, then translations into different languages would be invalid. Judging two wordings equivalent in a single language is much less demanding than judging the equivalence of expression in two different languages. Moreover, approaches such as the NIH-sponsored Patient Reported Outcomes Measurement Information System (PROMIS), which generates scores from questions selected on the fly from an item bank, provide entirely different forms that are nevertheless able to produce equivalent scores.

While we recognize that more research is needed to fully understand all of the relevant issues in a timely fashion, we strongly disagree with the concept that no attempts should be made to combine and condense currently used PROs, provided that appropriate validation studies are completed. A recent report,[1] motivated by similar concerns, concluded that condensing methods were valid in a study of shoulder instruments. By focusing on one particular component from a condensed form, the authors were able to compare absolute scores as well as correlations. We would encourage Drs. Roos, Irrgang, and Lysholm, as pioneers in the development of PRO tools used worldwide, to engage in the work that will be required to develop electronic data collection tools that are patient friendly.

Future investigation is certainly needed to optimize the process when using multiple outcomes tools, especially to address patient objections to “PRO overload” and to facilitate EMR data management.

Reference

1. Smith MJ, Marberry KM. Reliability of a novel, web-based, shoulder-specific patient-reported outcome instrument. Curr Orthop Pract 2013;24(1):64–67.