Appl Clin Inform 2019; 10(02): 175-179
DOI: 10.1055/s-0039-1679960
Research Article
Georg Thieme Verlag KG Stuttgart · New York

Common Laboratory Results Frequently Misunderstood by a Sample of Mechanical Turk Users

Nabeel Qureshi (1), Ateev Mehrotra (2, 3), Robert S. Rudin (2), Shira H. Fischer (2)

Affiliations:
1. RAND Corporation, Santa Monica, California, United States
2. RAND Corporation, Boston, Massachusetts, United States
3. Harvard Medical School, Boston, Massachusetts, United States

Publication History

Received: 19 August 2018
Accepted: 17 January 2019
Publication Date: 13 March 2019 (online)
Abstract

Objectives More patients are receiving their test results via patient portals. Because test results are written using medical jargon, there is concern that patients may misinterpret them. Using sample colonoscopy and Pap smear results, our objective was to assess how frequently people can identify the correct diagnosis and determine when a patient should follow up with a provider.

Methods We used Mechanical Turk, a crowdsourcing platform run by Amazon that enables quick and easy recruitment of users to perform tasks such as answering questions or identifying objects, to survey individuals who were shown six sample test results (three colonoscopy, three Pap smear) ranging in complexity. For each case, respondents answered multiple-choice questions on the correct diagnosis and the recommended return time.

Results Among the three colonoscopy cases (n = 642) and three Pap smear cases (n = 642), 63% (95% confidence interval [CI]: 60–67%) and 53% (95% CI: 49–57%) of the respondents chose the correct diagnosis, respectively. For the most complex colonoscopy and Pap smear cases, only 29% (95% CI: 23–35%) and 9% (95% CI: 5–13%) chose the correct diagnosis.
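The abstract does not state how the confidence intervals were calculated. As a minimal sketch, assuming a standard normal-approximation (Wald) interval for a binomial proportion, the reported 60–67% range for the pooled colonoscopy cases can be reproduced from p = 0.63 and n = 642:

```python
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wald confidence interval for a proportion.

    p_hat: observed proportion of correct answers
    n: number of responses
    z: critical value (1.96 for a two-sided 95% interval)
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# Pooled colonoscopy cases: 63% correct among n = 642 responses
low, high = wald_ci(0.63, 642)
print(f"95% CI: {low:.1%} to {high:.1%}")  # ~59.3% to 66.7%, i.e., 60-67% rounded
```

The same calculation with p = 0.53 recovers the Pap smear interval (49–57%); the paper itself may have used a different interval method (e.g., Wilson), which would differ slightly at these sample sizes.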

Conclusion People frequently misinterpret colonoscopy and Pap smear test results. Greater emphasis needs to be placed on assisting patients in interpretation.

Protection of Human and Animal Subjects

This project was deemed exempt by RAND's institutional review board.

