Keywords: medication reconciliation - clinical information systems - quality improvement - medication errors - medication adherence - clinical informatics - consumer health informatics - medication review
Background and Significance
Any incongruity in prescription information between two or more different information sources may result in a medication discrepancy.[1] The risk of an individual discrepancy is relatively small.[2] However, in aggregate, medication discrepancies during care transitions and clinic visits are a common cause of preventable adverse drug events (ADEs) and patient harm.[3][4][5][6][7][8][9][10][11] Unintentional discrepancies in allergy and medication information contribute to nearly half a million hospitalizations and cost the United States (U.S.) health care system upwards of $1 billion annually.[12][13][14][15]
According to the Institute for Healthcare Improvement, medication reconciliation (MR) is the “process of identifying the most accurate list of all medications a patient is taking…and using this list to provide correct medications for patients anywhere within the healthcare system.”[16] Studies show that standardized MR programs reliably identify discrepancies and reduce medical error.[5][17][18][19][20][21] While quality advocacy organizations and regulatory agencies recommend implementing scalable systems-based solutions, health care organizations typically struggle with an array of individual- and system-based implementation barriers.[3][16][22][23][24][25][26][27][28][29][30]
Clinicians compiling medication histories under time pressure often adopt manual workarounds to overcome poor electronic health record (EHR) usability and lack of data interoperability.[29][30][31][32][33] They cite challenges with tight schedules, competing care priorities, and limited patient reliability.[30] This is unsurprising given that a medication history can require 20 minutes and a high-quality reconciliation can take up to 80 minutes.[33] Therefore, many clinicians dismiss MR as a set of “administrative accounting tasks” and skip reviewing medications with patients altogether.[34] By contrast, quality MR programs foster patient-centered discussion, pull data from multiple sources, and promote interdisciplinary communication.[23][35] Clearly, there is a need for user-centered tools that assist with history collection and that improve the efficiency, consistency, and accuracy of MR.[17][26][27][36][37][38][39][40][41]
Problem Statement
The Department of Veterans Affairs (VA) initiated an MR campaign to establish standards, promote change, and measure implementation effectiveness.[42] The campaign required clinicians to collect a medication history at every encounter and compare it with facility documentation. Many clinics failed to meet these guidelines. Clinicians often found it difficult to collect or update medication information, opting instead to copy and paste existing EHR medication lists into their notes. In a previous survey of local primary care providers, respondents identified three barriers affecting MR: poor EHR design, inflexible workflows, and insufficient time.[29] We therefore developed medication history collection software to engage patients, improve data integration, and streamline the process.[38]
We modeled our Automated Patient History Intake Device (APHID) software after the interactive self-service functionality commonly featured on retail Web sites and commercial air travel check-in kiosks.[38][43] Patients used an APHID-enabled kiosk located in the clinic waiting room to check in for their appointments and review the names and pictures of their prescriptions. Clinicians then imported the results into the EHR for reference during the medication interview.
Although our initial piloting efforts demonstrated the viability of this approach, we had not yet measured its accuracy compared with traditional methods of history collection such as interviews and paper questionnaires.[29][38][43] Few studies have demonstrated the effectiveness of self-service software or multimedia tools for gathering reliable medication information.[44][45][46][47][48][49] In small nonrandomized pilot studies, Hornick et al and Kimmel et al found that medication images improved patient recall for certain classes of medications.[50][51] Before these strategies can be recommended, high-quality studies are needed to inform design and establish the accuracy of a self-reported medication history.
Objective
This study compared the diagnostic accuracy of the APHID collection process to a paper-based collection process. To determine which of these methods was more effective, we compared both strategies to the EHR list and a reference standard. For our reference standard, we assembled a best possible medication history (BPMH) ([Table 1]): a systematic clinician-conducted history using several information sources.[34] We hypothesized that participants using APHID would report more medication discrepancies than those using a paper list. We also hypothesized that the participant-reviewed APHID list would be more accurate than the participant-reviewed paper list when compared with the BPMH.
Table 1
Glossary of terms used in this article

| Term | Definition |
| --- | --- |
| Adverse drug event (ADE) | Allergic reactions, adverse effects, or unintentional overdoses[6] |
| Automated Patient History Intake Device (APHID) | Veterans Affairs (VA) software that supports collection and documentation of a medication history[38] |
| APHID list | A medication list generated by the APHID software that includes current prescriptions written at the study facility, 6 months of expired prescriptions written at the study facility, non-VA prescriptions and nonprescription medications documented at the study facility, and prescriptions written at other VA facilities[38] |
| Best possible medication history (BPMH) | Clinician-gathered medication history adjudicated with the participant-reviewed blinded list; reference standard[34] |
| Clinician-gathered medication history | A medication history collected by the clinician researcher while referencing the EHR and inspecting medications brought in by the patient |
| Electronic health record (EHR) list | List of prescription information in the VA electronic health record |
| Expired medications | Electronic prescriptions that have passed a predefined expiration date set by the EHR |
| Kiosk | Self-service hardware equipped with a touch-screen and installed with APHID software[38][45] |
| Medication discrepancy | Any unintended incongruity between prescription lists from two sources[5] |
| Medication errors | Failure in the prescribing or treatment process that can result in patient harm[125] |
| Medication history | A process of identifying the list of all medications a patient is taking by interviewing the patient/family and reviewing available documentation[9] |
| Medication information | Information about prescribing/administration/consumption of medications |
| Medication reconciliation | A process of identifying the most accurate list of all medications a patient is taking and using this list to provide correct information across the continuum of care[16] |
| Non-VA medications | Medications procured outside the VA by the patient and documented in the EHR by a clinician |
| Paper list | A medication list that includes current prescriptions written at the study facility, 6 months of expired prescriptions written at the study facility, non-VA prescriptions and nonprescription medications documented at the study facility, and prescriptions written at other VA facilities |
| Participant adherence | Extent to which medications are taken as directed |
| Participant-reviewed APHID list | APHID list of medications with participant adherence documented |
| Participant-reviewed blinded list | Form created by the research coordinator listing medications and participant-furnished adherence history |
| Participant-reviewed paper list | Paper list of medications with participant adherence documented[1] |
| Root cause (participant-based) | Discrepancy caused by a factor under the patient's control[1] |
| Root cause (system-based) | Discrepancy caused by a clinician or health system factor |
| VA medications (local) | Electronic prescriptions written and documented in the study facility EHR |
| VA medications (remote) | Electronic prescriptions written at VA facilities other than the study facility |
Materials and Methods
Theory
We based our study design on two well-established theories: (1) the Systems Engineering Initiative for Patient Safety (SEIPS) framework; and (2) the Pictorial Superiority Effect (PSE) ([Fig. 1]). Carayon et al's SEIPS framework argues that system constructs (i.e., people, technologies, workflow, and culture) dictate health system processes and clinical outcomes.[52] In our adaptation of the framework, MR is a cyclical macroprocess composed of linked subprocesses (e.g., history collection, data adjudication, discrepancy resolution).[53] MR can only be as effective as the initial history.[35][53] We therefore designed our intervention to address the system constructs supporting history collection.[23][52][53][54]
Fig. 1 Adaptation of Carayon's Systems Engineering Initiative for Patient Safety (SEIPS) framework.
The PSE contends that humans encode, store, and retrieve images from memory more easily than text or auditory information.[55][56][57] Studies indicate patient education materials using pictures and pictograms can influence health literacy and improve comprehension.[49][51][58][59][60][61][62] Since it is rare for patients to bring their medications to clinic, we hypothesized that providing patients with medication images would reduce the number of errors caused by “look-alike” and “sound-alike” medications.[63][64][65]
Description of the Technology
Our technology consists of three main components: (1) a self-service kiosk; (2) the APHID medication history collection software; and (3) an interface to the facility EHR ([Fig. 2]).[29][38][43] For a complete description of the technology, please refer to our technology development and deployment manuscript.[38]
Fig. 2 Representative screenshot and output from Automated Patient History Intake Device (APHID).
The APHID software has access to all prescription information in the EHR and can use stored metadata to match each medication with a digital image. Our organization manages the medication supply chain for most prescriptions; medications written by VA prescribers are typically dispensed from VA facilities or regional mail-out distribution centers. The EHR stores medication dispense dates, prescription refill histories, medication inventory numbers, and U.S. National Drug Code (NDC) numbers. The NDC numbers are unique 10-digit, 3-segment numbers assigned by the U.S. Food and Drug Administration to all drugs distributed in the United States. APHID uses a combination of NDC numbers and dispensing data to match an image with each prescription.[66] To assemble a medication list for patient review, APHID retrieves prescription data from all VA facilities and pairs each medication name and instructions for use with a single digital photograph of the prescription. If a patient has received several prescriptions for the same drug, APHID uses the image associated with the last dispense date.
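To illustrate, the matching step can be sketched as follows. This is a simplified reading of the logic described above, not APHID's actual implementation; the field and function names are assumptions for the example.

```python
# A minimal sketch of the image-matching logic described above, assuming each
# dispense record carries an NDC number and a dispense date. Field and
# function names are illustrative, not APHID's actual schema.
from dataclasses import dataclass
from datetime import date


@dataclass
class Dispense:
    drug_name: str       # prescription name as stored in the EHR
    ndc: str             # 10-digit, 3-segment National Drug Code
    dispensed_on: date   # dispense date for this fill


def match_images(dispenses, image_index):
    """Pair each drug with one image: the NDC from its most recent dispense."""
    latest = {}
    for d in dispenses:
        if d.drug_name not in latest or d.dispensed_on > latest[d.drug_name].dispensed_on:
            latest[d.drug_name] = d
    # Non-VA medications have no VA dispensing data, so the lookup may yield None.
    return {name: image_index.get(d.ndc) for name, d in latest.items()}
```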
It is customary for VA clinicians to document prescriptions from non-VA practitioners and nonprescription medications reported by the patient. Approximately 10 to 30% of prescriptions are procured outside the VA. APHID will display this information when available in the EHR. However, the VA does not exchange health data with community drug dispensaries. Thus, APHID cannot match images to prescriptions procured outside the VA.
Subjects and Settings
We recruited U.S. Veteran patients from three primary care clinics associated with the VA Portland Healthcare System (VAPORHCS). VAPORHCS is a 300-bed tertiary care hospital with eight associated ambulatory care centers located in northern Oregon and southern Washington states. It is part of the Veterans Health Administration, a nationwide care network of over 150 hospitals.[67][68] Most Veterans are male (94%), older than the average civilian patient, have a greater number of medical comorbidities, and use more medications.
Study Design
We conducted a prospective, parallel-group, randomized, controlled, single-blind study of the APHID software. Using the BPMH as a reference standard, we compared the discrepancy counts reported using a paper list with those reported using APHID ([Fig. 3]). We designed our study as per the CONSORT guidelines for reporting clinical trials and the STARD guidelines for reporting studies of diagnostic tests.[69][70][71] For a detailed description of the methods and instruments, please refer to our previously published protocol.[53]
Fig. 3 Protocol for the trial.
From June 2009 through December 2011, a research coordinator contacted all patients scheduled in participating clinics and screened them for inclusion in the study. Patients were eligible for inclusion if they were over the age of 18 years, taking three or more medications, and had completed at least one appointment in the past. Exclusion criteria included: the inability to read or speak English; the presence of cognitive impairment or mental illness; visual impairment; and physical impairment that might preclude use of a mouse or keyboard. The research coordinator asked participating patients to bring in all their medications to the appointment.
On the day of the study appointment, the research coordinator confirmed the study participant's eligibility and then consented, randomized, and assigned the participant to one of the two treatments ([Fig. 3], step 2). A member of the research team determined the treatment assignment using a computer-based random number generator and placed the output in a sealed envelope. The research coordinator did not know or have access to the treatment assignment until the time of presentation, when the envelope was opened.
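As an illustration, the allocation step might look like the sketch below. The 1:1 simple randomization, seed, and labels are assumptions for the example, not the study's documented settings.

```python
# A minimal sketch of computer-generated treatment assignments sealed in
# envelopes, as described above. Simple 1:1 randomization and the seed are
# illustrative assumptions.
import random

rng = random.Random(2009)  # assumed seed, for reproducibility of the example only
envelopes = [rng.choice(["paper (control)", "APHID (intervention)"])
             for _ in range(220)]  # one sealed assignment per enrolled participant
```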
The research coordinator asked participants to review, correct, and amend their medication lists ([Fig. 3], step 3). Participants in the intervention arm used APHID at a standard computer workstation, whereas participants in the control arm used paper on a clipboard. Participants using APHID completed an onscreen questionnaire asking them about (1) their prescriptions dispensed by the study facility; (2) other VA prescriptions; (3) nonprescription medications; and (4) 6 months of expired prescriptions. Each prescription was displayed, one at a time, with an image where available. Participants were asked to indicate adherence using one of four structured response buttons (“Yes, taking as written above; No, taking differently; No, NOT taking; Unsure”). Participants then added any new or missing medications.
Participants in the control arm reviewed a paper list that included (1) prescriptions dispensed by the study facility; (2) other VA prescriptions; (3) nonprescription medications; and (4) 6 months of expired prescriptions. This control included more information than what was available in the facility EHR and was more stringent than the “usual care” practice of importing an automated list into clinic notes. We selected this control to isolate the effects of medication pictures and self-service software.[17][23][24][33][37][67][72][73][74] It was crucial that the paper and APHID lists contained the same medications for review and the same number of opportunities to report a discrepancy.
The research coordinator asked the control group participants to place a “yes” on the sheet next to medications they were taking and “no,” “differently,” or “unsure” next to medications that they were not taking, taking differently, or had a question about. We coded any response other than “yes” as a discrepancy for active medications. We coded any response other than “no” as a discrepancy for expired medications. Participants listed additional medications at the bottom of the form. We could not blind participants to treatment status. The research coordinator then transcribed participant responses onto a paper form (i.e., participant-reviewed blinded list) to mask treatment assignment from the researcher completing the interview ([Fig. 3]; step 4).
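The coding rules lend themselves to a compact sketch. The function below is illustrative, mirroring the rules stated above rather than the study's actual instrument.

```python
# A minimal sketch of the coding rules described above, using the control-arm
# response vocabulary ("yes", "no", "differently", "unsure"). Names are
# illustrative assumptions.
def is_discrepancy(status: str, response: str) -> bool:
    """Return True when a participant response counts as a discrepancy."""
    if status == "active":
        return response != "yes"   # any answer but "yes" flags an active prescription
    if status == "expired":
        return response != "no"    # any answer but "no" flags an expired prescription
    raise ValueError(f"unknown prescription status: {status}")


# Example: a participant unsure about an active prescription is a discrepancy.
assert is_discrepancy("active", "unsure")
```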
Clinician researchers (an internal medicine physician, a clinical pharmacist, and an advanced practice nurse) were trained to collect a medication history using an interview script. The researchers then independently interviewed standardized patients to eliminate variation in interview technique. The performance characteristics were identical between researchers after two practice cycles.
In the study, a blinded clinician researcher met with each participant and completed a medication history using the interview script ([Fig. 3]; step 5). This is the most common method used in MR studies to establish a reference standard.[13][17][18][67][75][76][77][78][79] The researchers were also instructed to review the EHR and inspect the medication containers. The researcher then recorded a discrepancy on a spreadsheet if the participant was not taking a medication associated with a current prescription, taking a medication associated with an expired prescription, taking a medication differently than instructed, or taking a new medication.
Our reference standard (i.e., the BPMH) included an additional step and an additional information source ([Fig. 3], step 6). The BPMH included the clinician-gathered medication history, a review of EHR pharmacy records, a prescription vial or direct pill inspection, and a double-check using the participant-reviewed blinded list. The research coordinator furnished the clinician researcher with the blinded list. The clinician then completed the “double-check” with the patient, adjudicating mismatches identified between the clinician history and the blinded list. The researcher then furnished the BPMH to the primary care provider.
Our institutional review board (IRB) required the adjudication step ([Fig. 3], step 6) for safety purposes; it was crucial to disclose all information sources to the primary care team. Furthermore, we used both histories—the clinician-gathered medication history and the BPMH—during statistical analysis to screen for differences or trends between the first and second clinician review.
Researchers classified all discrepancies by descriptive type using a typology adapted from Pippins et al.[80] Researchers also classified discrepancies by root cause using an instrument adapted from Orrico and Smith et al.[1][81] A blinded clinician panel then assigned risk scores using a classification scheme adapted from Pippins et al and Wong et al ([Fig. 3], step 7; see [Appendix Fig. A1] for the risk assessment protocol).[80][82] We used sample sets of discrepancies collected during piloting to train clinician raters on the instrument and calibrate responses.
Analysis
To determine discrepancy rates, we compared the participant-reviewed lists against the EHR list and then calculated the proportion of medications with a discrepancy ([Fig. 3], steps 1–2). Similarly, we compared the participant-reviewed lists against the BPMH and calculated the proportion of medications with a discrepancy ([Fig. 3], steps 2–6). The primary outcome measure was the difference in discrepancy rates between the arms with respect to the EHR list. Our secondary outcome measures were the differences in discrepancy rates and high-risk discrepancy rates between the arms compared with the BPMH. Assuming a discrepancy base rate of 25% per participant list, our power calculations indicated that we needed a sample size of 210 participants to detect a difference of 15% in detection rates between treatment arms.[43][53]
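For readers who want to reproduce the general shape of this calculation, the sketch below uses a standard two-proportion power computation (via statsmodels) inflated by an assumed design effect for within-participant clustering; the design effect value is an illustrative assumption, and the protocol's exact calculation appears in reference [53].

```python
# A back-of-the-envelope sketch of the sample-size logic, assuming a standard
# two-proportion comparison (25% vs. 40% discrepancy rate, alpha = 0.05,
# 80% power) with an assumed clustering inflation factor.
from math import ceil

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

h = proportion_effectsize(0.40, 0.25)       # Cohen's h for a 15% absolute difference
n_per_arm = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80)
design_effect = 1.4                         # assumed inflation for clustering (illustrative)
print(2 * ceil(n_per_arm * design_effect))  # ~212 total, near the study's figure of 210
```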
We used a crossed random-effects model (three phases of evaluation crossed with participant lists) to account for discrepancy-clustering effects at the participant level. Discrepancy rates were calculated for two sets of comparisons: (1) treatment arm versus EHR; and (2) treatment arm versus BPMH. We completed a poststratified analysis to compare differences in discrepancy counts between treatment arms, correcting for the number of medications in each arm (i.e., opportunities to detect an error). We also computed intraclass correlation coefficients (ICCs) of absolute agreement by treatment arm for individual discrepancies at the level of the medication list item across evaluation phases (clinician-gathered medication history and BPMH) to assess whether one of the treatment arms showed significantly more agreement than the other with respect to a given medication list item. No gross difference in agreement was seen between the treatment arms.
We assessed the accuracy of each treatment method relative to the BPMH by comparing the discrepancy status (either “yes” or “no” according to the BPMH within each treatment arm) of each medication list item that was identified by both the BPMH and the treatment. We then tallied the counts of (“yes,” “yes”), (“yes,” “no”), (“no,” “yes”), and (“no,” “no”), where the first response in each pair was from the BPMH and the second was from the treatment method. This yielded two sets of four counts, each arranged in a 2 × 2 cross-tabulation table and analyzed using standard diagnostic agreement metrics. We assumed that the BPMH represented the true status and the treatment method represented the test status. The metrics we report include sensitivity (fraction of test positives among true positives), specificity (fraction of test negatives among true negatives), positive predictive value (fraction of true positives among test positives), negative predictive value (fraction of true negatives among test negatives), positive likelihood ratio (sensitivity divided by the false-positive fraction), and negative likelihood ratio (the false-negative fraction divided by specificity), where the likelihood ratios are weighted by the prevalence odds. Confidence intervals were calculated for each metric on each cross-tabulation using exact binomial distributions.
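These metrics follow directly from the 2 × 2 counts. The sketch below implements the definitions above, including the prevalence-odds weighting of the likelihood ratios; the function name is an illustrative assumption, and applying it to the paper-arm counts later reported in [Table 5] reproduces the published values.

```python
# A minimal sketch of the agreement metrics described above, treating the
# BPMH as the true status and the treatment arm as the test status.
def agreement_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Diagnostic metrics from a 2 x 2 cross-tabulation of discrepancy status.

    tp: ("yes", "yes")  discrepancy per BPMH and per treatment
    fn: ("yes", "no")   discrepancy per BPMH only
    fp: ("no", "yes")   discrepancy per treatment only
    tn: ("no", "no")    no discrepancy per either method
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prev_odds = (tp + fn) / (fp + tn)  # prevalence odds of a true discrepancy
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        # Likelihood ratios weighted by the prevalence odds, per the text.
        "lr_pos": sens / (1 - spec) * prev_odds,
        "lr_neg": (1 - sens) / spec * prev_odds,
    }


# Paper arm (Table 5): reproduces sensitivity 81%, specificity 94%, PPV 89%,
# NPV 90%, LR+ 8.46, and LR- 0.12.
print(agreement_metrics(tp=474, fn=108, fp=56, tn=936))
```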
Results
We assessed 614 patients for eligibility; 220 participants were enrolled and randomized for the study ([Fig. 4]). The study pilot involved eight participants, and three participants were withdrawn, leaving 209 participants in the final analysis. No participants were lost to follow-up. There were no incidents of accidental unblinding of treatment status for the clinician interviewers.
Fig. 4 Flowchart for patient enrollment, randomization, and analysis.
Participant sample characteristics and descriptive statistics are reported in [Table 2]. Most enrolled participants were male, with a mean age of 66.5 years and a mean of 5.6 chronic medical conditions. Each participant had an average of 11.5 active prescriptions. When accounting for recently expired and nonprescription medications, we reviewed an average of 16.7 medications per participant.
Table 2
Descriptive statistics of participant sample

| Characteristic | Control | Intervention | Total |
| --- | --- | --- | --- |
| Participants, count (%) | 102 (48.8) | 107 (51.2) | 209 (100.0) |
| Male gender, count (%) | 99 (97.1) | 101 (94.4) | 200 (95.7) |
| Age (y), mean (SD) | 67.6 (12.3) | 65.5 (12.2) | 66.5 (12.3) |
| Chronic conditions, mean (SD) | 5.5 (2.0) | 5.7 (2.1) | 5.6 (2.1) |
| Medications, mean (SD) | 16.8 (7.1) | 16.7 (8.5) | 16.7 (7.8) |
| – Current prescriptions in EHR | 11.5 (5.7) | 11.5 (6.0) | 11.5 (5.9) |
| – Expired prescriptions in EHR | 2.5 (2.4) | 2.9 (3.1) | 2.7 (2.8) |
| – Newly reported medications not in EHR | 2.9 (2.6) | 2.2 (2.7) | 2.5 (2.7) |
| Education, count (%) | | | |
| – Less than 12th grade | 6 (5.9) | 10 (9.3) | 16 (7.7) |
| – High school graduate | 26 (25.5) | 22 (20.6) | 48 (23.0) |
| – Some college (no degree) | 26 (25.5) | 23 (21.5) | 49 (23.4) |
| – College degree | 44 (43.1) | 52 (48.6) | 96 (45.9) |

Abbreviations: EHR, electronic health record; SD, standard deviation.
Descriptive statistics for all discrepancies detected are outlined in [Table 3]. Using all information sources, our team identified 3,500 medications and 1,435 discrepancies. We detected 530 discrepancies using the paper list and 594 discrepancies using the APHID list. Of the 1,435 discrepancies identified by the BPMH, 657 (46%) were high or very high risk. We traced 47% of the discrepancies to a system-based root cause (i.e., clinical documentation errors) and the remainder to a participant-based root cause (e.g., nonadherence). VA medications accounted for 49% of the discrepancies by prescription status, expired VA medications accounted for 15%, and medications not in the EHR accounted for 37%. We did not identify any differences between treatment arms.
Table 3
Descriptive statistics for all discrepancies identified using any method (N = 3,500)

| Classification | Paper count (%) | APHID count (%) | BPMH count (%) |
| --- | --- | --- | --- |
| Total medication list items | 1,717 | 1,783 | 3,500 |
| Total discrepancies detected | 530 | 594 | 1,435 |
| Discrepancies sorted by potential ADE risk[a] | | | |
| – High or very high risk | 244 (46.0) | 298 (50.2) | 657 (45.8) |
| – Low or medium risk | 284 (53.6) | 296 (49.8) | 775 (54.0) |
| – Missing risk evaluation | 2 (0.4) | 0 (0.0) | 3 (0.2) |
| Discrepancies sorted by root cause | | | |
| – System-based root cause | 224 (42.3) | 259 (43.6) | 674 (47.0) |
| – Participant-based root cause | 306 (57.7) | 335 (56.4) | 761 (53.0) |
| Discrepancies sorted by prescription status | | | |
| – VA medications | 271 (51.1) | 326 (54.9) | 696 (48.5) |
| – Expired VA medications | 97 (18.3) | 136 (22.9) | 211 (14.7) |
| – Medications not in EHR | 162 (30.6) | 132 (22.2) | 528 (36.8) |
| Discrepancies sorted by prescription source | | | |
| – Local VA facility | 294 (55.5) | 359 (60.4) | 764 (53.2) |
| – Remote VA facility | 2 (0.4) | 30 (5.1) | 32 (2.2) |
| – Non-VA | 234 (44.2) | 205 (34.5) | 639 (44.5) |
| Discrepancies sorted by classification | | | |
| – Omission | 256 (48.3) | 253 (42.6) | 705 (49.1) |
| – Commission | 188 (35.5) | 219 (36.9) | 415 (28.9) |
| – Dose | 36 (6.8) | 45 (7.6) | 128 (8.9) |
| – Frequency | 46 (8.7) | 75 (12.6) | 173 (12.1) |
| – Substitution | 4 (0.8) | 2 (0.3) | 14 (1.0) |

Abbreviations: ADE, adverse drug event; APHID, Automated Patient History Intake Device; BPMH, best possible medication history; EHR, electronic health record; VA, Veterans Affairs.
Note: A total of 253 medications were only identified by the BPMH and are reflected in the total count. All percentages represent percent of discrepancies within the treatment group.
a Clinicians did not have enough contextual information to confidently assign a risk category.
There were no statistically significant differences in the rate of discrepancies reported (i.e., primary outcome) for each treatment arm when compared with the study facility EHR ([Table 4]). An average of 35% (0.35 ± 0.20) of medications on each list included a discrepancy (p = 0.89); 15% of all medications in the control arm and 17% of all medications in the intervention arm included a high-risk discrepancy (this corresponds to 43% of all discrepancies in the control group and 49% of all discrepancies in the intervention group).
Table 4
Disagreement between the EHR list, treatment lists, and the reference standard

| Comparison | Paper (102 participants) | APHID (107 participants) | Raw difference | p-Value |
| --- | --- | --- | --- | --- |
| Treatment versus EHR: no. of medications | 1,574 | 1,673 | 99 | |
| Proportion of list with discrepancies | | | | |
| – Total discrepancies, mean (SD) | 0.35 (0.20) | 0.35 (0.19) | 0.00 | 0.89 |
| – High and very high risk, mean (SD) | 0.15 (0.13) | 0.17 (0.15) | 0.02 | 0.34 |
| – System-based, mean (SD) | 0.14 (0.14) | 0.15 (0.13) | 0.01 | 0.64 |
| – Participant-based, mean (SD) | 0.21 (0.21) | 0.20 (0.16) | 0.01 | 0.64 |
| Number of discordant lists, count (%) | 99 (97) | 104 (97) | 5 | 1.00 |
| Treatment versus BPMH: no. of medications | 1,574 | 1,673 | 99 | |
| Proportion of list with discrepancies | | | | |
| – Total discrepancies, mean (SD) | 0.13 (0.12) | 0.13 (0.13) | 0.00 | 0.90 |
| – High and very high risk, mean (SD) | 0.04 (0.07) | 0.05 (0.07) | 0.01 | 0.67 |
| – System-based discrepancies, mean (SD) | 0.04 (0.06) | 0.04 (0.06) | 0.00 | 0.97 |
| – Participant-based discrepancies, mean (SD) | 0.06 (0.09) | 0.05 (0.07) | 0.01 | 0.31 |
| Number of discordant lists, count (%) | 78 (76) | 76 (71) | 2 | 0.43 |
| EHR versus BPMH: no. of medications[a] | 1,717 | 1,783 | 66 | |
| Proportion of list with discrepancies | | | | |
| – Total discrepancies, mean (SD) | 0.43 (0.20) | 0.39 (0.17) | 0.04 | 0.12 |
| – High and very high risk | 0.19 (0.14) | 0.18 (0.14) | 0.00 | 0.67 |
| – System-based | 0.20 (0.17) | 0.17 (0.14) | 0.03 | 0.24 |
| – Participant-based | 0.23 (0.20) | 0.22 (0.17) | 0.01 | 0.55 |
| Discordant lists, count (%) | 101 (99) | 106 (99) | 5 | 1.00 |

Abbreviations: APHID, Automated Patient History Intake Device; BPMH, best possible medication history; EHR, electronic health record; SD, standard deviation.
Note: Discrepancies are reported as the proportion of medications per list with an error. Statistics calculated using the participant as the unit of analysis. N = 209 participants; 3,500 medications.
a The BPMH identified 253 additional medications not on the EHR or in either treatment.
When comparing treatment arms to the BPMH (i.e., secondary outcome), we did not identify any differences in discrepancy rates ([Table 4]). The BPMH identified an additional 253 medications not recorded in the EHR or by either treatment arm. The BPMH included ∼13% (0.13 ± 0.13) more discrepancies than either treatment; 31 to 38% of those discrepancies (4–5% of all medications reviewed) were high risk. As a test for bias (i.e., an interaction between test and reference standard), we compared detection rates for the clinician-gathered medication history with the BPMH. We calculated a concordance rate of 98.6% (kappa = 0.97), arguing against any interaction between the test and the reference standard. When comparing the EHR lists to the BPMH, 207 of the 209 charts (99%) included one or more discrepancies.
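For reference, the concordance check above is the standard two-rater Cohen's kappa. A minimal sketch follows, with illustrative inputs rather than study data.

```python
# A minimal sketch of Cohen's kappa for two raters (the clinician-gathered
# history and the BPMH) judging each medication list item as discrepant or
# not. Inputs below are illustrative, not study data.
def cohens_kappa(a, b):
    """Two-rater Cohen's kappa over paired boolean judgments."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    p_e = (sum(a) / n) * (sum(b) / n) \
        + ((n - sum(a)) / n) * ((n - sum(b)) / n)      # chance agreement
    return (p_o - p_e) / (1 - p_e)


# Example with made-up judgments (True = discrepancy recorded).
print(cohens_kappa([True, True, False, False, True],
                   [True, True, False, False, False]))
```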
[Table 5] lists the cross-tabulations for each treatment arm compared with the BPMH. Cross-tabulations only included medications present in both the treatment list and the BPMH. New medications identified only on the BPMH were undefined for the treatment arms and could not be counted as true positives or false positives. The paper-based process had a sensitivity of 81% and APHID had a sensitivity of 83%. The paper-based process had a specificity of 94% and APHID had a specificity of 91%. Overall, we did not detect a statistically significant difference in the accuracy of either method for detecting discrepancies or high-risk discrepancies when compared with the BPMH.
Table 5
Discrepancy reporting accuracy for each treatment

| BPMH status | Control (paper): discrepancy (+) | Control (paper): no discrepancy (–) | Intervention (APHID): discrepancy (+) | Intervention (APHID): no discrepancy (–) |
| --- | --- | --- | --- | --- |
| Discrepancy | 474 | 108 | 500 | 100 |
| No discrepancy | 56 | 936 | 94 | 979 |
| Total test positives | 530 | | 594 | |

Control total = 1,574 medications; intervention total = 1,673 medications.

| Validity measure | Control (paper), ratio (95% confidence interval) | Intervention (APHID), ratio (95% confidence interval) |
| --- | --- | --- |
| Sensitivity | 81 (78, 85) | 83 (80, 86) |
| Specificity | 94 (93, 96) | 91 (89, 93) |
| Positive predictive value | 89 (87, 92) | 84 (81, 87) |
| Negative predictive value | 90 (88, 91) | 91 (89, 92) |
| Positive likelihood ratio[a] | 8.46 (6.60, 10.86) | 5.32 (4.40, 6.42) |
| Negative likelihood ratio[a] | 0.12 (0.10, 0.14) | 0.10 (0.08, 0.12) |

Abbreviations: APHID, Automated Patient History Intake Device; BPMH, best possible medication history.
Note: An additional 253 medications were identified by the BPMH that were not identified in either treatment arm. Item-wise comparisons cannot be calculated for medications not defined in both methods; they are not represented in the accuracy calculation. Percentages rounded to the nearest whole percent. N = 3,247 medications.
a Likelihood ratios weighted by prevalence.
Discussion
Principal Findings
Discrepancy detection rates were similar between the paper-based and APHID processes for all dimensions of analysis. The addition of medication images did not affect history accuracy. Discrepancy rates, sensitivity statistics, and negative predictive values were virtually identical when compared with the EHR, even after adjusting for participant characteristics, discrepancy risk category, or root cause.
APHID offers an efficient and patient-centered method for collecting a medication history and documenting discrepancies. It compiled a list of medications from across the VA enterprise and helped identify nearly 90% of all discrepancies in our sample. APHID's detection rates compared favorably with best-practice interviews (i.e., ≥ 1 discrepancy in 99% of all EHR lists). In all, over a third of the prescriptions included a discrepancy, nearly half of which were high-risk. Approximately 15% of prescriptions with discrepancies were expired, indicating that asking about expired prescriptions identified errors of omission that might otherwise have gone undetected. This illustrates the value of using a patient-centered and standardized process to collect a medication history.[50][67][68][73][83]
Although the self-service history was more complete than the EHR list, the BPMH identified an additional high-risk discrepancy in nearly 5% of medications reviewed. This demonstrates the importance of clinician engagement. Self-service history collection techniques work in conjunction with—not in lieu of—a clinician-mediated medication history.[41][46][68][73][84]
Several possibilities may explain why our study detected more discrepancies than most other MR studies published in the last 5 years (reported rates of 34–88%).[25][27][37][46][47][72][85][86] First, Veterans tend to be older, more medically complex, and take more medications than the general population. Each of these factors has been shown to correlate with the incidence of medication errors.[74][76][87][88][89][90] Second, sequential history collection steps may improve patient recall. Studies suggest single histories may be less reliable, prone to drift, and overestimate compliance by up to 20%.[76][77][87][91][92][93][94] Third, studies show that data gathered from self-administered questionnaires are less affected by social desirability bias than interviews;[95] patients may be more likely to report nonadherence when independently correcting a medication list. Compared with pill counts, interviews have a reported sensitivity of 55 to 80% and a specificity of 70 to 87%.[77][93] Therefore, our self-reporting techniques may have been more effective than a typical interview. Fourth, our BPMH used a combination of supply chain metadata, local prescription lists, and patient-furnished data.[25][27][96] Combining data sources increases the ability to document “ground truth”—an important consideration for reconciliation systems.[34][74][76][83][90]
This study underscores the importance of sociotechnical fit when implementing a reconciliation program. Clinician engagement, standardized processes, and patient-centered strategies may have greater influence upon the accuracy, quality, and overall success of MR than any specific technology.[85] Most MR methods—including paper—may be equifinal if informaticians optimize workflow and implementation climate (i.e., culture, leadership, and education). Nonetheless, facilities contending with staff shortages, high patient volumes, or large geographic areas may consider using patient-centered technologies such as kiosks, secure messaging, or online portals to improve efficiency and scalability without sacrificing effectiveness or accuracy.[27][29][97]
Study Strengths and Limitations
To the best of our knowledge, there are very few published studies comparing MR strategies and only four other randomized controlled trials (RCTs) conducted in the ambulatory setting, none of which adhere to the CONSORT/STARD guidelines.[19][21][47][74][85][86][96][98][99][100] Systematic reviews of ambulatory MR programs suggest more research is needed to prove the clinical impact of MR.[21][27][74][96][98][101] Our study makes an important contribution to the literature by providing accuracy statistics for multiple collection strategies.
This study has several limitations that may affect the validity of our findings. First, the high number of reported medications, older age, and unbalanced gender distribution of our sample likely affected the results. Our findings may not be generalizable to other settings, and the prevalence of medication discrepancies may be lower in other populations.[43][102] Second, limitations in health information exchange and EHR interoperability prevented us from matching images to nonprescription and non-VA medications (∼27% of medications). This may have caused a type II error. Finally, to provide equal opportunity for discrepancy detection in each treatment arm, we compiled medication information from other VA facilities and generated longer medication lists than those accessible in our local EHR. This may have diluted our ability to identify a treatment effect.
Implications for Future Research and Policy Development
Several hypotheses may account for the apparent absence of a PSE (i.e., the phenomenon where pictures are more likely to be remembered than words). First, we could not match images to non-VA and nonprescription medications. Second, the prescriptions and images were shown on screen, one medication at a time. The similarity in physical appearance of drugs manufactured and distributed in the United States may hamper correct identification. Showing the complete list on one screen might have helped participants disambiguate similar-appearing drugs. Third, there is mixed evidence to suggest that the strength of the PSE decays with age.[103] Finally, the use of computer technology may have caused an interference effect. It is critical that future consumer informatics research using pictures and visual displays isolate the design attributes that reliably promote attention, comprehension, and recall.
MR is a complex adaptive system that demands equally adaptive technologies to support practitioner workflow.[22][34][35][104][105][106] Commensurate effort should be applied to interface usability, data interoperability, and clinician decision support.[26][27][46][84][107][108][109] Federated health systems and managed care enterprises can promote these efforts by spearheading the use of semantically rich medication terminologies such as RxNorm. They can also help by enforcing EHR interoperability standards and funding regional health organizations.[35][110]
We believe our findings also emphasize the need to invest further in consumer informatics tools that engage patients and address U.S. EHR “meaningful use” standards.[111][112] Stage 2 of the meaningful use program established under the Health Information Technology for Economic and Clinical Health (HITECH) Act includes MR as one of the 25 criteria for functionality, and Stage 3 expands the scope to incorporate patient-entered data. These expectations, while laudable, may further stress systems, inadvertently incentivizing organizations to implement less effective solutions such as interruptive reminders or boilerplate templates. Personal health records, mobile devices, secure messaging, and other consumer-driven technologies may provide time-sensitive alternatives to engage patients and collect information that might otherwise be skipped during a busy clinic visit. The effectiveness of these high-tech solutions can be augmented by modest interventions focusing upon purposeful interface design, patient education best practices, and provider interviewing strategies.[11][15][25][26][44][45][46][47][60][87][94][113][114][115][116][117][118][119][120] Finally, patient-centered assistive technologies like smart pillboxes, wearable devices, and Internet-enhanced living environments can improve the fidelity of our data streams.[121][122]
Conclusion
Our study suggests that gathering patient-generated data using EHR-based technologies or pen-and-paper processes can be equally effective in supporting MR. We believe the technology and workflow described herein offer a practical, safe, and scalable method to foster collaboration between patients and care teams. We have offered a strategy that combines EHR technology, business process reengineering, and patient-generated data to augment traditional history collection and improve patient engagement.[45][47][87][91][94][123] Future research should study how to promote MR technology adoption, improve patient self-reporting, and optimize use in specialty care settings.[124]
Clinical Relevance Statement
Consumer informatics technology, such as self-service kiosks, offers a workflow-compatible solution for collecting an accurate medication history and satisfying the Stage 2 meaningful use criteria. This randomized controlled trial shows that patient-facing medication reconciliation software, when thoughtfully implemented using a systems engineering approach, can incorporate patient-furnished data and substantially improve discrepancy detection as compared with usual care. A variety of data collection strategies may be equifinal, and further usability research is needed to understand how to effectively use medication images in consumer-facing interfaces.
Multiple Choice Questions
1. What do the results of this randomized controlled trial indicate about self-service medication history collection software?
a. The inclusion of medication images significantly improves patient accuracy when compared with a reference standard
b. Kiosk technology does not perform as well as more traditional data collection methods such as distribution of paper-based questionnaires
c. Patient-facing self-service technologies tend to be less accurate than usual care clinician-conducted interviews
d. The accuracy of standardized methods for medication history collection, including self-service kiosks and paper questionnaires, is comparable when compared with a reference standard
Correct Answer: The correct answer is option d. This study did not demonstrate a statistically significant difference in the performance characteristics of the software as compared with a paper-based control that did not include images. Rather, discrepancy rates, sensitivity statistics, and negative predictive values were virtually identical when compared with the reference standard. It would not be completely accurate to say the technology was less effective than a clinician-mediated history, since the software's diagnostic performance was at least comparable to, if not better than, that reported for clinician-conducted histories.
2. Which statement best describes the distribution of medication history errors reported in this study?
a. System-based sources of medication discrepancies are more common than patient nonadherence or recall errors
b. An estimated 35 to 40% of collected medication histories include one or more discrepancies
c. Approximately 15% of medication discrepancies are associated with expired prescriptions that the patient is still taking
d. Between 22 and 25% of all medication discrepancies were classified as high-risk by a panel of blinded clinician raters
Correct Answer: The correct answer is option c. Nearly 49% of discrepancies were associated with current prescriptions; 15% were associated with expired medications, and 37% were newly identified over-the-counter medications. Slightly more than half of the errors (an estimated 53%) had a patient-based root cause such as nonadherence. Note that 41% of all medications reviewed were associated with a discrepancy and 99% of medication histories included one or more discrepancies when compared with the reference standard. Over 45% of the discrepancies detected were rated as high or very high risk.
3. When implementing medication history collection software as part of an organizational medication reconciliation strategy, what factors are likely to improve performance?
a. The software is primarily effective in health systems where regional health information exchange is available
b. The software should include patient-facing affordances that improve patient reliability, including medication images, plain language, and simple response controls
c. The implementation team should pay attention to the sociotechnical fit of the product, including workflow compatibility and implementation climate
d. The software should be used primarily in clinical settings where resource constraints preclude collection of a more time-intensive clinician-mediated history
Correct Answer: The correct answer is option c. This study seems to suggest that a variety of data collection methods and technologies can produce a high-quality history. Clinician engagement, standardized process, and patient-centered models may have a greater influence upon the accuracy and success of MR than any specific approach. While the availability of regional data exchange could certainly improve the quality of the history, this study shows that even without a data exchange network, the information gathered may considerably improve upon usual care. Unfortunately, the study did not show that the patient-facing interface or the inclusion of images conferred any additional benefit beyond a simple medication list. The accuracy of each treatment arm was at least comparable to the published statistics associated with a clinician-mediated history. Since results of this study suggest that combining several data sources substantially improves accuracy, a standardized patient-driven history collection method should be combined with a clinician-mediated history.
Appendix Fig. A1 Medication discrepancy risk scoring tool.