Methods Inf Med 2004; 43(02): 163-170
DOI: 10.1055/s-0038-1633855
Original Article
Schattauer GmbH

Quality Assessments of HMO Diagnosis Databases Used to Monitor Childhood Vaccine Safety

J. Mullooly
1   Center for Health Research, Kaiser Permanente Northwest, Portland, Oregon, USA
,
L. Drew
1   Center for Health Research, Kaiser Permanente Northwest, Portland, Oregon, USA
,
F. DeStefano
2   Centers for Disease Control and Prevention, Atlanta, Georgia, USA
,
J. Maher
3   Oregon Department of Human Services, Portland, Oregon, USA
,
K. Bohlke
4   Group Health Cooperative, Seattle, Washington, USA
,
V. Immanuel
4   Group Health Cooperative, Seattle, Washington, USA
,
S. Black
5   Kaiser Permanente Southern California, Downey, California, USA
,
E. Lewis
5   Kaiser Permanente Southern California, Downey, California, USA
,
P. Ray
5   Kaiser Permanente Southern California, Downey, California, USA
,
C. Vadheim
6   Kaiser Permanente Northern California, Fresno, California, USA
,
M. Lugg
6   Kaiser Permanente Northern California, Fresno, California, USA
,
R. Chen
2   Centers for Disease Control and Prevention, Atlanta, Georgia, USA

Publication History

Publication Date:
05 February 2018 (online)


Summary

Objective: To assess the quality of automated diagnoses extracted from medical care databases by the Vaccine Safety Datalink (VSD) study.

Methods: Two methods were used to assess the quality of VSD diagnosis data. The first compares common automated and abstracted diagnostic categories (“outcomes”) in 1-2% simple random samples of the study populations. The second estimates the positive predictive values of automated diagnosis codes used to identify potential cases of rare conditions (e.g., acute ataxia) for inclusion in nested case-control medical record abstraction studies.
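For context, the positive predictive value in the second method is conventionally the proportion of code-identified potential cases that medical record abstraction confirms as true cases; in generic notation (not taken from the article),

PPV = TP / (TP + FP),

where TP counts code-flagged children confirmed by abstraction and FP counts those not confirmed.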

Results: There was good agreement (64-68%) between automated and abstracted outcomes in the 1-2% simple random samples at 3 of the 4 VSD sites and poor agreement (44%) at 1 site. Overall at 3 sites, 56% of children with automated cerebellar ataxia codes (ICD-9 = 334) and 22% with “lack of coordination” codes (ICD-9 = 781.3) met objective clinical criteria for acute ataxia.

Conclusions: The misclassification error rates for automated screening outcomes substantially reduce the power of screening analyses and limit their usefulness to the detection of moderate-to-strong vaccine-outcome associations. Medical record verification of outcomes is needed for definitive assessments.
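As a rough illustration of the attenuation behind this conclusion (a standard approximation, not a formula from the article), if false-positive coding occurs at the same rate in vaccinated and unvaccinated children and the outcome is rare, an observed relative risk is pulled toward the null roughly as

RR_observed ≈ 1 + PPV × (RR_true − 1),

where PPV is the positive predictive value of the code in the reference group. With the 56% PPV found for cerebellar ataxia codes, a true relative risk of 2.0 would appear as about 1 + 0.56 × 1.0 ≈ 1.6, illustrating why only moderate to strong vaccine-outcome associations remain detectable without chart verification.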