Abstract
Objective The present study aims to analyze the intra- and interobserver reproducibility of
the Lauge-Hansen, Danis-Weber, and Arbeitsgemeinschaft für Osteosynthesefragen (AO)
classifications for ankle fractures, and the influence of the evaluators' training stage
on these assessments.
Methods Anteroposterior (AP), lateral, and true AP radiographs from 30 patients with ankle
fractures were selected. All images were evaluated by 11 evaluators at different stages
of professional training (5 residents and 6 orthopedic surgeons) at 2 different time points.
Intra- and interobserver agreement was analyzed using the weighted Kappa coefficient.
Student t-tests for paired samples were applied to detect significant differences
in the degree of interobserver agreement between instruments.
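As a minimal sketch of the agreement statistic described above (not the study's actual analysis code), a weighted Kappa coefficient can be computed between two readings of the same images. The ratings below are hypothetical, with Danis-Weber types encoded as 0 = A, 1 = B, 2 = C; the scikit-learn function and the choice of linear weights are assumptions for illustration.

```python
# Sketch: weighted Kappa for intra-observer agreement on hypothetical data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications of 10 fractures by one observer
# at two different time points (intra-observer comparison).
first_reading = [0, 1, 1, 2, 0, 1, 2, 2, 1, 0]
second_reading = [0, 1, 2, 2, 0, 1, 2, 1, 1, 0]

# Linear weights penalize disagreements in proportion to how far
# apart the ordinal categories are (a weighted Kappa analysis).
kappa = cohen_kappa_score(first_reading, second_reading, weights="linear")
print(f"weighted kappa = {kappa:.3f}")
```

A value of 1.0 indicates perfect agreement, 0 indicates agreement no better than chance; interobserver agreement is computed the same way between pairs of evaluators.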
Results Intraobserver analysis showed significant agreement for all classifications.
Interobserver agreement for the Danis-Weber classification was moderate to excellent
and highly significant (p ≤ 0.0001). On average, the Danis-Weber classification showed
a significantly higher degree of agreement than the remaining classification
systems (p ≤ 0.0001).
Conclusion The Danis-Weber classification presented the highest reproducibility among the instruments,
and the evaluators' limited experience had no negative influence on the reproducibility
of the ankle fracture classifications. Level of Evidence II, Diagnostic Studies – Investigating a Diagnostic Test.
Keywords
ankle fractures - classification - reproducibility of results