Venous thromboembolism (VTE) is a serious complication following orthopaedic surgery.[1] However, VTE is largely preventable if patients receive appropriate thromboprophylaxis before major orthopaedic procedures. Anticoagulation reduces the risk of VTE, but the protective effect must be weighed against the risk of bleeding. In addition, orthopaedic surgery covers a wide range of procedures with varying associated thromboembolic risk.
Knee arthroscopy is a frequently performed orthopaedic procedure, but concomitant thromboembolic prophylaxis is controversial. Arthroscopy is often performed as a short, outpatient procedure in a relatively young patient population. In fact, current guidelines suggest no thromboprophylaxis for patients undergoing arthroscopy.[1] This is supported by the randomized controlled trial ‘The Prevention of Thrombosis after Knee Arthroscopy Trial’ (POT-KAST), which compared thromboprophylaxis with low-molecular-weight heparin versus no treatment following knee arthroscopy in 1,451 patients[2]; no efficacy was found for thromboprophylaxis, as the risk of VTE was similar in the treated and untreated groups. However, risk prediction and tailored thromboprophylactic strategies for high-risk patients undergoing knee arthroscopy remain a topic for further research. Accordingly, guidelines acknowledge that some high-risk patients may benefit from thromboprophylaxis, particularly those with prior VTE.[1] Hence, to optimize decision-making on anticoagulant treatment, a plethora of epidemiological studies investigating predisposing factors, along with risk stratification schemes, have been published.[3,4]
In this issue of the Journal, Nemeth et al aimed to identify high-risk arthroscopy patients by developing three different VTE risk prediction models, one of which was transformed into the L-TRiP(ascopy) score.[2]
A rigorous approach is necessary when developing and validating prediction models.[5] Important aspects include maximizing the accuracy of outcome predictions, minimizing the risk of over-fitting and optimism in predictions, and ensuring general applicability of the clinical prediction model. Some of these steps are factored into the development and validation of the three scores proposed by Nemeth et al. Of note, two distinct populations are used: one for model derivation (the ‘MEGA’ study) and one for model validation (the ‘THE VTE’ study). The authors assessed internal validity using bootstrapping procedures and subsequently examined the performance of the derived model in the ‘THE VTE’ data. Comparable c-statistics were obtained in the derivation and validation cohorts, indicating similar prediction performance in other cohort settings. Despite the acceptable c-statistic of 0.77 for the L-TRiP(ascopy) score, the underlying data hold a clear limitation: in the derivation cohort, only 107 cases and 26 controls had undergone arthroscopy, whereas in the validation cohort only 30 cases and 3 controls had undergone the procedure. The c-statistic is an appropriate measure for comparing the discriminative abilities of different models, because it is independent of the prevalence of the outcome.[5] However, the clinical utility of a prediction model is not captured by the c-statistic, but should instead be measured by, for example, the positive predictive value or the negative predictive value. Such measures, on the other hand, are highly influenced by the prevalence of the outcome in the population. Consequently, accurate estimates of outcome prevalence are needed to examine the clinical utility of a prediction model. Therefore, we agree with Nemeth et al that the optimal model cut-off point for identifying high-risk patients in need of thromboprophylaxis is still lacking. Thus, the clinical utility of the L-TRiP(ascopy) score is currently unknown and warrants further investigation before it is implemented into clinical practice.
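To illustrate how strongly predictive values depend on prevalence, consider a simple worked example with hypothetical numbers that are not taken from the study. For a given cut-off, the positive predictive value follows from Bayes’ theorem:

\[
\text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}
\]

Assuming, purely for illustration, a sensitivity of 0.80 and a specificity of 0.60, the PPV would be approximately 2% at a VTE prevalence of 1% but approximately 18% at a prevalence of 10%, even though the discrimination of the score is identical in both settings.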
Perspectives
Prediction models are becoming increasingly abundant in the medical literature,[6] and critical appraisal of already available models is a prerequisite before interpreting clinical usefulness. The most widely used VTE risk model for surgical patients is the modified Caprini risk assessment model, which is recommended by the American College of Chest Physicians.[7] Comparing the Caprini score with the L-TRiP(ascopy) score reveals a considerable overlap in clinical characteristics. Thus, it seems appealing to first validate the Caprini score in patients undergoing arthroscopy. The next apparent step would be to compare the two models in terms of accurate VTE prediction, based on positive and negative predictive values or the net reclassification index, to assess clinical utility.
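As a point of reference, the categorical net reclassification index summarizes whether reclassification by the new model moves patients in the correct direction relative to the comparator; in its simplest two-category form it can be written as:

\[
\text{NRI} = \big[P(\text{up} \mid \text{event}) - P(\text{down} \mid \text{event})\big] + \big[P(\text{down} \mid \text{non-event}) - P(\text{up} \mid \text{non-event})\big]
\]

where ‘up’ and ‘down’ denote reclassification into a higher or lower risk category, respectively.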
In the current era of the non-vitamin K antagonist oral anticoagulants, much effort has been put into identifying risk factors for incident and recurrent VTE.[8-12] This is appealing because of a potential shift towards lower bleeding risks with the new agents. However, in our academic search for high-risk sub-groups and the ensuing multitude of prediction models, we might end up losing our key audience on the floor, namely the treating clinicians. Justifiably, it is tempting to accept the challenge posed by the guidelines and continue the detailed search for further sub-groups at high risk of VTE who would benefit from treatment. However, when developing new prediction models, the goal must be scores designed for practicality and everyday clinical use, instead of merely adding to the existing heap.