Thorac Cardiovasc Surg 2026; 74(02): 081-082
DOI: 10.1055/a-2782-7270
Editorial

Comparing Apples and Oranges

Authors

  • Andreas Böning

    1   Department of Cardiovascular Surgery, Justus Liebig University, Giessen, Germany

In Germany, it is said that one compares not apples and oranges, but apples and pears. The phrase is deceptively simple, yet it captures a recurrent challenge in contemporary thoracic and cardiovascular surgery: the tendency to compare entities that appear similar on the surface but differ fundamentally in their characteristics.

This challenge is increasingly evident in how evidence is generated, interpreted, and translated into clinical decision-making. Surgical techniques, devices, and treatment strategies are frequently juxtaposed as if they were interchangeable commodities. However, as with apples and pears, similarity does not equate to equivalence. Differences in anatomy, pathology, patient selection, surgeon expertise, institutional infrastructure, and perioperative management profoundly influence outcomes. Ignoring these distinctions risks drawing conclusions that are, at best, incomplete and, at worst, misleading.

Registry-based analyses compete with prospective randomized studies. In registries, large datasets provide statistical power and real-world insight, yet they are inherently shaped by selection bias and unmeasured confounding. When registry outcomes are contrasted with those from randomized controlled trials—or when registries from different health care systems are compared—the temptation to declare superiority can be strong. As Gaudino and Borger[1] emphasize, “only randomized trials should be used” when assessing true comparative efficacy, and Doenst et al[2] have highlighted the inherent limitations of comparing randomized versus registry evidence.

From the perspective of the Editor-in-Chief, the responsibility of editors, reviewers, and readers alike is clear. Comparisons must be framed transparently, with explicit acknowledgment of their limitations. Methodological rigor must be matched by conceptual clarity: What exactly is being compared, and why? Are the underlying assumptions justified, or are we forcing equivalence where none exists?

In this issue, two original investigations exemplify this challenge: “Is total arterial grafting superior to multiarterial grafting in coronary bypass?” by Leviner et al,[3] and “Postoperative results of patients undergoing minimally invasive tricuspid valve procedure” by Klocksin et al.[4] Both manuscripts address clinically relevant questions and analyze retrospectively sampled data to compare surgical strategies that, while related, are not identical in indication, execution, or patient selection.

As the Editor-in-Chief, I wish to be transparent that each of these papers received a reject recommendation from one of the reviewers, reflecting legitimate concerns regarding comparability, residual confounding, and the risk of overinterpretation. Nevertheless, I elected to retain both manuscripts for publication—explicitly in conjunction with this editorial—to emphasize an important didactic point. These studies should not be read as definitive answers to questions of superiority. Rather, they should be interpreted as hypothesis-generating contributions that illustrate both the value and the inherent limitations of retrospective comparisons.

I therefore encourage readers to approach the presented analyses with appropriate skepticism. Differences observed between groups may reflect selection effects, institutional preferences, or unmeasured variables rather than true causal effects of the surgical technique itself. Awareness of these constraints does not diminish the scientific merit of such work; instead, it places the findings in their proper context and guards against unwarranted generalization.

Recognizing when we are comparing apples with pears—and resisting the urge to declare one categorically superior to the other—is a mark of scientific maturity. By embracing complexity and respecting difference, we can ensure that comparisons illuminate rather than obscure, and that evidence serves its intended purpose: improving outcomes for our patients and advancing the discipline.



Publication History

Article published online:
12 February 2026

© 2026. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany