Endoscopy 2023; 55(09): 857-858
DOI: 10.1055/a-2100-1294
Editorial

Competency assessment: a journey of lifelong learning

Referring to Khan R et al. p. 847–856
Arjun D. Koch
1   Department of Gastroenterology and Hepatology, Erasmus MC Cancer Institute – University Medical Center Rotterdam, the Netherlands

Awareness of quality during training in endoscopy has increased tremendously over the past decades, mainly because we are moving away from minimum threshold numbers as surrogate markers of competence. Our focus has shifted to procedural competency, which is reached when a trainee can independently complete, successfully and repeatedly, all tasks that are required for a specific type of procedure. This is the point at which certification can be obtained and independent practice can commence.

“If we document progression over time and repeatedly perform formative assessment, we literally see the incline of the trainees’ learning curves and thresholds being reached for key performance measures.”

This procedural competence entails more than just technical performance; the cognitive and integrative skills required to select the right patient for the correct indication and to decide on patient management are just as important [1]. The road toward procedural competence is the training phase, during which we, as trainers, have to make sure that our trainees are exposed to, and master, all aspects of these endoscopic technical and nontechnical skills (ENTS). Validated assessment tools are needed to document this formative learning phase, and again in the summative phase, which is commonly regarded as the point of certification. Assessment tools have been developed to help mentors assess trainees in a structured, more unified, and objective manner, and they are indispensable in our training curricula.

Some of us wonder whether we could use virtual reality simulators to assess competency in endoscopy; after all, a computer is highly objective, and a given performance should always generate the same assessment score, reducing rater bias. This has been studied in colonoscopy and esophagogastroduodenoscopy, although no studies have been performed on simulator-based assessment of endoscopic retrograde cholangiopancreatography (ERCP). Unfortunately, it turns out that performance parameters derived from simulators do not correlate with scores given by blinded experts. It seems that our current simulators lack the discriminative power to assess performance and determine competence levels in patient-based endoscopy [2].

ERCP is among the most complex and challenging procedures in gastrointestinal endoscopy. It carries a high risk of complications, so high-quality performance is essential. Complication risks are inseparable from ERCP, but they tend to increase in patients who need the procedure the least or when the indication seems questionable [3]. This once more stresses the importance of ENTS and of focusing on them during training.

It is extremely important that we move away from threshold numbers and train our trainee endoscopists to a level where independent practice is justified. Having said that, a few questions remain.

1. What level of competency would indicate that independent practice can commence? In the recent past, both the British Society of Gastroenterology and the American Society for Gastrointestinal Endoscopy have recommended a common bile duct (CBD) cannulation success rate of 80 %–85 % after completion of ERCP training [4] [5]. A recent quality improvement initiative published by the European Society of Gastrointestinal Endoscopy (ESGE) states that a competent endoscopist should be able to successfully cannulate the CBD in patients with native papillary anatomy in at least 90 % of cases [6]. If this is expected of an independently practising endoscopist, it makes sense that it should also be the target for certification after training. It is remarkable, therefore, that the same ESGE has issued a position statement on ERCP training in which a CBD cannulation rate of ≥ 80 % is upheld, increasing to 90 % after a “mentored period of independent practice” [7]. The reason for this approach is debatable.

2. Do we have the means to assess and document competency development and the procedural competence that marks the end point of training and the starting point of independent practice? In this issue of Endoscopy, the systematic review by Khan et al. [8] on the validity of our currently available ERCP assessment tools suggests that we do. Validity evidence supporting our ERCP assessment tools is essential because it proves that the tools we use actually document and measure what they were designed for. This systematic review shows that three assessment tools demonstrate excellent validity evidence to support their use in formative assessment during ERCP training: the Bethesda ERCP Skills Assessment Tool (BESAT), the ERCP Direct Observation of Procedural Skills Tool (ERCP DOPS), and The Endoscopic Ultrasound (EUS) and ERCP Skills Assessment Tool (TEESAT). These tools have not, however, been validated for summative assessment, which raises the question of whether they can be used for certification. The answer to this question is not that simple. In a final exam scenario, a student gets a single chance to demonstrate the skills acquired over the entire training period. The ERCP case might be too easy or completely impossible; both pose serious challenges for good assessment if there is no way to compensate for them. At the very least, the assessment tool should be as objective and reproducible as possible and rule out any rater bias. In this regard, I would argue that the ERCP DOPS is probably the most useful tool for summative assessment. The TEESAT seems to lack internal structure, and the BESAT focuses too much on the technical skills side, although its video-based assessment might be useful in summative assessment.

Another way of looking at this issue is to question whether summative assessment is really necessary. If we document progression over time and repeatedly perform formative assessment, we literally see the incline of the trainees’ learning curves and thresholds being reached for key performance measures. Then, in independent practice we can continue the same lifelong monitoring. After all, in many ways, competency assessment is a journey of lifelong learning.



Publication History

Article published online:
13 July 2023
