CC BY-NC-ND 4.0 · Rev Bras Ortop (Sao Paulo) 2023; 58(05): e790-e797
DOI: 10.1055/s-0043-1771003
Original Article
Shoulder and Elbow

Construct Validity and Experience of Using a Low-cost Arthroscopic Shoulder Surgery Simulator

Leonardo Dau
1   Head of the Shoulder and Elbow Surgery Group, Hospital Universitário Evangélico Mackenzie (HUEM), Curitiba, PR, Brazil
Paula Adamo Almeida
2   Medical student, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
Alynson Larocca Kulcheski
3   Master's degree student, Graduate Program, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
Paul Andre Milcent
3   Master's degree student, Graduate Program, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
Edmar Stieven Filho
4   Professor, Department of Surgery, Universidade Federal do Paraná, Curitiba, Paraná, Brazil
Financial Support: No source of funding that could influence the results was received.
 

Abstract

Objective To validate the low-cost model for arthroscopy training and analyze the acceptance and usefulness of the developed simulator in medical teaching and training.

Method Ten medical students, ten third-year orthopedic residents, and ten shoulder surgeons each performed predetermined tasks on a shoulder simulator twice. The parameters assessed were the time to complete the tasks, the number of looks at the hands (lookdowns), and the GOALS (Global Operative Assessment of Laparoscopic Skills) score, compared between and within groups. An adapted Likert scale was applied to address the participants' impressions of the simulator and its applicability.

Results In the intergroup comparison, the shoulder surgeons had better scores and times than the other groups. When the tasks were repeated, the group of surgeons showed a 59% improvement in time (p < 0.05), as did the group of medical students. In the GOALS score, shoulder surgeons consistently scored higher than the other groups, and when we evaluated the evolution from the first to the second test, the groups of surgeons and of medical students showed a statistically significant improvement (p < 0.05). The number of lookdowns decreased in all groups. There was consensus that the simulator is useful in training.

Conclusion The simulator developed allowed the differentiation between individuals with different levels of training in arthroscopic surgery. It was accepted by 100% of the participants as a useful tool in arthroscopic shoulder surgical training.



Introduction

Teaching residents in the operating room is didactic, but it can increase cost, morbidity, and mortality for patients.[1] [2] [3] [4] [5] Scott and Dunnington,[6] in a review of the surgical curriculum in the United States, recommended in their article "Moving the Learning Curve out of the Operating Room" that surgical training become more efficient by relying on simulation, learning feedback, and objective ways of assessing skill gains.

Developing arthroscopic skills can be particularly difficult for some surgeons.[7] Simulators provide unlimited training opportunities, but their cost can exceed 80 thousand dollars, making them unfeasible for many educational institutions.[4] [8]

Dry models can be easy to build and inexpensive, arouse interest in trainees, and demonstrate efficiency similar to that of virtual reality.[8] [9] [10] [11] [12] [13] The model studied here was developed with this format and with the concept of using low-cost materials. The materials and step-by-step assembly of the model have already been published,[14] and the present study proposes the validation of this model ([Fig. 1]).

Fig. 1 (a) Fixation of the rotator cuff simulation tape, (b) Demonstration of the positions of the structures, (c) Positioning of the glenoid reference points. Source: Author (2021).


Material and Method

This is a cross-sectional experimental study approved by the Research Ethics Committee of the Hospital do Trabalhador/SESA/PR under number 1,994,655.

The project consisted of validating the shoulder arthroscopy model using construct methodology, comparing groups with different levels of training (surgeons, residents, and medical students). Construct validation verifies whether the model demonstrates differences in dexterity and speed between groups performing standardized activities, and whether scores and speed improve with repetition of the proposed exercises.

In this study, a total of 30 individuals participated, divided into the following groups: ten sixth-year medical students from the Federal University of Paraná (drawn by registration number and invited to participate), and ten third-year orthopedic residents and ten shoulder surgeons from the Hospital de Clínicas/Hospital do Trabalhador (not randomized, as they constituted the entire available population).

All invited individuals signed the informed consent form and, regardless of their level of training, were instructed in the operation of the model with a video of approximately three minutes.

All tests were filmed and analyzed by the authors.

The arthroscope was inserted through a classic viewing portal, a working portal was established through the pre-made rotator interval, and a probe was introduced. The individual was instructed to sequentially touch numbered points marked on the joint.

The second activity was to use the probe to engage the hole in the elastic band mounted in the model and pull until the line drawn on the elastic coincided with the edge of the model's acromion ([Fig. 2]).

Fig. 2 Demonstration of model use: (a) assembled shoulder arthroscopy training model; (b) model in use with the arthroscope; (c) triangulation exercise with the probe in lateral decubitus; (d) tissue manipulation exercise by elastic traction (supraspinatus) in the beach-chair position. Source: Author (2021).

Each individual performed the procedure twice, with a time limit of 600 seconds (10 minutes) to complete each test. After completion, all participants were asked to answer a Likert questionnaire.

The analyzed criteria were the time to complete the tasks, the number of lookdowns (looks at the hands), and the GOALS score. All parameters were evaluated in both tests, both between and within groups. Time was measured in seconds, and the GOALS parameters were assessed with questions assigning grades of 1, 3, or 5 to each performance item, with five being the maximum score and one the minimum.[15] [16]

GOALS score adaptation

Item 1 - Depth perception

1. Constantly misses target, moves too wide, takes time to correct

3. A little exaggerated movement or loss of target, quick to correct

5. Positions instruments in the correct plane to hit the target

We used the triangulation time: up to nine seconds, five points; from ten to twenty seconds, three points; over twenty seconds, one point.



Item 2 - Bimanual dexterity

1. Uses only one hand, ignores non-dominant hand, poor coordination between hands

3. Uses both hands, but does not optimize interaction between them

5. Uses both hands in a complementary way, in order to optimize the activity

We used the time taken to introduce the probe into the hole: up to nine seconds, five points; from ten to twenty seconds, three points; more than twenty seconds, one point.



Item 3 - Efficiency

1. Inefficient efforts: too many movement attempts; constantly shifting focus or persisting with no progress

3. Slow, but planned movements are reasonably organized

5. Confident, efficient and safe; stays focused on the task until it is resolved

We used the number of attempts until the probe was properly positioned in the hole to pull the elastic: one attempt, five points; two to five attempts, three points; more than five attempts, one point.



Item 4 - Tissue manipulation

1. Rough movements, tears tissue, damages adjacent tissue, poor grasper control, grasper often releases tissue

3. Handles tissue reasonably, little trauma to adjacent tissues

5. Manipulates tissues well, applies appropriate traction, minimal injury to adjacent tissues

We used the time to pull the cuff tape: up to five seconds, five points; from six to ten seconds, three points; more than ten seconds, one point.



Item 5 - Autonomy

1. Unable to complete task, even with verbal guidance

3. Able to complete task with moderate guidance

5. Able to complete the task without guidance

We used the amount of guidance required: five points if no guidance was needed; three points if the task was completed with guidance; one point if the task was not completed.
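
For illustration, the following is a minimal sketch of how the adapted rubric above converts the measured times, attempts, and guidance into GOALS points; the function and variable names are illustrative and not part of the published protocol. Each item contributes 1, 3, or 5 points, so the total ranges from 5 to 25.

# Minimal sketch of the adapted GOALS scoring described above.
# Names are illustrative; the thresholds follow the rubric in Items 1-5.

def points_from_time(seconds: float, fast: float, slow: float) -> int:
    """Return 5, 3, or 1 points from a measured time, given the two cut-offs."""
    if seconds <= fast:
        return 5
    if seconds <= slow:
        return 3
    return 1

def adapted_goals_score(triangulation_s: float,
                        probe_insertion_s: float,
                        attempts_to_engage: int,
                        cuff_pull_s: float,
                        completed: bool,
                        needed_guidance: bool) -> int:
    """Total adapted GOALS score (5 items, each worth 1, 3, or 5 points)."""
    depth = points_from_time(triangulation_s, fast=9, slow=20)       # Item 1
    bimanual = points_from_time(probe_insertion_s, fast=9, slow=20)  # Item 2
    if attempts_to_engage == 1:                                      # Item 3
        efficiency = 5
    elif attempts_to_engage <= 5:
        efficiency = 3
    else:
        efficiency = 1
    tissue = points_from_time(cuff_pull_s, fast=5, slow=10)          # Item 4
    if not completed:                                                # Item 5
        autonomy = 1
    elif needed_guidance:
        autonomy = 3
    else:
        autonomy = 5
    return depth + bimanual + efficiency + tissue + autonomy

# Example: triangulation in 8 s, probe insertion in 15 s, elastic engaged in
# 2 attempts, cuff tape pulled in 7 s, task completed without guidance:
# 5 + 3 + 3 + 3 + 5 = 19 points.
print(adapted_goals_score(8, 15, 2, 7, completed=True, needed_guidance=False))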

At the end of the tests, the participants were asked to complete a Likert Scale (modified for this study).

All statistical tests were performed using the free RStudio® software.

For the comparison between the values of the first and second attempts, the Wilcoxon test was used. In the pairwise comparison between groups, the Mann-Whitney test was applied, and the Kruskal-Wallis test was used for the comparison among the three groups.
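
For illustration, a minimal sketch of these comparisons using Python's scipy.stats is shown below; the study itself used RStudio, and the arrays here are hypothetical placeholders, not the study data.

# Sketch of the statistical comparisons described above, using scipy.stats.
# The original analysis was performed in R/RStudio; the arrays below are
# hypothetical placeholders, not the study data.
from scipy import stats

surgeons_t1 = [95, 110, 88, 120, 105, 99, 101, 115, 92, 100]    # times in seconds (illustrative)
surgeons_t2 = [55, 62, 48, 70, 60, 58, 57, 66, 52, 61]
residents_t1 = [230, 210, 260, 200, 240, 225, 215, 250, 205, 235]
students_t1 = [270, 255, 290, 260, 280, 265, 250, 300, 258, 272]

# Intragroup comparison (first vs. second attempt, paired): Wilcoxon signed-rank test
w_stat, w_p = stats.wilcoxon(surgeons_t1, surgeons_t2)

# Pairwise intergroup comparison (independent samples): Mann-Whitney U test
u_stat, u_p = stats.mannwhitneyu(surgeons_t1, residents_t1)

# Comparison among the three groups: Kruskal-Wallis test
h_stat, k_p = stats.kruskal(surgeons_t1, residents_t1, students_t1)

print(f"Wilcoxon p = {w_p:.4f}, Mann-Whitney p = {u_p:.4f}, Kruskal-Wallis p = {k_p:.4f}")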



Results

In the group of students, four were male and six were female, with a mean age of 23.5 years. Among the residents, nine were male and one was female, with a mean age of 29.3 years. Among the surgeons, all were male, with a mean age of 36.1 years.

In the intergroup comparison, the mean time for the first test was 102.59 seconds in the group of surgeons, against 221 seconds in the group of residents and 265 seconds in the group of students, demonstrating a statistical difference; the difference was also present when surgeons and residents and surgeons and students were compared in pairs, but not between residents and students. In the second test, the mean was 59 seconds in the group of surgeons, 86 seconds in the group of residents, and 146 seconds among the students, and again the only comparison without a statistical difference was that between residents and students ([Figs. 3] and [4]).

Fig. 3 Intergroup comparison, time in the first test. Source: The Author (2021). Labels: Comparação de tempos teste 1 = Time comparison, test 1; Segundos = Seconds; Cirurgiões = Surgeons; Residentes = Residents; Acadêmicos = Medical students; Testes = Tests.
Fig. 4 Intergroup comparison, time in the second test. Source: The Author (2021). Labels: Comparação de tempos teste 2 = Time comparison, test 2; Segundos = Seconds; Cirurgiões = Surgeons; Residente = Residents; Acadêmicos = Medical students; Testes = Tests.

In the intragroup comparison between the first and second tests, the surgeons' mean time decreased from 102.59 seconds to 59 seconds, a statistically significant difference. In the group of residents, the mean time decreased from 265.9 to 184.7 seconds, but with no statistical difference (p = 0.08). In the group of students, the time decreased from 376.5 seconds in the first test to 146 seconds in the second test (p = 0.0039).

For the comparison between the three groups, a significant difference was observed in the first (p = 0.00037) and in the second test (p = 0.0048).

The GOALS score in the group of surgeons showed a mean increase from 20.2 to 22.4 from the first to the second test (p = 0.05); in the group of residents, it increased from 13.4 to 15.8 (p = 0.16); and in the group of students, from 9.4 to 15.6 (p = 0.009).

Comparison of GOALS scores in the first test between surgeons and residents, surgeons and students, and residents and students showed that there was a statistical difference (p = 0.0035, p = 0.0002, p = 0.012, respectively). In the second test, the difference between surgeons and residents and surgeons and students was maintained (p = 0.011, p = 0.0045). However, no difference was observed between the group of residents and students (p = 0.73) ([Fig. 5]).

Fig. 5 GOALS score variation between the first and second tests. Source: The Author (2021). Labels: GOALS primeiro e segundo testes = GOALS, first and second tests; Cirurgiões T1 = Surgeons T1; Cirurgiões T2 = Surgeons T2; Residente T1 = Residents T1; Residente T2 = Residents T2; Acadêmicos T1 = Students T1; Acadêmicos T2 = Students T2; Testes = Tests.

The surgeons group showed a mean decrease in the number of lookdowns from 2.6 to 1.2 from the first to the second test (p = 0.29). In the group of residents, we observed a mean decrease from 10 lookdowns in the first test to 4.2 in the second test (p = 0.05), and in the group of students, from 8.6 to 3.6 when comparing the first and second attempts (p = 0.009).

The Likert scale responses indicated that the simulator was considered useful for training surgeons and that it could replace virtual simulators. The simulator was not, however, well accepted as a suitable substitute for cadaver training ([Table 1]).

Table 1 Responses to the Likert questionnaire (% of each group)

Surgeons
1. Is the simulator useful for training beginner surgeons in the area of arthroscopy? Agree: 10; Strongly agree: 90
2. Is simulator training a motivating/enjoyable activity? Agree: 20; Strongly agree: 80
3. Can the low-cost simulator replace a virtual simulator? Neutral: 20; Agree: 30; Strongly agree: 50
4. Can the implementation of simulator training in the medical residency program improve arthroscopy training? Strongly agree: 100
5. Can the low-cost simulator replace cadaver training? Disagree: 30; Neutral: 20; Agree: 40; Strongly agree: 10

Residents
1. Is the simulator useful for training beginner surgeons in the area of arthroscopy? Strongly agree: 100
2. Is simulator training a motivating/enjoyable activity? Neutral: 10; Agree: 10; Strongly agree: 80
3. Can the low-cost simulator replace a virtual simulator? Agree: 40; Strongly agree: 60
4. Can the implementation of simulator training in the medical residency program improve arthroscopy training? Strongly agree: 100
5. Can the low-cost simulator replace cadaver training? Strongly disagree: 10; Disagree: 20; Neutral: 30; Agree: 30; Strongly agree: 10

Medical Students
1. Is the simulator useful for training beginner surgeons in the area of arthroscopy? Agree: 30; Strongly agree: 70
2. Is simulator training a motivating/enjoyable activity? Neutral: 10; Agree: 30; Strongly agree: 60
3. Can the low-cost simulator replace a virtual simulator? Neutral: 30; Agree: 60; Strongly agree: 10
4. Can the implementation of simulator training in the medical residency program improve arthroscopy training? Agree: 20; Strongly agree: 80
5. Can the low-cost simulator replace cadaver training? Disagree: 20; Neutral: 40; Agree: 30; Strongly agree: 10



Discussion

To validate a surgical simulation device, one of the main methods is proficiency differentiation, that is, if the same model is tested by groups of individuals with different levels of training, their performance must differ. In this method, the model must demonstrate differences between groups of different skill levels as well as an evolution in skills with the repetition of tasks.[17] [18] [19] [20] [21] [22] [23] In our comparison between the three groups, both the first and the second tests showed differences in the parameters, a distinction that was maintained in the pairwise comparison of groups, except between residents and students. In another study, an experiment using a laparoscopic model built with a cardboard box and a top-mounted tablet demonstrated that the group of surgeons was consistently faster than the groups of senior and junior residents, results consistent with ours.[24]

When we compared the performance of residents and students in our study, no difference was observed. We noticed, however, that, unlike the group of surgeons, the group of residents showed great variation in the parameters studied, including two outliers with much longer times, which may be related to the fact that training is not uniform in this group.

To determine whether the model provides skill improvement, performance should improve with training.[19] [22] [25] [26] [27] In the repetition of tasks, the groups of surgeons and of students had a significant decrease in time. Although the group of residents reduced its time to about 70% of the initial value, there was no statistical improvement (p = 0.08). Moreover, one participant's time increased fourfold; we characterized this individual as an outlier, but even after removing this result there was no statistical difference. This performance improvement demonstrates that the simulator may have the ability to improve arthroscopic skills.

In a validation study similar to ours, but with a box design, surgeons, residents, and students performed the procedures six times, and progress over time was analyzed. Residents and students were, respectively, 56% and 127% slower than surgeons in completing the proposed tasks and maintained this difference until the last test, corroborating the present study.[22]

When analyzing the evolution of the time to perform the tasks, the group of surgeons improved by 44%, residents by 39%, and students by 45%, a significant improvement for all groups. These findings are consistent with ours, differing only in the group of residents, which may have occurred because of the irregular level of training among our residents, as already mentioned.[22]

For a more objective evaluation of the results, we decided to use the GOALS score, which, despite having been created to evaluate laparoscopic surgeries,[18] had already been used in the evaluation of a shoulder arthroscopy simulator and of an endoscopic flavectomy training model.[15] [28] To be as objective as possible, we created a scale based on the time or number of attempts to perform specific tasks and correlated it with each item of the GOALS score. In the intergroup comparison, we observed differences between all groups in the first test. The same results were observed in the second test comparing surgeons and residents, but not when comparing residents and students.

The aforementioned endoscopic flavectomy study, which compared surgeons and students using the GOALS score, showed differences between groups, corroborating the findings of this study.[28] In a knee model similar to ours, using the ASSET score, students and surgeons were compared and showed a statistical difference, again confirming the hypothesis that the construct allows differentiation between individuals with different levels of experience.[13] For the shoulder, we found a single study using the GOALS score, which evaluated first-year medical students and their evolution with the use of the device; the authors showed significant improvement, as in our study.[15]

Another objective visual parameter adopted was the number of lookdowns.[29] [30] In the group of surgeons, there was only a small change, from 2.6 to 1.2 (p = 0.29), from the first to the second test, which can be explained by the fact that these subjects were already used to performing arthroscopic surgeries, unlike the group of residents, which showed a mean decrease from 10 lookdowns in the first test to 4.2 in the second (p = 0.05), and the group of students, from 8.6 to 3.6 (p = 0.009). When validating a knee simulator, the authors found an average of 47 lookdowns in the group of students, against 16.9 in the group of surgeons, a higher proportion and difference than in the present study. As it is a similar proposal, the discrepancy in the observations can be explained by the fact that there was only one test per individual, with no opportunity for learning in the group of students, and by the more complex meniscectomy procedure, which may justify the greater number of lookdowns by the surgeons.[30]

A fundamental point for a simulator to work well is its level of acceptance by those who will use it.[31] We used the Likert scale, and the participants were unanimous in stating that the simulator is useful for training surgeons and that it was a pleasant activity. Similar results were obtained in the endoscopic flavectomy study and in the knee model.[28] [30] Evaluating a box simulator for arthroscopy, the authors observed that 90% of the inexperienced participants agreed, whereas in the group of experienced participants only 58% of the individuals found it valid. The model used by these authors was not anatomical, but a box with holes, and the tasks were not correlated with surgeries. Thus, despite improving hand-eye coordination for activities without direct vision, it probably did not convey the feeling of operating on a real patient.[21]

The item with the greatest disagreement was whether the simulator could replace cadaver training, with 30% disagreement among surgeons and among residents, and 20% among students, corresponding to the findings of other authors.[28] [30] The cadaver remains the gold standard for simulation, providing identical anatomy and similar tactile sensation, limited only by the lack of bleeding and of active muscle contraction.

We agree with McDougall,[19] who states that surgical simulation will not replace the need for the usual curricular learning, with tutors and practical experience, but that it should allow the acquisition of basic skills, leaving interaction with patients for the refinement of these skills.

The present study has limitations: the number of trained surgeons and residents was limited by the number of individuals available at the institution; validity was not compared with another, already established type of simulator; and we did not assess whether the acquired skills can be transferred to a real surgical situation. The simulator was designed to be as similar to a shoulder as possible; however, the lack of soft tissue and bleeding makes it less realistic. In the future, three-dimensional printing using materials with different textures may be used to better reproduce a real surgical environment.



Conclusions

This study concluded that the simulator developed allowed the differentiation between individuals with different levels of training in arthroscopic surgery. It allowed participants to improve their individual skills as they repeated the proposed tasks. All participants considered the simulator a useful tool in arthroscopic shoulder surgery training.



Work developed at Hospital de Clinicas, Federal University of Paraná, Curitiba, Paraná, Brazil.


  • References

  • 1 Bridges M, Diamond DL. The financial impact of teaching surgical residents in the operating room. Am J Surg 1999; 177 (01) 28-32
  • 2 Scott DJ, Bergen PC, Rege RV. et al. Laparoscopic training on bench models: better and more cost effective than operating room experience?. J Am Coll Surg 2000; 191 (03) 272-283
  • 3 Vincent C, Moorthy K, Sarker SK, Chang A, Darzi AW. Systems approaches to surgical quality and safety: from concept to measurement. Ann Surg 2004; 239 (04) 475-482
  • 4 Fried GM, Feldman LS, Vassiliou MC. et al. Proving the value of simulation in laparoscopic surgery. Ann Surg 2004; 240 (03) 518-525 , discussion 525–528
  • 5 Canbeyli İD, Çırpar M, Oktaş B, Keskinkılıç Sİ. Comparison of bench-top simulation versus traditional training models in diagnostic arthroscopic skills training. Eklem Hastalik Cerrahisi 2018; 29 (03) 130-138
  • 6 Scott DJ, Dunnington GL. The new ACS/APDS Skills Curriculum: moving the learning curve out of the operating room. J Gastrointest Surg 2008; 12 (02) 213-221
  • 7 Pedowitz RA, Esch J, Snyder S. Evaluation of a virtual reality simulator for arthroscopy skills development. Arthroscopy 2002; 18 (06) E29
  • 8 Aslam A, Nason GJ, Giri SK. Homemade laparoscopic surgical simulator: a cost-effective solution to the challenge of acquiring laparoscopic skills?. Ir J Med Sci 2016; 185 (04) 791-796
  • 9 Anastakis DJ, Regehr G, Reznick RK. et al. Assessment of technical skills transfer from the bench training model to the human model. Am J Surg 1999; 177 (02) 167-170
  • 10 Arealis G, Holton J, Rodrigues JB. et al. How to Build Your Simple and Cost-effective Arthroscopic Skills Simulator. Arthrosc Tech 2016; 5 (05) e1039-e1047
  • 11 Travassos TDC, Schneider-Monteiro ED, Santos AMD, Reis LO. Homemade laparoscopic simulator. Acta Cir Bras 2019; 34 (10) e201901006
  • 12 Nunes CP, Kulcheski AL, Almeida PA, Stieven Filho E, Graells XS. Creation of a low-cost endoscopic flavectomy training model. Coluna/Columna 2020; 19 (03) 223-227
  • 13 Milcent PAA, Coelho ARR, Rosa SP. et al. Um simulador de artroscopia de joelho acessível. Rev Bras Educ Med 2020; 44 (01) e038
  • 14 Dau L, Almeida PA, Milcent PAA, Rosa FM, Kulcheski AL, Stieven Filho E. Artroscopia do ombro – Criação de um modelo de treinamento acessível. Rev Bras Ortop 2022; 57 (04) 702-708
  • 15 Henn III RF, Shah N, Warner JJ, Gomoll AH. Shoulder arthroscopy simulator training improves shoulder arthroscopy performance in a cadaveric model. Arthroscopy 2013; 29 (06) 982-985
  • 16 Vassiliou MC, Feldman LS, Andrew CG. et al. A global assessment tool for evaluation of intraoperative laparoscopic skills. Am J Surg 2005; 190 (01) 107-113
  • 17 Atesok K, Mabrey JD, Jazrawi LM, Egol KA. Surgical simulation in orthopaedic skills training. J Am Acad Orthop Surg 2012; 20 (07) 410-422
  • 18 Carter FJ, Schijven MP, Aggarwal R. et al; Work Group for Evaluation and Implementation of Simulators and Skills Training Programmes. Consensus guidelines for validation of virtual reality surgical simulators. Surg Endosc 2005; 19 (12) 1523-1532
  • 19 McDougall EM. Validation of surgical simulators. J Endourol 2007; 21 (03) 244-247
  • 20 Braman JP, Sweet RM, Hananel DM, Ludewig PM, Van Heest AE. Development and validation of a basic arthroscopy skills simulator. Arthroscopy 2015; 31 (01) 104-112
  • 21 Bouaicha S, Jentzsch T, Scheurer F, Rahm S. Validation of an Arthroscopic Training Device. Arthroscopy 2017; 33 (03) 651-658.e1
  • 22 Colaco HB, Hughes K, Pearse E, Arnander M, Tennent D. Construct Validity, Assessment of the Learning Curve, and Experience of Using a Low-Cost Arthroscopic Surgical Simulator. J Surg Educ 2017; 74 (01) 47-54
  • 23 Lopez G, Martin DF, Wright R. et al. Construct Validity for a Cost-effective Arthroscopic Surgery Simulator for Resident Education. J Am Acad Orthop Surg 2016; 24 (12) 886-894
  • 24 Ruparel RK, Brahmbhatt RD, Dove JC. et al. “iTrainers”–novel and inexpensive alternatives to traditional laparoscopic box trainers. Urology 2014; 83 (01) 116-120
  • 25 Rosenthal R, Gantert WA, Scheidegger D, Oertli D. Can skills assessment on a virtual reality trainer predict a surgical trainee's talent in laparoscopic surgery?. Surg Endosc 2006; 20 (08) 1286-1290
  • 26 Gomoll AH, O'Toole RV, Czarnecki J, Warner JJ. Surgical experience correlates with performance on a virtual reality simulator for shoulder arthroscopy. Am J Sports Med 2007; 35 (06) 883-888
  • 27 Dal Molin FF, Mothes FC, Feder MG. Effectiveness of the videoarthroscopy learning process in synthetic shoulder models. Rev Bras Ortop 2015; 47 (01) 83-91
  • 28 Kulcheski ÁL, Stieven-Filho E, Nunes CP, Milcent PAA, Dau L, I-Graells XS. Validation of an endoscopic flavectomy training model. Rev Col Bras Cir 2021; 48: e202027910
  • 29 Alvand A, Khan T, Al-Ali S, Jackson WF, Price AJ, Rees JL. Simple visual parameters for objective assessment of arthroscopic skill. J Bone Joint Surg Am 2012; 94 (13) e97
  • 30 Milcent PAA, Kulcheski AL, Rosa FM, Dau L, Stieven Filho E. Construct validity and experience of using a low-cost arthroscopic knee surgery simulator. J Surg Educ 2021; 78 (01) 292-301
  • 31 Tuijthof GJ, Visser P, Sierevelt IN, Van Dijk CN, Kerkhoffs GM. Does perception of usefulness of arthroscopic simulators differ with levels of experience?. Clin Orthop Relat Res 2011; 469 (06) 1701-1708

Address for Correspondence

Leonardo Dau
Rua Padre Jose Kentenich 900–Casa 02, Curitiba, Paraná
Brazil   

Publication History

Received: 27 May 2022

Accepted: 27 October 2022

Article published online:
30 October 2023

© 2023. Sociedade Brasileira de Ortopedia e Traumatologia. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Revinter Publicações Ltda.
Rua do Matoso 170, Rio de Janeiro, RJ, CEP 20270-135, Brazil

