Appl Clin Inform 2023; 14(03): 448-454
DOI: 10.1055/a-2065-4613
Research Article

Validation of an Automated Symptom-Based Triage Tool in Ophthalmology

Elana Meer#
1   Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, United States
2   Department of Ophthalmology, University of California San Francisco, San Francisco, California, United States
,
Meera S. Ramakrishnan#
1   Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, United States
,
Gideon Whitehead
1   Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, United States
,
Damien Leri
3   Center for Health Incentives and Behavioral Economics, University of Pennsylvania, Philadelphia, Pennsylvania, United States
4   Penn Medicine Center for Health Care Innovation, University of Pennsylvania Health System, Philadelphia, Pennsylvania, United States
,
Roy Rosin
3   Center for Health Incentives and Behavioral Economics, University of Pennsylvania, Philadelphia, Pennsylvania, United States
4   Penn Medicine Center for Health Care Innovation, University of Pennsylvania Health System, Philadelphia, Pennsylvania, United States
,
Brian VanderBeek
1   Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, United States

Funding None.
 

Abstract

Objectives Acute care ophthalmic clinics often suffer from inefficient triage, leading to suboptimal patient access and resource utilization. This study reports the preliminary results of a novel, symptom-based, patient-directed, online triage tool developed to address the most common acute ophthalmic diagnoses and associated presenting symptoms.

Methods A retrospective chart review of patients who presented to a tertiary academic medical center's urgent eye clinic after being referred for an urgent, semi-urgent, or nonurgent visit by the ophthalmic triage tool between January 1, 2021 and January 1, 2022 was performed. Concordance between triage category and severity of diagnosis on the subsequent clinic visit was assessed.

Results The online triage tool was utilized 1,370 times by call center administrators (phone triage group) and 95 times by patients directly (web triage group). Of all patients triaged with the tool, 8.50% were deemed urgent, 59.2% semi-urgent, and 32.3% nonurgent. At the subsequent clinic visit, the history of present illness showed significant agreement with the symptoms reported to the triage tool (99.3% agreement, weighted kappa = 0.980, p < 0.001). The triage algorithm also showed significant agreement with the severity of the physician diagnosis (97.0% agreement, weighted kappa = 0.912, p < 0.001). Zero patients were found to have a diagnosis on exam that should have corresponded to a higher urgency level on the triage tool.

Conclusion The automated ophthalmic triage algorithm was able to safely and effectively triage patients based on symptoms. Future work should focus on the utility of this tool to reduce nonurgent patient load in urgent clinical settings and to improve access for patients who require urgent medical care.



Background and Significance

The reduced capacity imposed by the coronavirus disease 2019 (COVID-19) shutdown highlighted the importance of efficient and accurate triage of patients with ophthalmic complaints based on symptoms.[1] [2] However, the challenge of appropriately triaging ophthalmic patients was not new to the ophthalmic community. Requests for urgent ophthalmologic visits can arise both from providers (primary care physicians, optometrists, other physicians) and from patients themselves. Yet almost half of eye-related emergency department (ED) visits in the United States are nonemergent,[3] and referring providers have been shown to accurately diagnose or triage patients less than 50% of the time,[4] [5] [6] highlighting the inherent difficulty in triaging ophthalmic conditions. The method of triage frequently involves a stepwise approach: urgent concerns are commonly routed to an administrative assistant with varying levels of ophthalmic knowledge or to ophthalmic medical staff (medical assistants, technicians, optometrists, or ophthalmologists). Unless dedicated “walk-in” patient slots are available, triage, typically by ophthalmic medical staff, is required to establish the appropriateness of an urgent visit based on the patient's current symptoms and to avoid overwhelming limited resources.

Formalized triage protocols have proven helpful in identifying patients for whom prompt outpatient ophthalmic examination may be more safely considered.[7] However, existing symptom checkers and tele-triage systems are limited by either accuracy or the need for provider involvement.[8] [9] [10] Different triage scoring methods (the Rome Eye and Alphabetical Triage Score for Ophthalmology scoring systems) and computer-assisted self-triage have also been applied to ophthalmic emergency rooms in Europe; however, there has been limited uptake in the United States.[11] [12] [13] [14] These systems nonetheless reinforce the idea that an automated self-triage tool could facilitate safe and efficacious triage in resource- and provider-limited situations.

In the setting of an acute need to reduce the use of ophthalmology-trained personnel to triage patient complaints, an automated ophthalmic symptom triage tool was developed. Here we describe the development, validation, and implementation of this novel tool, designed to safely and accurately assess patient complaints and determine the urgency of follow-up visits.



Methods

This was a single-center retrospective cohort study of all patients who utilized the automated ophthalmic triage tool at the Scheie Eye Institute of the University of Pennsylvania. The Scheie Eye Institute is a tertiary referral academic eye center affiliated with the University of Pennsylvania Health System. Apart from individual provider clinics, the Scheie Eye Institute also has a daily ophthalmology urgent care clinic that serves urgent ophthalmic needs on a walk-in or appointment basis, via internal or external provider- or self-referral. Patients may walk in or call the departmental call center to be seen in this clinic and are added to the day's schedule without any triaging guidelines until clinic capacity is reached. At our institution in particular, a generalized call center results in a limited and fluctuating fund of knowledge due to high staff turnover and the shifting needs of the call center itself; for example, a staff member may be expected to respond to ophthalmology calls one day and orthopaedics calls the next. Thus, the urgent clinic often reaches capacity for a given day, in part due to the lack of triaging of patients who walk or call in. Frequently, once capacity is reached, call center personnel contact resident physicians to triage concerned patients appropriately. Prior to the development of the tool, the departmental call center, which is staffed by administrative assistants, had minimal clinical guidelines for triaging symptom-related calls. To address this issue, an automated triage algorithm was developed, using methodology similar to that used in the development and implementation of automated COVID-19 triage tools.[15] As a quality improvement project, this study was reviewed and deemed exempt by the University of Pennsylvania Institutional Review Board.

Development of the Triage Algorithm

A symptom-based approach was used by a U.S. ophthalmology resident (M.S.R.) to create the triage algorithm. Common presenting ophthalmic symptoms formed the base of the algorithm tree, including vision loss, eye pain, red eye, diplopia, flashes and floaters, tearing, and eyelid or pupil changes. For each of these symptoms, the most common, urgent, and any “can't miss” diagnoses were considered to create a series of binary questions probing for duration, frequency, and relevant context that might identify the severity of the patient's condition and guide appropriate visit timing. These diagnoses were selected based on clinical experience, verified against the Wills Eye Manual, and checked against the common clinical diagnoses used by Shen et al to assess the accuracy of a popular online symptom checker.[8] [16] For example, symptoms of sudden or transient vision loss, new flashes and floaters, and a history of recent trauma or eye surgery were in general prioritized. The algorithm ultimately produced one of three triage recommendations: urgent (same-day referral to the urgent care clinic), semi-urgent (follow-up with an ophthalmologist within 4 weeks), or nonurgent (follow-up with an ophthalmologist in 4 to 9 weeks). Care was taken to use plain language at an elementary school reading level for ease of use by patients and call center staff.[17] The algorithm was then reviewed by three U.S. board-certified ophthalmologists, who unanimously agreed on the decision tree. The algorithm is provided in [Supplementary Fig. S1] (available in the online version).
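To make the structure of such a decision tree concrete, below is a minimal Python sketch of a symptom-based triage function. The branch wording, thresholds, and helper names are illustrative assumptions for exposition only; the deployed algorithm ([Supplementary Fig. S1]) contains many more questions per symptom.

```python
# Minimal sketch of a symptom-based triage decision tree.
# Branches and thresholds are illustrative, not the deployed algorithm.

URGENT = "urgent: same-day urgent care visit"
SEMI_URGENT = "semi-urgent: ophthalmologist within 4 weeks"
NON_URGENT = "nonurgent: ophthalmologist in 4 to 9 weeks"

def triage(symptom: str, answers: dict) -> str:
    """Map a presenting symptom plus yes/no follow-up answers to a triage level."""
    if symptom == "vision loss":
        # Sudden/transient vision loss and recent trauma/surgery are "can't miss".
        if answers.get("sudden_onset") or answers.get("recent_trauma_or_surgery"):
            return URGENT
        return SEMI_URGENT if answers.get("duration_under_4_weeks") else NON_URGENT
    if symptom == "flashes and floaters":
        # New flashes/floaters raise concern for retinal tear or detachment.
        return URGENT if answers.get("new_onset") else SEMI_URGENT
    if symptom in ("red eye", "tearing", "eyelid problem"):
        return SEMI_URGENT if answers.get("eye_pain") else NON_URGENT
    return SEMI_URGENT  # default to the more cautious level when unsure

print(triage("flashes and floaters", {"new_onset": True}))  # urgent: same-day ...
```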

During initial development, the tool was validated using 50 charts from the walk-in urgent care clinic, selected by taking every other chart from the list of patient charts. Researchers completed the tool based on the patients' quoted chief complaints and history of present illness (HPI) from the clinic notes. The results of the triage tool were then compared with the clinician's assessment and plan to establish the baseline efficacy of the tool prior to deploying it through the web portal and the call center.



Implementation of the Triage Tool

The triage algorithm was developed into a publicly available web application.[18] Two modes of accessing the triage tool were developed. Patients who had web access and could navigate the simple Web site used the tool directly, selecting their symptoms and receiving a triage recommendation (the web triage group). For those who called our call center with symptoms, the call center representatives used the same Web site to input the symptoms and reach a triage recommendation (the phone triage group). Regardless of the method of use, the internet-based tool generated an automated message that included all pertinent patient demographics, patient responses, and the triage recommendation, which was then relayed to the electronic health record (EHR). Within the EHR, these messages were routed to a specialized inbox that was accessible only to the administrative assistants and ophthalmic technicians responsible for scheduling appointments. As such, treating physicians were masked to the triage tool results at the time of the follow-up clinic encounter. For messages that indicated an urgent follow-up, an ophthalmic technician would contact the patient to address the urgent need (either being seen in the urgent clinic or sent to the ED for after-hours care); administrative assistants addressed the rest.
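As a rough sketch of the automated message flow described above, the following Python snippet assembles the fields named in the text (patient demographics, responses, and the triage recommendation) into a single record and hands it to a delivery stub. The field names and the `send_to_ehr_inbox` function are hypothetical assumptions; the paper does not describe the actual EHR interface.

```python
# Hypothetical sketch of the automated triage message; field names and the
# EHR delivery stub are assumptions, not the actual interface used.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TriageMessage:
    patient_name: str
    date_of_birth: str
    phone: str
    responses: dict          # question -> answer, as entered into the tool
    recommendation: str      # "urgent" | "semi-urgent" | "nonurgent"
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def send_to_ehr_inbox(message: TriageMessage) -> None:
    """Stub: in production this would post to the EHR's scheduling inbox."""
    print("Routing to scheduling pool:", asdict(message))

msg = TriageMessage(
    patient_name="Jane Doe",
    date_of_birth="1970-01-01",
    phone="555-0100",
    responses={"symptom": "flashes and floaters", "new_onset": "yes"},
    recommendation="urgent",
)
send_to_ehr_inbox(msg)  # urgent messages trigger same-day technician follow-up
```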



Study Outcomes and Statistical Analysis

Factors assessed with regard to use of the triage tool included demographics, tool usage rates, and the distribution of triage levels over the study period of January 1, 2021 to January 1, 2022. The first clinic note after completion of the triage tool was reviewed to determine the primary outcomes of (1) concordance between the HPI presented in the clinic note and the symptoms reported to the triage tool and (2) concordance between the acuity of the diagnosis made by the clinician and the triage acuity designated by the triage tool. Agreement for each was assessed via weighted kappa. Criteria for an HPI match included the symptoms and timeline input into the tool as shown in [Supplementary Fig. S1] (available in the online version). For an HPI match to occur, the same symptoms must have been reported to the tool as were reported to the clinician and written in the notes (e.g., vision loss/change in vision, flashes/floaters, double vision, red eye, eye pain, tearing, eyelid problem), and the same timeline must have been reported to the triage tool as documented in the HPI of the clinic notes (e.g., symptoms for < 2, 4, 6 wk).

To assess clinical severity, all eye-related diagnosis codes were prespecified as vision threatening (e.g., corneal ulcer, retinal detachment, cellulitis), acute (e.g., corneal abrasion, posterior vitreous detachment, stye), chronic (e.g., cataract, glaucoma), or benign (e.g., subconjunctival hemorrhage, hidrocystoma). To be considered concordant, diagnoses that were vision threatening or acute were expected to be associated with an urgent triage, whereas diagnoses that were chronic or benign were expected to be associated with a semi-urgent or nonurgent triage. Where assessments differed, discrepancies were classified as uptriages (the provider assessed a higher clinical severity than the tool assigned) or downtriages (the provider assessed a lower clinical severity than the tool assigned).

For the concordance review, the charts of all web triage tool completions were chosen, along with the charts of a 10% random sample of call center triage patients. Because the frequency of discrepancies between the triage recommendation and the chart was unknown in advance, an a priori power analysis to guide sample size was not possible; the 10% sample of call center patients was therefore chosen for investigator convenience, with the goal of a reasonably sized chart review comparable to the web triage group. Two judges (E.M. and M.S.R.) performed the agreement assessments, using the prespecified clinical severity and triage acuity classifications to reduce confirmation bias. Disagreements were to be discussed between the two judges and resolved with final approval from a third judge (B.V.); however, no disagreements were encountered. Descriptive statistics were calculated and reported. Weighted kappa values were generated using STATA 14.2 (College Station, Texas, United States).
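For readers who wish to reproduce the agreement statistic, the sketch below computes a weighted kappa on ordinal triage levels using scikit-learn. The study's analysis was performed in STATA 14.2, and the exact weighting scheme is not stated, so both the library and the linear weighting here are assumptions.

```python
# Weighted kappa between tool triage acuity and clinician diagnosis severity.
# The paper used STATA 14.2; scikit-learn and linear weights are assumptions.
from sklearn.metrics import cohen_kappa_score

LEVELS = {"nonurgent": 0, "semi-urgent": 1, "urgent": 2}

# Toy ratings for illustration (tool output vs. clinician severity).
tool = ["urgent", "urgent", "semi-urgent", "semi-urgent", "nonurgent"]
clinician = ["urgent", "semi-urgent", "semi-urgent", "semi-urgent", "nonurgent"]

kappa = cohen_kappa_score(
    [LEVELS[x] for x in tool],
    [LEVELS[x] for x in clinician],
    weights="linear",  # penalize two-level disagreements more than one-level
)
print(f"weighted kappa = {kappa:.3f}")
```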



Results

Pilot Testing and Primary Validation

For 43 of the 50 (86%) patient visits used for pilot testing, the tool's triage acuity matched the severity of the clinician's diagnosis. The remaining 7 patients (14%) were downtriaged by the provider, meaning that the tool was, by design, more cautious than the clinician and suggested more urgent care than the clinician deemed necessary (e.g., a same-day rather than a semi-urgent appointment, or a semi-urgent rather than a nonurgent appointment). Given the reassuring accuracy and appropriately conservative results of the algorithm, the triage tool was then deployed across the department through the web portal and for use by our call center representatives. Multiple orientation meetings introduced the call center staff and administrative assistants to the web app, which they found user-friendly. The administrative assistants and ophthalmic technicians were trained on how to access the call center message pool where triage results were sent and how to respond to these messages to schedule appointments in a timely manner, in accordance with the triage tool's recommended urgency.



Postimplementation Results

Once deployed, the triage tool was completed 1,465 times (1,370 in the phone triage group, 95 in the web triage group) over the 12-month study period. A total of 66.7% of all patients triaged with the tool were female; 50.7% identified as Black, 28.7% Caucasian, 4.4% Asian, and 9.8% Hispanic/Latino. Mean age was 51.6 years (standard deviation [SD] ± 20.2) for all triaged patients, 43.1 years (SD ± 17.1) for web triage users, and 57.2 years (SD ± 20.2) for phone triage users. Using the tool, 8.50% of patients were deemed urgent, 59.2% semi-urgent, and 32.3% nonurgent ([Table 1]).

Table 1 Triage acuity pool distribution

| Patients triaged | Urgent | Semi-urgent | Nonurgent |
| Administrative staff triage tool (via call center; N = 1,370) | 83 (6.10%) | 829 (60.5%) | 458 (33.4%) |
| Patient-directed triage tool (via web; N = 95) | 42 (44.2%) | 38 (40.0%) | 15 (15.8%) |
| Total (N = 1,465) | 125 (8.50%) | 867 (59.2%) | 473 (32.3%) |

All 85 patients in the web triage group and a 10% random sample (137 patients) of the 1,370 in the phone triage group had charts reviewed, for a total of 222 patients. Of these, 70 patients either did not respond to a follow-up phone call for visit scheduling or did not show up for their scheduled follow-up, leaving 152 patients available for chart review. A higher percentage of web app patients did not follow up (34/85 [40.0%]) compared with phone triage patients (36/137 [25.5%]; p = 0.02). Of the 51 web triage patients who followed up, 24 (47%) were deemed urgent, 20 (39%) semi-urgent, and 7 (14%) nonurgent. The proportion of urgent triages was significantly higher than among phone triage patients who followed up: 10 (10%) urgent, 66 (65%) semi-urgent, and 25 (25%) nonurgent (p < 0.001).

The triage tool matched well with what the patient symptomatically described at the time of the visit ([Table 2]). Only 1 of the 152 patients had an HPI that did not match the triage tool, because the patient reported a longer duration of vision changes at the visit than was reported to the tool. This yielded an overall agreement of 99.3% and a weighted kappa of 0.981 (p < 0.001). When comparing the triage tool to the clinician's diagnosis at the end of the follow-up visit, discordance was found in 9 patients: 6 of the 34 patients triaged for an urgent visit were found to have a semi-urgent diagnosis, and 3 of the 86 patients triaged for a semi-urgent visit were found to have a nonurgent issue on exam. Of note, providers did not know the triage result at the time of the visit, so the triage tool results did not affect the providers' assessment of the urgency of the symptoms. Zero patients were found to have a diagnosis on exam that should have corresponded to a higher urgency level on the triage tool. This yielded an overall agreement of 97.0% and a weighted kappa of 0.912 (p < 0.001). Little difference in agreement between the triage tool and the clinical diagnosis was seen by method of tool use (web-based agreement: 95.1%, weighted kappa = 0.866, p < 0.001; phone-based agreement: 98.0%, weighted kappa = 0.929, p < 0.001).

Table 2 Concordance between triage tool acuity and clinician diagnosis severity for the selected review of patients (85 web triage, 137 call patients [10% of unique call triage])[a]

Columns show the urgency of diagnosis based on the clinician note.

| Triage tool acuity | Urgent | Semi-urgent | Nonurgent | Total (with follow-up) |
| Urgent | 28 (82.4%) | 6 (17.6%) | 0 (0.0%) | 34 (22.4%) |
| Semi-urgent | 0 (0.0%) | 83 (96.5%) | 3 (3.49%) | 86 (56.6%) |
| Nonurgent | 0 | 0 | 32 (100%) | 32 (21.0%) |
| Total | 28 (18.4%) | 89 (58.6%) | 35 (23.0%) | 152 |

a Thirty-six of the call triage patients did not follow up and 34 of the web triage patients did not follow up.




Discussion

In this study, we demonstrated the development, utility, and validation of an automated triage tool based on ophthalmic symptoms. Our results showed that this tool can safely triage patients to an appropriate timing of follow-up. While previous reports have described ophthalmic tele-triage systems prompted by the COVID-19 pandemic, our tool is novel in that it is automated and does not rely on a triaging ophthalmic medical provider, such as a nurse practitioner, resident physician, or ophthalmic technician.[5] [6] [9] [10] [19] Although its creation was necessitated by the Scheie Eye Institute's COVID-19-related shutdown, the need for and applicability of this tool exist regardless of “shutdown” status. It continues to be used daily in our offices, helping to manage our urgent same-day clinic, and can be applied to any office with limited resources for patient volume.

Most importantly, the triage tool demonstrated an ability to triage patients appropriately. Of note, this triage tool did not incorporate artificial intelligence; it simply encoded the expert-derived algorithm as a web-based automated decision tree. There was an overall 97% concordance between triage tool acuity and the clinician's (an ophthalmology resident with oversight by board-certified ophthalmology attendings) assessment and plan. As an automated tool, the emphasis was placed on being highly sensitive in detecting symptoms requiring an urgent visit, at the cost of potentially uptriaging less urgent symptoms or conditions. Consequently, there were zero instances in practice where the tool recommended a lower triage level than what was clinically indicated; however, it is important to note that some patients triaged as semi-urgent or nonurgent may not have followed up with a provider afterward. Even so, this considerably outperforms the most popular available ophthalmic symptom checker, WebMD, which attained only 26% diagnostic accuracy and inappropriately mistriaged 60% of emergent cases in a study by Shen et al.[8] The automated triage tool also fared comparably to tele-triage systems reported in the United Kingdom and Paris, which achieved mistriage rates of 0.3% and 1%, respectively, but required a triaging ophthalmology provider to be on the phone with patients.[9] [10]

Our study also found that only 7.5% of patients with symptoms triaged as urgent or semi-urgent by the tool could ultimately have been safely triaged at a lower acuity. For this small group of patients, a more urgent visit than necessary was performed; in our opinion, this is a welcome trade-off for reducing overall clinical burden without undertriaging any urgent cases. Due to institutional constraints at the time of the COVID-19 shutdown (after scrubbing of schedules, over 2,500 patients had visits canceled and were immediately placed on a wait list for future evaluation), the current triage levels were created to manage new symptoms as a same-day visit or as semi- or nonurgent cases, respectively. Depending on the needs of any clinic utilizing this tool, it could easily be modified into urgent/nonurgent categories with no loss in effectiveness, as some physicians with clinical availability may not want patients to wait more than 4 weeks for an exam after calling about a symptom.

The COVID-19 pandemic was a significant impetus for implementing triage systems in ophthalmology departments throughout the world, but the use of this tool extends beyond limiting patient visits for social distancing. Our study found that 32% of all patient symptom calls were nonurgent and only 8.5% were urgent. This is consistent with Scanzera et al, who reported on a tele-triage system at an academic urgent eye clinic during the COVID-19 shelter-in-place period, in which 30% of patient calls were nonurgent.[19] Effective triage systems can reduce unnecessary urgent visits and improve health care resource utilization, a need that existed before and continues long after the COVID-19 shutdowns. In one instance, Bourdon et al noted that teleconsultations enabled a 73% reduction in patient visits to an emergency ophthalmic department in France.[9] These findings highlight the potential for effective triage systems to reduce unnecessary emergency visits, which may allay patient fears of unnecessary health care costs while improving health care resource utilization.

While these are important goals, it must also be noted that they may not translate to increased patient satisfaction. Some patients may be relieved to receive automated feedback that their symptoms are nonurgent. Others, however, may derive comfort from speaking with or seeing a provider and may be dissatisfied by an automated response that their symptoms are nonurgent. In the subanalysis of the web triage group, 40% did not seek follow-up care, despite a third of those patients having been triaged as needing urgent visits; in the subanalysis of the phone triage group, 25% did not follow up with a visit, but only 8% were deemed urgent. It is unclear why urgently triaged web patients were less likely to follow up. It is possible that the need to wait for a subsequent phone call to schedule the exam was off-putting; future versions of the triage tool with better EHR integration may allow direct scheduling at completion of the tool. Alternatively, patients given an urgent recommendation may have sought care outside of our institution. Of note, there are several eye institutions in our city, one of which includes a world-renowned 24-hour eye emergency room, so patients may have responded to our triage tool's urgent recommendation appropriately and sought care elsewhere. Conversely, a nonurgent triage could have provided a level of justification not to seek care. These factors may all play a role in why 30% of patients who used the tool did not follow up for a visit at our institution, and more work is needed to evaluate these contributors. Further analysis of this subset of patients will be crucial to understanding utilization behaviors and to comprehensively examining how this system may affect patients.

There are several additional limitations to this study that are important to discuss. First, the study was limited by an inability to follow up with those who may have started but did not finish the tool. It is possible that a subset of patients with urgent conditions went straight to the ED instead of completing the tool, which could have affected the overall results and possibly the level of agreement between provider and tool. Second, the concordance analysis may have been affected by some level of confirmation bias, as the adjudicators were also involved in developing the triage algorithm. To mitigate this risk, however, a prespecified classification scheme was used to determine agreement between the tool's triage acuity and the clinical diagnosis severity. It is also possible that the triage tool's series of questions for each symptom influenced how the patient reported their symptoms at the subsequent clinic encounter, which could have introduced confirmation bias into the HPI concordance analysis. Indeed, the tool's algorithm was formulated to mimic how a clinician typically approaches a particular type of ophthalmic symptom with associated follow-up questions. In addition, it was beyond the scope of this study to evaluate the possibility that both the tool and the clinician incorrectly assessed the patient.

It is also important for future studies to investigate why the urgent triage rate in the call center group was substantially lower than in the web triage group. Another key feature of future iterations of the tool will be multilingual translations, exploring the tool's ability to accommodate non-native English speakers and a diverse patient population. Finally, to confirm noninferiority or equivalence to human triage (no triage tool), a two-armed prospective randomized trial is necessary to deliver stronger evidence on the performance of the algorithm. Similarly, this study could not quantify any effect of the tool in minimizing the number of nonurgent visits brought into an academic tertiary medical center's urgent eye care clinic; future work with a comparison group is therefore necessary to determine whether the tool reduces the number of patients directed to the urgent eye care clinic or otherwise reduces clinic burden.

This study shows the utility of an automated ophthalmic triage tool to screen the urgency of visit requests. The tool was validated and found to be highly concordant with both the initial presentation of illness and the final diagnosis. With the increasing availability and validity of home-based testing of basic “eye vital signs” such as visual acuity, visual fields, and intraocular pressure, future studies should look to refine the triage process by incorporating these clinical data into the algorithm.



Clinical Relevance Statement

Automated, algorithmic triage tools are becoming increasingly common in health care, both as stand-alone tools and for clinical decision support. This research discusses in detail an approach for designing, implementing, and validating such a tool in ophthalmology with insights to streamline triage of acute ophthalmic complaints. A symptom-based patient-directed automated triage algorithm was able to safely and effectively triage patients based on symptoms, with implications for improving access for patients who require urgent medical care and improving triage for patients with nonurgent concerns.



Multiple-Choice Questions

  1. Why is there a need for automated symptom-based triage tools in ophthalmology?

    a. Inefficient triage for acute symptoms

    b. Nonclinical personnel involved in immediate triage

    c. Suboptimal access and resource utilization

    d. All of the above

    Correct Answer: The correct answer is option d. As highlighted in this article, acute care ophthalmic clinics often suffer from inefficient triage, leading to suboptimal patient access and resource utilization. Automated symptom-based triage tools have been applied in other health care settings to streamline triage but have not yet been applied to urgent care eye clinics.

  2. Which of the following is an important consideration in designing an algorithm for a new automated triage tool?

    a. Creation of new or deviation from existing guideline-directed best practices

    b. Conservative design that minimizes false negatives

    c. Less efficient triage

    d. Speed of launch prioritized over validation and testing of the algorithm

    Correct Answer: The correct answer is option b. Conservative design minimizing false negatives is integral for prioritizing patient safety. This may result in over-referral of patients to live clinicians in situations of ambiguity; however, this conservative approach is integral for the implementation of new tools to ensure that patients are not told to stay home or have delayed follow-up in situations where they should be seen more urgently.

The other answers are incorrect as algorithms supporting automated triage tools should follow existing guideline-directed best practices, improve efficiency without sacrificing safety or accuracy, and undergo extensive prelaunch testing.



Conflict of Interest

None declared.

Note

This study was presented at the Association for Research in Vision and Ophthalmology Meeting 2022.


Ethics Statement

This study was deemed exempt by the University of Pennsylvania Institutional Review Board as it was considered a quality improvement study with no risk or minimal risk to subjects, with all secondary analyses performed on nonidentifiable data.


Authors' Contributions

All authors contributed to the planning, conducting, and reporting of the work described in the article. M.S.R. and E.M. as guarantors for content accept full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish. E.M. attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted. All listed authors have approved the present version of the manuscript.


Protection of Human and Animal Subjects

No interventions were performed on human subjects.


# Co-first authors. Elana Meer and Meera S. Ramakrishnan contributed equally to this work.


References

  • 1 CDC. Social distancing, quarantine, and isolation. CDC.gov, 2020. Accessed April 14, 2023 at: https://www.cdc.gov/quarantine/index.html
  • 2 Saleem SM, Pasquale LR, Sidoti PA, Tsai JC. Virtual ophthalmology: telemedicine in a COVID-19 era. Am J Ophthalmol 2020; 216: 237-242
  • 3 Channa R, Zafar SN, Canner JK, Haring RS, Schneider EB, Friedman DS. Epidemiology of eye-related emergency department visits. JAMA Ophthalmol 2016; 134 (03) 312-319
  • 4 Nari J, Allen LH, Bursztyn LLCD. Accuracy of referral diagnosis to an emergency eye clinic. Can J Ophthalmol 2017; 52 (03) 283-286
  • 5 Docherty G, Hwang J, Yang M. et al. Prospective analysis of emergency ophthalmic referrals in a Canadian tertiary teaching hospital. Can J Ophthalmol 2018; 53 (05) 497-502
  • 6 Deaner JD, Amarasekera DC, Ozzello DJ. et al. Accuracy of referral and phone-triage diagnoses in an eye emergency department. Ophthalmology 2021; 128 (03) 471-473
  • 7 Shen BY, Salman AR, Shah SM. et al. Clinical outcomes following implementation of a formalized “flashes and floaters” emergency department triage protocol. Am J Ophthalmol 2022; 242 (00) 125-130
  • 8 Shen C, Nguyen M, Gregor A, Isaza G, Beattie A. Accuracy of a popular online symptom checker for ophthalmic diagnoses. JAMA Ophthalmol 2019; 137 (06) 690-692
  • 9 Bourdon H, Jaillant R, Ballino A. et al. Teleconsultation in primary ophthalmic emergencies during the COVID-19 lockdown in Paris: experience with 500 patients in March and April 2020. J Fr Ophtalmol 2020; 43 (07) 577-585
  • 10 Chen Y, Ismail R, Cheema MR, Ting DSJ, Masri I. Implementation of a new telephone triage system in ophthalmology emergency department during COVID-19 pandemic: clinical effectiveness, safety and patient satisfaction. Eye (Lond) 2022; 36 (05) 1126-1128
  • 11 Eijk ESV, Bettink-Remeijer MW, Timman R, Busschbach JJV. From pen-and-paper questionnaire to a computer-assisted instrument for self-triage in the ophthalmic emergency department: process and validation. Comput Biol Med 2015; 66: 258-262
  • 12 Eijk ESV, Wefers Bettink-Remeijer M, Timman R, Heres MHB, Busschbach JJV. Criterion validity of a computer-assisted instrument of self-triage (ca-ISET) compared to the validity of regular triage in an ophthalmic emergency department. Int J Med Inform 2016; 85 (01) 61-67
  • 13 Rossi T, Boccassini B, Iossa M, Mutolo MG, Lesnoni G, Mutolo PA. Triaging and coding ophthalmic emergency: the Rome Eye Scoring System for Urgency and Emergency (RESCUE): a pilot study of 1,000 eye-dedicated emergency room patients. Eur J Ophthalmol 2007; 17 (03) 413-417
  • 14 D'Oria F, Bordinone MA, Rizzo T. et al. Validation of a new system for triage of ophthalmic emergencies: the Alphabetical Triage Score for Ophthalmology (ATSO). Int Ophthalmol 2020; 40 (09) 2291-2296
  • 15 Meer EA, Herriman M, Lam D. et al. Design, implementation, and validation of an automated, algorithmic COVID-19 triage tool. Appl Clin Inform 2021; 12 (05) 1021-1028
  • 16 Gerstenblith AT, Rabinowitz MP. The Wills Eye Manual: Office and Emergency Room Diagnosis and Treatment of Eye Disease. Philadelphia, PA: Lippincott Williams & Wilkins; 1990
  • 17 Kubba H. Reading skills of otolaryngology outpatients: implications for information provision. J Laryngol Otol 2000; 114 (09) 694-696
  • 18 Penn Medicine. Penn Medicine Direct. Accessed July 19, 2022 at: https://direct.pennmedicine.org/vision
  • 19 Scanzera AC, Chang AY, Valikodath N. et al. Assessment of a novel ophthalmology tele-triage system during the COVID-19 pandemic. BMC Ophthalmol 2021; 21 (01) 346

Address for correspondence

Elana Meer, MD, MBA
Department of Ophthalmology, University of California San Francisco
490 Illinois Street, San Francisco, CA 94158
United States   
Email: elana.meer@ucsf.edu

Publication History

Received: 30 November 2022

Accepted: 27 March 2023

Accepted Manuscript online:
29 March 2023

Article published online:
07 June 2023

© 2023. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
