DOI: 10.1055/s-0043-1775565
Engaging Multidisciplinary Clinical Users in the Design of an Artificial Intelligence–Powered Graphical User Interface for Intensive Care Unit Instability Decision Support
- Abstract
- Background and Significance
- Objectives
- Methods
- Results
- Discussion
- Conclusion
- Clinical Relevance Statement
- Multiple-Choice Questions
- References
Abstract
Background Critical instability forecast and treatment can be optimized by artificial intelligence (AI)-enabled clinical decision support. It is important that the user-facing display of AI output facilitates clinical thinking and workflow for all disciplines involved in bedside care.
Objectives Our objective is to engage multidisciplinary users (physicians, nurse practitioners, physician assistants) in the development of a graphical user interface (GUI) to present an AI-derived risk score.
Methods Intensive care unit (ICU) clinicians participated in focus groups seeking input on an instability risk forecast presented in a prototype GUI. Two rounds of three stratified focus groups (nurses only, providers only, then combined) were moderated by a focus group methodologist. After round 1, GUI design changes were made and presented in round 2. Focus groups were recorded and transcribed, and deidentified transcripts were independently coded by three researchers. Codes were coalesced into emerging themes.
Results Twenty-three ICU clinicians participated (11 nurses, 12 medical providers [3 mid-level providers and 9 physicians]). Six themes emerged: (1) analytics transparency, (2) graphical interpretability, (3) impact on practice, (4) value of trend synthesis of dynamic patient data, (5) decisional weight (weighing AI output during decision-making), and (6) display location (usability, concerns for patient/family GUI view). Nurses emphasized objective GUI information to support communication and an optimal GUI location, while providers emphasized the need for recommendation interpretability and concern about impairing trainee critical thinking. All disciplines valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until proven trustworthy.
Conclusion Gaining input from all clinical users is important when designing AI-derived GUIs. Results highlight that health care intelligent decisional support system technologies need to be transparent about how they work, easy to read and interpret, and minimally disruptive to current workflow, and that their decisional support components should be used as an adjunct to human decision-making.
Background and Significance
Artificial intelligence (AI) uses computer algorithms to approximate human-like thinking. In health care, AI has been posited to integrate temporally diverse, multidimensional data from numerous sources to predict outcomes and to underpin intelligent decisional support systems (IDSS).[1] Decisional support systems are unbiased and can recognize hidden patterns in large quantities of patient data.[2] This structure is also conceptually referred to as a learning health care system (LHS), a framework suggesting that technology can broaden workflows to improve care (horizon 1), establish links to data and analytics (horizon 2), and be integrated into a digital platform (horizon 3).[1] [3] Although health care IDSS have frequently demonstrated conceptual soundness and internal validation in the research setting, few have achieved successful external and prospective validation.[1] [2] [4] The success of an IDSS is critically dependent upon two elements: (1) the validity and reliability of the model's outcome prediction (back end) and (2) how its derived information is embedded into clinical workflow and presented to clinicians for decisional processing (front end).[5] This user-facing front end is commonly communicated via a graphical user interface (GUI).[6] [7] [8] It is important that the user-facing display of AI output facilitates clinical thinking and workflow for all disciplines involved in bedside care in the intensive care unit (ICU).
Researchers must consider that clinical settings are complex and fast-paced. As they continue to recommend adding new technologies believed to support clinician thinking and decisions into these environments, they also need to consider workflow, human factors, and administrative support.[9] Research teams should challenge themselves to develop GUIs that best translate and communicate IDSS information to multiprofessional clinical end users in order to optimize field testing and usability findings. If interactive GUIs display patient data in ways that are visually effective, stakeholders will respond to GUI outputs more intuitively.[10] We therefore proposed a methodology to obtain input into early GUI design from diverse disciplinary groupings of ICU clinicians with varying levels of responsibility and proximity to patient care.[1] [11] [12] [13]
Instability Model Development
Our study purpose is to describe the use of focus groups and the resultant input from stratified ICU care disciplines into a GUI design. We provide a brief summary of model development for context only, as the model itself is not the immediate focus of this report. We are using machine learning to develop an instability risk score from continuous vital sign data and to identify physiological explanations for the score. The risk score will provide clinicians with an instability forecast based on the patient's trajectory of physiological derangement, as well as a probable cause based on feature pattern recognition. This information will inform treatment recommendations that support proactive, rather than reactive, interventions to mitigate instability risk (early warning system). The instability risk score considers continuous vital sign data collected from the electronic health record (EHR) for each patient. When the trended vital signs fall outside of normal thresholds, the risk for instability increases. Administered medications and fluid boluses are also included in the prediction algorithm to further refine the risk for instability by considering patient response to these common ICU interventions. Our predictive models that differentiate real from artifactual instability, and the development of the risk score, have been described elsewhere.[14] [15] [16]
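To illustrate the general idea of a threshold- and intervention-aware risk estimate, a minimal Python sketch follows. It is not the published machine-learning model (see the cited work for that); the vital sign names, normal ranges, and weighting are illustrative assumptions only.

```python
# Illustrative sketch only: a toy threshold-based instability score, not the
# study's machine-learning model. Names, thresholds, and weights are assumptions.
from dataclasses import dataclass

# Hypothetical normal ranges for a few continuously monitored vital signs.
NORMAL_RANGES = {
    "heart_rate": (60, 100),   # beats/min
    "resp_rate": (12, 20),     # breaths/min
    "sbp": (90, 140),          # systolic blood pressure, mm Hg
    "spo2": (92, 100),         # peripheral oxygen saturation, %
}

@dataclass
class Observation:
    vitals: dict                # e.g., {"heart_rate": 118, "sbp": 84, ...}
    recent_interventions: list  # e.g., ["fluid_bolus", "vasopressor"]

def instability_risk(obs: Observation) -> float:
    """Return a 0-1 risk value: higher when trended vitals sit outside normal
    thresholds, damped slightly when a relevant intervention has already been
    given (a crude stand-in for 'patient response' to the intervention)."""
    score = 0.0
    for name, value in obs.vitals.items():
        lo, hi = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
        if value < lo or value > hi:
            # Weight by how far outside the normal range the value falls.
            span = hi - lo
            score += min(abs(value - (lo if value < lo else hi)) / span, 1.0)
    # Interventions refine the estimate: assume each recent one modestly
    # reduces the raw derangement signal in this toy version.
    score *= 0.85 ** len(obs.recent_interventions)
    return min(score / len(NORMAL_RANGES), 1.0)

# Example: tachycardic, hypotensive patient who just received a fluid bolus.
print(instability_risk(Observation(
    vitals={"heart_rate": 118, "resp_rate": 22, "sbp": 84, "spo2": 95},
    recent_interventions=["fluid_bolus"],
)))
```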
Objectives
Our team continues AI model development using continuously generated physiological monitoring and EHR data. These data are used to predict the risk of future cardiorespiratory instability in critically ill patients and to provide decisional support regarding probable cause and treatment options.[1] [17] While work on algorithm development progresses, we are simultaneously preparing an early prototype GUI to communicate the prediction model output to clinicians before efficacy testing and trial. The objective of this study is to use focus groups to gather multiprofessional ICU clinician input to iteratively design the GUI prototype.
Methods
Study Design
With institutional review board approval, a qualitative focus group study elicited design insights from clinical end users to iteratively develop the GUI prototype. The online focus groups elicited which information types and front-end display features participants felt would support recognition of patient instability risk and of next steps for recommended clinical interventions. Iterative design changes were made until thematic saturation was reached.
Graphical User Interface Prototype
In tandem with model development, we created a GUI prototype that would present risk information to clinicians in the ICU setting. ICU clinicians are often responding to crisis-level patient care needs. Graphical displays that provide up-to-date trended data and draw attention to the most critical details can support faster decision-making.[10] [18] [19] Prior to conducting focus groups, a static GUI prototype display was drafted based on clinical and technical research team member insights. The team hypothesized that the GUI needed to convey impressions of patient instability risk, explanations of physiological contributors to that risk, and treatment recommendations to guide therapeutic decision-making. The purpose of this static GUI prototype was to provide a visual starting place for focus group participants to use while they provided their design change recommendations. The static GUI did not have any interactive components; however, it did have all components that would be present in the interactive version, presented in a static fashion. For example, the static GUI included a status section that displayed fluid responsiveness, arterial tone, and cardiac performance ([Fig. 1]). It also included an action section that provided recommended interventions for the patient given their hemodynamic status, a forecasting index that provided a cardiorespiratory instability risk score with a color gradient, and longitudinally trended vital sign data to further support clinician decision-making.
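As a way to make the described layout concrete, the sketch below represents the four static sections (status, action, forecasting index, and trended vital signs) as simple data structures. All field names and the color cut points are assumptions for illustration, not the prototype's actual implementation.

```python
# Illustrative sketch of the static prototype's four sections as plain data;
# field names and color thresholds are assumptions, not the actual prototype.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StatusSection:
    # Physiological contributors shown to explain the risk estimate.
    fluid_responsiveness: str   # e.g., "likely responsive"
    arterial_tone: str          # e.g., "decreased"
    cardiac_performance: str    # e.g., "adequate"

@dataclass
class ActionSection:
    # Recommended interventions given the current hemodynamic status.
    recommendations: List[str] = field(default_factory=list)

@dataclass
class ForecastIndex:
    risk_score: float           # cardiorespiratory instability risk, 0-1

    def color(self) -> str:
        # Color gradient for the risk gauge (illustrative cut points).
        if self.risk_score < 0.3:
            return "green"
        if self.risk_score < 0.7:
            return "yellow"
        return "red"

@dataclass
class GuiPrototype:
    status: StatusSection
    action: ActionSection
    forecast: ForecastIndex
    # Longitudinally trended vitals as (timestamp, vitals) pairs.
    vital_sign_trend: List[Tuple[str, dict]] = field(default_factory=list)

gui = GuiPrototype(
    status=StatusSection("likely responsive", "decreased", "adequate"),
    action=ActionSection(["consider fluid bolus", "reassess arterial tone"]),
    forecast=ForecastIndex(risk_score=0.64),
    vital_sign_trend=[("10:00", {"heart_rate": 112, "sbp": 88})],
)
print(gui.forecast.color())  # -> "yellow"
```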
Study Participants
Eligible participants were licensed clinicians: registered nurses (RN) or providers with prescribing authority (nurse practitioner [NP], physician assistant [PA], or physicians [DO or MD]) who had experience caring for continuously monitored hospitalized ICU patients.
Setting
This virtual focus group study occurred online and recruited clinicians who worked at a single urban tertiary care center in the northeastern United States. The clinicians worked in acute care settings that included the emergency department and critical care units.
Participant Recruitment
Multiple recruitment methods were employed to maximize clinician participation: (1) Purposive recruitment was accomplished via scripted emails sent to listservs and targeted contact groups (nurses and providers, graduate nursing students from our School of Nursing, and nursing education/research groups); (2) Key stakeholder support was secured to assist with in-person recruitment; (3) Announcements were made at nursing practice councils and during individual hospital unit rounds to advertise study participation opportunities to clinicians; and (4) Snowball sampling was used and participants were encouraged to invite eligible colleagues. Participants could participate in more than one round of focus groups, if desired.
Data Collection
Participants were assigned to focus groups of only peers (nurse only or provider only) and then to hybrid groups (nurses and providers together). This enabled researchers to assess whether peer-only results differed from hybrid group results. Each focus group was facilitated by an expert focus group methodologist using a semistructured moderator guide, with up to three notetakers present. Multiple notetakers helped ensure that notes, transcripts, concepts, candidate codes, and themes accurately and reliably reflected what individual notetakers were qualitatively assessing. Audio-only recordings were collected to validate transcription accuracy, and no identifying participant information was collected. Audio recordings were deleted after the final thematic analysis. Participants were assigned a pseudonym screen name in Zoom (e.g., RN1-1-1). Each focus group lasted approximately 60 minutes, and participants were asked to complete an anonymous demographic survey afterward. During the first round of focus groups, the initial GUI prototype was displayed, and end-user design recommendations were requested from the participants via open-ended questions asked by the facilitator ([Table 1]).
Data Analysis
Analysis proceeded using the automated transcriptions (data) generated from six Zoom audio recordings (two rounds, with three groups [RN, provider, hybrid] in each round) and handwritten notes. After each focus group session, the facilitator and notetakers debriefed to highlight commonly heard conceptual ideas and potential codes for codebook development. Transcripts from each round were coded, reviewed, and analyzed using an inductive content analysis approach.[20] After every focus group round was completed, each notetaker coded one of the three transcripts; each recorded line of transcript was reviewed to identify repetition in concepts, themes, and specific recommendations for GUI technical changes. Next, the three notetakers met to review and adjudicate codes and to cross-check appropriate use of codes. Doubts or disagreements were discussed until agreement was reached (consensus coding).[21] Codes were then categorized, and themes germane to all focus group discussions were derived from these data. After the first round of focus groups, GUI design changes were made to the initial GUI prototype and presented in the next round. Focus groups were repeated until thematic saturation and optimal GUI technical design changes were achieved.
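To make the consensus-coding cross-check concrete, the following is a minimal Python sketch of how disagreements between coders could be flagged for discussion. It assumes, purely for illustration, that more than one coder assigns codes to the same transcript segments; the coder names, line identifiers, and codes shown are hypothetical, and this is not the team's actual tooling.

```python
# Generic illustration of consensus coding, not the study team's tooling:
# flag transcript segments where independent coders assigned different codes.
from collections import defaultdict

def find_disagreements(coder_assignments: dict) -> dict:
    """coder_assignments: {coder_name: {segment_id: code}}.
    Returns {segment_id: {coder_name: code}} for segments whose codes
    are not unanimous, so they can be discussed until consensus."""
    by_segment = defaultdict(dict)
    for coder, assignments in coder_assignments.items():
        for segment_id, code in assignments.items():
            by_segment[segment_id][coder] = code
    return {
        segment_id: codes
        for segment_id, codes in by_segment.items()
        if len(set(codes.values())) > 1
    }

# Example: three coders reviewing the same two (hypothetical) segments.
disputes = find_disagreements({
    "coder_a": {"R1-S12": "transparency", "R1-S13": "interpretability"},
    "coder_b": {"R1-S12": "transparency", "R1-S13": "impact_on_practice"},
    "coder_c": {"R1-S12": "transparency", "R1-S13": "interpretability"},
})
print(disputes)  # only R1-S13 is listed for consensus discussion
```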
Results
Twenty-three participants were recruited (11 RN, 2 NP, 1 PA, 9 MD). A total of 62% of participants were aged 31 to 40 years, 76% were female, and most participants (67%) had ≤ 10 years of clinical experience ([Table 2]). Focus groups occurred over an approximately 2-week period, and [Table 3] shows the participant distribution over the study period. Thematic analysis yielded six themes: (1) analytics transparency; (2) graphical interpretability; (3) impact on practice; (4) value of trend synthesis of dynamic patient data; (5) decisional weight (weighing AI output during decision-making); and (6) display location (usability, concerns for patient/family GUI view). Nurses emphasized objective GUI information to support communication of changing patient condition with providers and an optimal GUI location. Providers emphasized the need for interpretability of IDSS recommendations and concern about impairing trainee critical thinking. Both groups valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until proven trustworthy in testing and practice. Thematic saturation was achieved, and feedback informed two iterative GUI design versions (initial Version 1 and then substantive changes in Versions 2a and 2b).
Table 2

| Variable | N (%), N = 23[a] |
|---|---|
| Age (y) | |
| < 25 | 2 (9) |
| 25–30 | 1 (5) |
| 31–40 | 13 (62) |
| 41–50 | 3 (14) |
| 51–60 | 2 (9) |
| Gender | |
| Female | 16 (76) |
| Male | 5 (24) |
| Professional background | |
| Registered nurse | 11 (48) |
| Physician | 9 (39) |
| Nurse practitioner or physician assistant | 3 (13) |
| Years of experience | |
| 1–5 | 5 (24) |
| 6–10 | 8 (38) |
| 11–15 | 4 (19) |
| 16–20 | 3 (14) |
| > 20 | 1 (5) |

Notes: Focus group participant characteristics across all groups.
a Data were missing for two participants; percentages are based on the valid sample size for each variable.
Table 3

| A total of 23 participants[a] | Group 1 (nurse only) | Group 2 (provider only) | Group 3 (hybrid) |
|---|---|---|---|
| Round 1 | 2 RN | 3 physicians | 2 RN, 3 physicians |
| Time between round 1 and round 2 was used to incorporate GUI design feedback; recommended changes from round 1 were applied so that the most current GUI version was presented in round 2 | | | |
| Round 2 | 4 RN | 1 NP, 1 PA, 2 physicians | 3 RN, 1 NP, 1 physician |

Abbreviations: GUI, graphical user interface; NP, nurse practitioner; PA, physician assistant; RN, registered nurse.
Notes: The table displays the distribution of participants for all focus groups, separated into two rounds. Each round was constructed based on participant availability.
a A total of 23 participants.
Theme 1: Analytics Transparency
Participant comments defining analytics transparency ranged widely. Some participants wanted to ensure that the IDSS would capture the heterogeneity of various patient presentations. Validating the IDSS output against clinician assessment findings and cardiorespiratory monitor vital signs was also valued as a key evidential component. Here the participants acknowledged that the IDSS output is only as valuable and accurate as the data that are put into it. Further emphasis focused on how the IDSS creates the risk prediction score. Comments favored the characterization of fluid responsiveness, arterial tone, and cardiac performance, as the clinicians would then have some insight about why the IDSS was making a particular risk prediction (avoidance of the “black box” phenomenon). The spectrum of insights included desires to see the highest level of evidence/transparency, derived from randomized clinical trial results, before any meaningful use would be enacted. [Table 4] includes themes, codes, direct participant quotes, and GUI design changes applied.
Theme 1 Specific Graphical User Interface Design Changes
Participants requested a scroll bar feature and a 3-day view option to appreciate longer-term trends, including changes in vital signs, risk scores, and interventions applied.
Theme 2: Graphical Interpretability
This theme reflects participant desires to influence the end-user design “friendliness” of the technology. Comments showed that participants could imagine themselves actively using the GUI in their clinical environments. Many participants were displeased with the recommended “Action” section of the initial GUI prototype that was presented to them. They felt that their professional autonomy would be disrupted if they were asked to follow the IDSS “Actions” rather than their own clinical judgement. Other technical design considerations focused on the “Status” section of the GUI. Participants struggled to understand arrow directionality and its meaning for the listed clinical indicators (fluid responsiveness, arterial tone, and cardiac performance). Participants also shared requests for simplicity; the initial GUI prototype was noted to be visually busy. Overall, comments were positive, and the clinical participants mentioned unforeseen benefits such as: (1) using the GUI for telemedicine purposes and (2) appreciation for the communication of all pertinent data in one place. [Fig. 1] represents the initial GUI prototype (Version 1) presented to round 1 participants.
Theme 2 Specific Graphical User Interface Design Changes
Theme 2 captured a substantial number of design changes requested by participants that improved GUI interpretability ([Figs. 2A] and [B]). For GUI prototype Version 2 specifically, much focus was oriented toward the addition of the “Intervention” section. Participants requested a hover-and-discover feature, a “last updated” indicator for medications and fluids administered, as well as the ability for clinicians to customize which medications are visualized on the GUI screen. It was evident that our clinical participants were invested and that their interests aligned with the research team's desire to lessen patient harm by recognizing and intervening on patient instability sooner rather than later.
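A minimal sketch of how the requested round-2 display options could be represented as a configuration object follows; the option and field names are assumptions for illustration, not the prototype's real schema.

```python
# Illustrative sketch of the round-2 display options requested by participants;
# option names are assumptions, not the prototype's real configuration schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class InterventionDisplayPrefs:
    hover_to_discover: bool = True     # show details only on hover to reduce clutter
    show_last_updated: bool = True     # indicator for medications/fluids administered
    visible_medications: List[str] = field(default_factory=list)  # clinician-customized list
    last_updated: Optional[datetime] = None

prefs = InterventionDisplayPrefs(
    visible_medications=["norepinephrine", "propofol"],
    last_updated=datetime(2023, 3, 15, 10, 45),
)
print(prefs.show_last_updated, prefs.visible_medications)
```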
Theme 3: Impact on Practice
Both nurses and providers shared concerns about the influential impact of the GUI on novice clinicians. Concepts ranged from the GUI helping to hindering novice practice. For example, positive commentary included the benefit of the GUI helping newer clinicians to recognize patient deterioration before a crisis event. This idea also aligned with gaining the confidence to call for help, as the novice would now have an objective measure to report. However, conflicting comments focused on how the GUI could hinder the development of critical thinking skills and how novice clinicians might overly rely on the IDSS output without thinking for themselves. More advanced clinicians felt that the GUI could provide the evidence and confidence needed to back up their clinical intuition when patient deterioration is looming but vital sign changes are delayed in presentation. The GUI was also referred to as a communication tool for nurses to use when summarizing and translating their patient impression to medical team members when escalating a concern, as well as during hand-off and shift change. Several barriers were noted. Participants were concerned about the following: (1) AI technologies taking over human-oriented tasks; (2) the length of time it took to explain how to use the GUI, which raised worries about feasibility in the clinical environment (explanations for use will have to be < 5 minutes); (3) prioritizing their workload (will the GUI create additional work?); and (4) excessive financial cost (will this technology actually provide a benefit to the patient and not just add to existing expenses?).
Theme 3 Specific Graphical User Interface Design Changes
There were no specific changes for this theme.
Theme 4: Value of Trend Synthesis of Dynamic Patient Data
Integration of vital sign data and medical interventions side-by-side in a real-time view was repeatedly cited as an extremely popular key feature of the GUI. Clinicians commented on the time this design detail could potentially save. They imagined having this integrated information available while assessing their patient during a period of instability and noted how unique this view is. Currently, clinicians collate disparate data from many different locations in the medical record. Although data may be centralized in a single medical record, clinicians still toggle back and forth between different flowsheets to find the data needed to paint an all-inclusive view of their patient. This takes time and attention to detail, and during a patient emergency this level of specificity is not innately feasible. Overall, presenting patient status information (vital signs) alongside completed medical interventions (e.g., medication administration, fluid boluses) was perceived as a benefit for participant clinical practice: an advantage that could potentially save time, improve workflow, and decrease workload in already demanding clinical environments.
Theme 4 Specific Graphical User Interface Design Changes
One change was requested for this theme: the addition of a clock or timestamp next to the “Recommendation” section to show the “last updated” time of the IDSS-recommended intervention performed.
Theme 5: Decisional Weight (Weighing Artificial Intelligence Output during Decision-Making)
Participants were reluctant to view the IDSS predictions communicated through the GUI as an initial alerting system. They imagined that their own clinical decision-making and recognition processes would occur first, as if they already knew patient instability was occurring. The IDSS predictions communicated through the GUI would be used as an adjunct to validate their decisions driven by intuition or their “gut-feelings.” Interestingly, in a scenario where the clinician did not know what was driving patient instability or what the appropriate intervention should be, the IDSS predictions communicated through the GUI would be used as a first-pass “consultant” and not as validation for decision-making. This theme shares some overlap with themes 2 and 3 (graphical interpretability and impact on practice), where participants clearly stated concerns about allotting any portion of their autonomous clinical decision-making to an AI technology. Much work will need to focus on key stakeholder buy-in and clinical translation at the bedside for successful and meaningful use to occur.
Theme 5 Specific Graphical User Interface Design Changes
There were no specific changes for this theme.
Theme 6: Display Location (Usability, Concerns for Patient/Family Graphical User Interface View)
Very thoughtful comments supported the emergence of theme 6. The focus group semistructured moderator guide did not prompt specific comments about patient family member considerations for the GUI display or location. Rather, these comments evolved organically, and participants spent considerable time and attention in this space. Participants reflected on how observant families are while supporting their loved one at the bedside. Concerns were voiced about adding yet another piece of technology to the repertoire of bedside health care technologies, especially one that provides medical intervention recommendations and an instability risk score. Clinicians were concerned about how the GUI could encourage family member distrust if the providers or nurses did not follow through with the IDSS recommendations. Although families are savvy, without lengthy nursing or medical training they will be limited in their ability to understand why a clinician may or may not follow through with an IDSS prediction/recommendation.
Theme 6 Specific Graphical User Interface Design Changes
One design change was requested for this theme: a blackout feature for the right side of the screen, where the “Recommendation” section is located, to prevent overwhelming families with complex medical information. The blackout feature would simply hide this side of the GUI screen but could be opened by a clinician who needed to review current GUI information.
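A minimal sketch of the requested blackout behavior follows, assuming a simple clinician/family viewer distinction; the class and field names are illustrative, not the prototype's implementation.

```python
# Illustrative sketch of the requested "blackout" behavior: the recommendation
# panel is hidden from family-facing views but can be opened by a clinician.
from dataclasses import dataclass

@dataclass
class RecommendationPanel:
    content: str
    hidden: bool = True  # blacked out unless a clinician opens it

    def render(self, viewer_is_clinician: bool) -> str:
        # Family-facing view stays blank while the panel is blacked out;
        # a clinician can still open and review the current content.
        if self.hidden and not viewer_is_clinician:
            return "[panel hidden]"
        return self.content

panel = RecommendationPanel("Recommended: 500 mL crystalloid bolus")
print(panel.render(viewer_is_clinician=False))  # family view -> "[panel hidden]"
print(panel.render(viewer_is_clinician=True))   # clinician view -> full content
```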
Discussion
Six themes emerged from multidisciplinary focus groups: (1) analytics transparency; (2) graphical interpretability; (3) impact on practice; (4) value of trend synthesis of dynamic patient data; (5) decisional weight (weighing AI output during decision-making); and (6) display location (usability, concerns for patient/family GUI view). These themes represented the concepts that clinicians focused on the most, and although they are distinct, they share some overlap.
The multidisciplinary overlapping areas were: (1) Themes 2, 3, and 5 all captured the notion that clinician autonomy should be preserved when interpreting GUI graphics, performing clinical practice duties, and especially when making decisions about patient instability risk and applying appropriate interventions. (2) Participants shared common opinions about the perceived benefit of vital sign and current interventions integration; this sentiment crossed over between themes 2 and 4. This makes sense as theme 4 was solely dedicated to dynamic patient data integration and theme 2 focused on graphical interpretability. (3) Clinical intuition also appeared in more than one theme. Themes 3 and 5 reflected comments about the value of intuition, both from a clinical practice and decisional weight standpoint.
Findings from our study were encouraging and enabled us to synthesize other investigators' results to identify commonalities and differences. Langkjaer et al sought to discover nurses' experiences with early warning systems embedded within their EHR.[22] Ultimately, nurses found these tools to be inflexible but useful in recognizing patient deterioration. That study shared findings similar to ours. When stratified, nurses greatly appreciated the shared language with physicians, which could promote enhanced communication. Risk scoring systems were thought to be helpful for novice nurses and not as helpful for experienced nurses. Interestingly, the scoring systems in the Langkjaer et al study were viewed to have improved patient deterioration detection when integrated with the nurses' “gut-feeling” (intuition) that a serious adverse event was about to happen. Similarities were further validated by a study performed by McParland et al, which specifically focused on “gut-instinct” and the importance of being able to depart from differential diagnosis decision support system recommendations.[23] These findings underscore the premise and potential of AI in health care: to serve as a tool that can augment, not supplant, the skills and expertise of health care providers.[24] Participants in our study were very much interested in maintaining their clinical decision autonomy as well.
Other commonalities between our study and others included clinician concerns that there could be overreliance on the IDSS and a decline in independent critical thinking, potentially leading to clinicians missing other important patient-oriented information.[6] [22] [23] Additionally, scoring systems, early warning systems, or graphical displays should allow clinicians to tailor or customize what they are visualizing. This could mean customizing individual patient profiles or the clinical information they wish to view.[1] [6] [22] [25] [26] [27]
Low levels of clinician trust in AI performance and a desire for algorithmic transparency have been identified as major barriers to the adoption and effective use of AI tools in health care,[1] [28] [29] [30] and these concerns were strongly voiced by our multidisciplinary participants as well. Our findings underscore recommendations by the National Academy of Medicine to incorporate instruction on how to appropriately assess and use AI tools into health care professional training programs and into continuing education for current practitioners.[11] As health care knowledge and patient-generated data continue to grow exponentially, health care providers must be equipped to critically appraise AI tools and then integrate and leverage the insights they provide into management and treatment decisions.[24] [31]
Next, there were findings in recent literature not found in our study. Researchers performed a focus group study to gain insight into physicians', advanced practice nurses', and the general public's perceptions of a differential diagnosis decision support system technology intended for primary care use.[23] Clinician comments not heard in our study were oriented toward litigation. Participants voiced concerns about overriding the differential diagnosis decision support system and the risk of future litigation. Although the participants in our study did not mention concerns about litigation, they did make comments about overriding the IDSS and damaging trusting relationships between themselves and their patients' family members.
These comments progressed into a full theme (theme 6) not appreciated in the current literature: IDSS clinical practice considerations for patient family members. Some family members spend entire days at the bedside, and it should be expected that they would pay attention to all of the graphics and alarms produced by health care technologies. We also encourage family member involvement in patient care now that we know that patient outcomes improve with family presence at the bedside.[32] [33] Our participants spent thoughtful time considering family members and the implications of their seeing clinician interaction with the GUI (i.e., following through with recommended interventions or not, and visualizing a constant risk score). Participants were concerned not only about negatively impacting the dynamic medical/nursing team and patient/family relationship, but also about generating unnecessary and compounding anxieties that could be derived from a continuously presented instability risk score. These concerns were coupled with apprehensions about the feasibility of clinicians learning how to use the GUI and its impact on existing workloads. This further highlights the fact that the value of information offered by an IDSS can be nullified by the disruption caused by that system. Researchers should consider these findings when planning for IDSS implementation. In the future, patients and family members should be invited to participate in focus group studies so that researchers can qualitatively assess how they might interact with these new decisional support systems. Although the IDSS is intended for clinical use, this study importantly shows that patient and family member input needs to be considered before the implementation phase occurs.
Strengths and Limitations
Strengths of this study included the recruitment of multiple professional disciplines to capture commonalities and differences in viewpoints according to care roles. Our study design expands on current literature, as we recruited both nurses and physicians in a single study and stratified the focus groups to solicit disciplinary input singly and combined. We recruited all disciplines who would interact with the GUI: nurses, nurse practitioners, physicians, and physician assistants. Critical care medicine providers work together in dynamic teams where each member has something unique to offer, and yet they all share commonalities relative to care delivery. Their multidisciplinary input was absolutely necessary for this work. The critical care setting is also unique in health care delivery, and our approach could serve as a template for assessing what is valued by health care professionals operating in a highly time-critical, high-stakes, and data-rich environment. As an added benefit, we also had computer scientist partnerships that complemented this clinician-driven research.
Weaknesses include viewpoints limited to a single center. Generalizability is limited in this early design phase, but as the GUI is further evaluated (1:1 usability sessions and field testing), feedback from clinicians at additional clinical sites, practice areas, and health systems will be targeted. Nevertheless, the information acquired in this early phase was very helpful in making numerous design changes to our prototype GUI even before we begin single-center clinical field testing. By involving end users in design at this preliminary stage, efficacy testing may go more smoothly through improved user-friendliness and the elimination of some design barriers in advance.
Conclusion
Although many IDSS technologies fail, rarely is such failure due to technological flaws. Instead, IDSS technologies mainly fail due to lack of consideration for human interaction elements (trust, usability, and organizational/clinical workflow) in IDSS design and implementation processes. We found that engaging multidisciplinary clinicians early in iterative IDSS development was helpful for identifying the diverse insights needed to support human-centered design for all eventual users, especially factors associated with AI acceptance. These findings are critically important for helping researchers design tools that will be accepted by the multidisciplinary clinical workforce to optimally leverage potential AI benefits.
Clinical Relevance Statement
Early development clinical opinion highlighted that health care IDSS technologies need to be transparent about how they work, easy to read and interpret, and designed to facilitate rather than disrupt workflow, and that decisional support components need to be used as a supplement to, not a replacement for, human decision-making. Every clinical environment is nuanced; leveraging frontline clinical input should be a top priority for leadership teams who drive institutional change. Utilizing robust qualitative focus group methods with all disciplinary end users made it possible for researchers to discover these clinically relevant details, which will be applied to future GUI display modifications and implementation.
Multiple-Choice Questions
1. Some themes shared overlap in this study; of note were multidisciplinary clinician concerns for loss of their professional autonomy if asked to implement and use a bedside IDSS. What specific request was made regarding changes to the GUI to remedy this concern?

   a. Change the “Action” section
   b. Change the “Status” section
   c. Change the “Forecast” section
   d. Change the “Vitals” section

   Correct Answer: The correct answer is option a. Many participants were displeased with the recommended “Action” section of the initial GUI prototype that was presented to them. They felt that their professional autonomy would be disrupted if they were asked to follow the IDSS “Actions” and not their own clinical judgement.

2. What are two concrete concerns mentioned by the clinician participants related to family presence at the bedside where an IDSS would be in place?

   a. Family members might press buttons on the GUI and might inadvertently turn off the GUI screen
   b. Clinicians following through with recommended interventions or not, and patients/families visualizing a constant risk score
   c. Alarm fatigue and decreased anxiety
   d. Family members might video record the GUI screen and consider litigation

   Correct Answer: The correct answer is option b. Our participants spent thoughtful time considering family members and the implications of seeing clinician interaction with the GUI (i.e., following through with recommended interventions or not and visualizing a constant risk score). Participants were concerned not only about negatively impacting the dynamic medical/nursing team and patient/family relationship, but they were also concerned about generating unnecessary and compounding anxieties that could be derived from a continuously presented instability risk score.
Conflict of Interest
None declared.
Acknowledgments
We would like to thank the clinician participants for their time and commitment to improving patient safety and outcomes. Their time away from patient care is so very appreciated. Without their contributions, useful bedside technologies would not be possible to design and deploy in the clinical environment.
Protection of Human and Animal Subjects
The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the Institutional Review Board.
References
- 1 Lim HC, Austin JA, van der Vegt AH. et al. Toward a learning health care system: a systematic review and evidence-based conceptual framework for implementation of clinical analytics in a digital hospital. Appl Clin Inform 2022; 13 (02) 339-354
- 2 Helman SM, Herrup EA, Christopher AB, Al-Zaiti SS. The role of machine learning applications in diagnosing and assessing critical and non-critical CHD: a scoping review. Cardiol Young 2021; 31 (11) 1770-1780
- 3 Sullivan C, Staib A, McNeil K, Rosengren D, Johnson I. Queensland digital health clinical charter: a clinical consensus statement on priorities for digital health in hospitals. Aust Health Rev 2020; 44 (05) 661-665
- 4 Patel VL, Shortliffe EH, Stefanelli M. et al. The coming of age of artificial intelligence in medicine. Artif Intell Med 2009; 46 (01) 5-17
- 5 Shortliffe EH, Sepúlveda MJ. Clinical decision support in the era of artificial intelligence. JAMA 2018; 320 (21) 2199-2200
- 6 Calzoni L, Clermont G, Cooper GF, Visweswaran S, Hochheiser H. Graphical presentations of clinical data in a learning electronic medical record. Appl Clin Inform 2020; 11 (04) 680-691
- 7 Cannesson M, Hofer I, Rinehart J. et al. Machine learning of physiological waveforms and electronic health record data to predict, diagnose and treat haemodynamic instability in surgical patients: protocol for a retrospective study. BMJ Open 2019; 9 (12) e031988
- 8 Helman S, Terry MA, Pellathy T. et al. Engaging clinicians early during the development of a graphical user display of an intelligent alerting system at the bedside. Int J Med Inform 2022; 159: 104643
- 9 Porter A, Dale J, Foster T, Logan P, Wells B, Snooks H. Implementation and use of computerised clinical decision support (CCDS) in emergency pre-hospital care: a qualitative study of paramedic views and experience using strong structuration theory. Implement Sci 2018; 13 (01) 91
- 10 Fareed N, Swoboda CM, Chen S, Potter E, Wu DTY, Sieck CJ. U.S. COVID-19 state government public dashboards: an expert review. Appl Clin Inform 2021; 12 (02) 208-221
- 11 Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication Washington, DC: National Academy of Medicine; 2019: 154
- 12 Bersani K, Fuller TE, Garabedian P. et al. Use, perceived usability, and barriers to implementation of a patient safety dashboard integrated within a vendor EHR. Appl Clin Inform 2020; 11 (01) 34-45
- 13 Merkel MJ, Edwards R, Ness J. et al. Statewide real-time tracking of beds and ventilators during coronavirus disease 2019 and beyond. Crit Care Explor 2020; 2 (06) e0142
- 14 Chen L, Ogundele O, Clermont G, Hravnak M, Pinsky MR, Dubrawski AW. Dynamic and personalized risk forecast in step-down units. implications for monitoring paradigms. Ann Am Thorac Soc 2017; 14 (03) 384-391
- 15 Yoon JH, Mu L, Chen L. et al. Predicting tachycardia as a surrogate for instability in the intensive care unit. J Clin Monit Comput 2019; 33 (06) 973-985
- 16 Yoon JH, Jeanselme V, Dubrawski A, Hravnak M, Pinsky MR, Clermont G. Prediction of hypotension events with physiologic vital sign signatures in the intensive care unit. Crit Care 2020; 24 (01) 661
- 17 Barnett A, Winning M, Canaris S, Cleary M, Staib A, Sullivan C. Digital transformation of hospital quality and safety: real-time data for real-time action. Aust Health Rev 2019; 43 (06) 656-661
- 18 Limousin P, Azzabi R, Berge L, Dubois H, Truptil S, Gall LL. How to build dashboards for collecting and sharing relevant informations to the strategic level of crisis management: an industrial use case. 2019 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM). 2019:1–8
- 19 Dowding D, Randell R, Gardner P. et al. Dashboards for improving patient care: review of the literature. Int J Med Inform 2015; 84 (02) 87-100
- 20 Kyngäs H. Inductive Content Analysis. The Application of Content Analysis in Nursing Science Research. Springer; 2020: 13-21
- 21 Kurtzman G, Dine J, Epstein A. et al. Internal medicine resident engagement with a laboratory utilization dashboard: mixed methods study. J Hosp Med 2017; 12 (09) 743-746
- 22 Langkjaer CS, Bove DG, Nielsen PB, Iversen KK, Bestle MH, Bunkenborg G. Nurses' experiences and perceptions of two early warning score systems to identify patient deterioration-a focus group study. Nurs Open 2021; 8 (04) 1788-1796
- 23 McParland CR, Cooper MA, Johnston B. Differential diagnosis decision support systems in primary and out-of-hours care: a qualitative analysis of the needs of key stakeholders in Scotland. J Prim Care Community Health 2019; 10: 2150132719829315
- 24 Lomis KP, Jeffries A, Palatta M. , et al. Artificial Intelligence for Health Professions Educators. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC; 2021
- 25 Fletcher GS, Aaronson BA, White AA, Julka R. Effect of a real-time electronic dashboard on a rapid response system. J Med Syst 2017; 42 (01) 5
- 26 Schall Jr MC, Cullen L, Pennathur P, Chen H, Burrell K, Matthews G. Usability evaluation and implementation of a health information technology dashboard of evidence-based quality indicators. Comput Inform Nurs 2017; 35 (06) 281-288
- 27 Franklin A, Gantela S, Shifarraw S. et al. Dashboard visualizations: supporting real-time throughput decision-making. J Biomed Inform 2017; 71: 211-221
- 28 Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020; 323 (06) 509-510
- 29 Mlaver E, Schnipper JL, Boxer RB. et al. User-centered collaborative design and development of an inpatient safety dashboard. Jt Comm J Qual Patient Saf 2017; 43 (12) 676-685
- 30 Paulson SS, Dummett BA, Green J, Scruth E, Reyes V, Escobar GJ. What do we do after the pilot is done? Implementation of a hospital early warning system at scale. Jt Comm J Qual Patient Saf 2020; 46 (04) 207-216
- 31 Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA J Ethics 2019; 21 (02) E146-E152
- 32 Strathdee SA, Hellyar M, Montesa C, Davidson JE. The power of family engagement in rounds: an exemplar with global outcomes. Crit Care Nurse 2019; 39 (05) 14-20
- 33 Goldfarb MJ, Bibas L, Bartlett V, Jones H, Khan N. Outcomes of patient-and family-centered care interventions in the ICU: a systematic review and meta-analysis. Crit Care Med 2017; 45 (10) 1751-1761
Publication History
Received: 15 March 2023
Accepted: 26 July 2023
Article published online: 04 October 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany