Appl Clin Inform 2023; 14(04): 789-802
DOI: 10.1055/s-0043-1775565
Research Article

Engaging Multidisciplinary Clinical Users in the Design of an Artificial Intelligence–Powered Graphical User Interface for Intensive Care Unit Instability Decision Support

Stephanie Helman
1   Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
,
Martha Ann Terry
2   Department of Behavioral and Community Health Sciences, Graduate School of Public Health, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
,
Tiffany Pellathy
3   Veterans Administration Center for Health Equity Research and Promotion, Pittsburgh, Pennsylvania, United States
,
Marilyn Hravnak
1   Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
,
Elisabeth George
4   Department of Nursing, University of Pittsburgh Medical Center, Presbyterian Hospital, Pittsburgh, Pennsylvania, United States
,
Salah Al-Zaiti
1   Department of Acute and Tertiary Care Nursing, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
5   Department of Emergency Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
6   Division of Cardiology at University of Pittsburgh, Pittsburgh, Pennsylvania, United States
,
Gilles Clermont
7   Department of Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Funding U.S. Department of Health and Human Services; National Institutes of Health; National Institute of Nursing Research (5F31NR019725-02).
 

Abstract

Background Forecasting and treatment of critical instability can be optimized by artificial intelligence (AI)-enabled clinical decision support. It is important that the user-facing display of AI output facilitates clinical thinking and workflow for all disciplines involved in bedside care.

Objectives Our objective is to engage multidisciplinary users (nurses, physicians, nurse practitioners, and physician assistants) in the development of a graphical user interface (GUI) to present an AI-derived risk score.

Methods Intensive care unit (ICU) clinicians participated in focus groups seeking input on an instability risk forecast presented in a prototype GUI. Two stratified rounds of three focus groups each (nurses only, providers only, then combined) were moderated by a focus group methodologist. After round 1, GUI design changes were made and presented in round 2. Focus groups were recorded and transcribed, and deidentified transcripts were independently coded by three researchers. Codes were coalesced into emerging themes.

Results Twenty-three ICU clinicians participated (11 nurses, 12 medical providers [3 mid-level providers and 9 physicians]). Six themes emerged: (1) analytics transparency, (2) graphical interpretability, (3) impact on practice, (4) value of trend synthesis of dynamic patient data, (5) decisional weight (weighing AI output during decision-making), and (6) display location (usability, concerns for patient/family GUI view). Nurses emphasized objective GUI information to support communication and the importance of optimal GUI location, while providers emphasized the need for recommendation interpretability and concern about impairing trainee critical thinking. All disciplines valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until proven trustworthy.

Conclusion Gaining input from all clinical users is important when designing AI-derived GUIs. Results highlight that health care intelligent decisional support system technologies need to be transparent about how they work, easy to read and interpret, and minimally disruptive to current workflow, and that their decisional support components need to be used as an adjunct to human decision-making.



Background and Significance

Artificial intelligence (AI) uses computer algorithms to resemble human-like thinking. In health care, AI has been posited to integrate temporally diverse, multidimensional data from numerous sources to predict outcomes and underpin intelligent decisional support systems (IDSS).[1] Decisional support systems can apply criteria consistently and recognize hidden patterns in large quantities of patient data.[2] This structure is also conceptually referred to as a learning health care system (LHS), a framework suggesting that technology can broaden workflows to improve care (horizon 1), establish links to data and analytics (horizon 2), and be integrated into a digital platform (horizon 3).[1] [3] Although health care IDSS have frequently demonstrated conceptual soundness and internal validity in the research setting, few have achieved successful external and prospective validation.[1] [2] [4] The success of an IDSS is critically dependent on two elements: (1) the validity and reliability of the model's outcome prediction (back end) and (2) how its derived information is embedded into clinical workflow and presented to clinicians for decisional processing (front end).[5] This user-facing front end is commonly communicated via a graphical user interface (GUI).[6] [7] [8] It is important that the user-facing display of AI output facilitates clinical thinking and workflow for all disciplines involved in bedside care in the intensive care unit (ICU).

Researchers must consider that clinical settings are complex and fast-paced. As they continue to recommend adding new technologies believed to support clinician thinking and decisions into these environments, they also need to consider workflow, human factors, and administrative support.[9] Research teams should challenge themselves to develop GUIs that best translate and communicate IDSS information to multiprofessional clinical end users, optimizing subsequent field testing and usability findings. If interactive GUIs display patient data in visually effective ways, stakeholders will respond to GUI outputs more intuitively.[10] We therefore proposed a methodology to obtain input into early GUI design from diverse disciplinary groupings of ICU clinicians with varying levels of responsibility and proximity to patient care.[1] [11] [12] [13]

Instability Model Development

Our study purpose is to describe the use of focus groups and the resultant input from stratified ICU care disciplines into a GUI design. We provide a brief summary of model development for context only, as this is not the immediate focus of this report. We are using machine learning to develop an instability risk score from continuous vital sign data and to identify physiological explanations for the score. The risk score will provide clinicians with an instability forecast based on the patient's trajectory of physiological derangement, as well as probable cause based on feature pattern recognition. This information will inform treatment recommendations that support proactive, rather than reactive, interventions to mitigate instability risk (early warning system). The instability risk score considers continuous vital sign data collected from the electronic health record (EHR) for each patient. When the trended vital signs fall outside of normal thresholds, the risk for instability increases. Administered medications and fluid boluses are also included in the prediction algorithm to further refine the instability risk by considering patient response to these common ICU interventions. Our predictive models, including those that differentiate between real and artifactual instability, and the development of the risk score have been described elsewhere.[14] [15] [16]
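
To make the logic of this description concrete, the following is a minimal, hypothetical Python sketch of a threshold-style instability score with an intervention adjustment. It is not the study's algorithm (the published models are machine-learning based and described elsewhere[14] [15] [16]); all names, normal ranges, weights, and the adjustment factor are illustrative assumptions only.

```python
# Schematic illustration only: the study's risk score comes from machine-learning
# models described elsewhere [14-16]. All thresholds, weights, and names here are
# hypothetical, chosen to show the general idea described above: trended vital signs
# drifting outside normal ranges raise risk, tempered by recent interventions.

from dataclasses import dataclass
from typing import List


@dataclass
class VitalSample:
    heart_rate: float    # beats/min
    systolic_bp: float   # mm Hg
    spo2: float          # %


# Hypothetical "normal" ranges; a learned model would derive these relationships from data.
NORMAL = {"heart_rate": (60, 100), "systolic_bp": (90, 160), "spo2": (92, 100)}


def _excursion(value: float, low: float, high: float) -> float:
    """Return 0 inside the normal range, otherwise the fractional distance outside it."""
    if value < low:
        return (low - value) / low
    if value > high:
        return (value - high) / high
    return 0.0


def instability_risk(trend: List[VitalSample],
                     recent_fluid_bolus: bool,
                     recent_vasoactive_med: bool) -> float:
    """Toy risk score in [0, 1]: grows as trended vitals drift outside thresholds,
    and is modestly reduced if the patient has just received a common intervention."""
    if not trend:
        return 0.0
    per_sample = []
    for s in trend:
        score = (_excursion(s.heart_rate, *NORMAL["heart_rate"])
                 + _excursion(s.systolic_bp, *NORMAL["systolic_bp"])
                 + _excursion(s.spo2, *NORMAL["spo2"]))
        per_sample.append(score)
    # Weight recent samples more heavily so the score tracks the patient's trajectory.
    weights = [i + 1 for i in range(len(per_sample))]
    raw = sum(w * s for w, s in zip(weights, per_sample)) / sum(weights)
    # Hypothetical adjustment for patient response to recent interventions.
    if recent_fluid_bolus or recent_vasoactive_med:
        raw *= 0.8
    return min(raw, 1.0)
```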



Objectives

Our team continues AI model development using continuously generated physiological monitoring and EHR data. These data are used to predict the risk of future cardiorespiratory instability in critically ill patients and to provide decisional support regarding probable cause and treatment options.[1] [17] While work on algorithm development progresses, we are simultaneously preparing an early prototype GUI to communicate the prediction model output to clinicians before efficacy testing and trial. The objective of this study is to use focus groups to gather multiprofessional ICU clinician input to iteratively design the GUI prototype.



Methods

Study Design

With institutional review board approval, a qualitative focus group study elicited design insights from clinical end users to iteratively develop the GUI prototype. Details garnered from the online focus groups included the types of information and front-end displays felt to support recognition of patient instability risk and recommended next clinical interventions. Iterative design changes were made until thematic saturation was reached.



Graphical User Interface Prototype

In tandem with model development, we created a GUI prototype that would present risk information to clinicians in the ICU setting. ICU clinicians are often responding to crisis-level patient care needs, and graphical displays that provide up-to-date trended data and draw attention to the most critical details can support faster decision-making.[10] [18] [19] Prior to conducting focus groups, a static GUI prototype display was drafted based on clinical and technical research team member insights. The team hypothesized which GUI information would be needed to inform impressions of patient instability risk, explain physiological contributors to that risk, and offer treatment recommendations to guide therapeutic decision-making. The purpose of this static GUI prototype was to provide a visual starting place for focus group participants to use while they provided their design change recommendations. The static GUI did not have any interactive components; however, it did have all components that would be present in the interactive version, presented in a static fashion. For example, the static GUI included a status section that displayed fluid responsiveness, arterial tone, and cardiac performance ([Fig. 1]). It also included an action section providing recommended interventions given the patient's hemodynamic status, a forecasting index providing a cardiorespiratory instability risk score with a color gradient, and longitudinally trended vital sign data to further support clinician decision-making.

Fig. 1 Initial graphical user interface (version 1) prototype presented to round 1 participants. The “Forecast” risk score displayed on the right side of the figure represents an individual's current relative risk for instability, and the dotted line shows the future risk trajectory projected for the next half hour. Under the “Status” heading on the left side of the screen, the colored ribbons are meant to provide clinicians with awareness of physiological states that may be contributing to instability. Green reflects no instability risk, yellow indicates mild instability risk (patient evaluation warranted, but the risk is not urgent), and red signifies risk for serious instability requiring urgent intervention and perhaps escalation of care. Finally, the bottom area of the screen recommends “Actions” based upon the patient's current and projected state and is revised at 5-minute intervals.
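
As a simple illustration of the color coding described in this caption, the sketch below maps a normalized risk score to the green/yellow/red ribbon bands. The 0-to-1 score scale and the cut points are assumptions for illustration only; the prototype's actual scale and thresholds are not specified in the text.

```python
# Minimal sketch of the color mapping described in the Fig. 1 caption: green for no
# instability risk, yellow for mild risk, red for serious risk. The cut points below
# are hypothetical.

def ribbon_color(risk_score: float) -> str:
    """Map a normalized instability risk score to a display color band."""
    if risk_score < 0.3:      # hypothetical cut point
        return "green"        # no instability risk
    if risk_score < 0.7:      # hypothetical cut point
        return "yellow"       # mild risk: evaluation warranted, not urgent
    return "red"              # serious risk: urgent intervention, possible escalation

# The caption notes that recommended "Actions" are revised at 5-minute intervals; a
# display loop would therefore re-query the model and refresh the ribbon on that cadence.
```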


Study Participants

Eligible participants were licensed clinicians: registered nurses (RNs) or providers with prescribing authority (nurse practitioners [NPs], physician assistants [PAs], or physicians [DO or MD]) who had experience caring for continuously monitored hospitalized ICU patients.



Setting

This focus group study occurred online and recruited clinicians who worked at a single urban tertiary care center in the northeastern United States. The clinicians worked in acute care settings that included the emergency department and critical care units.



Participant Recruitment

Multiple recruitment methods were employed to maximize clinician participation: (1) Purposive recruitment was accomplished via scripted emails sent to listservs and targeted contact groups (nurses and providers, graduate nursing students from our School of Nursing, and nursing education/research groups); (2) Key stakeholder support was secured to assist with in-person recruitment; (3) Announcements were made at nursing practice councils and during individual hospital unit rounds to advertise study participation opportunities to clinicians; and (4) Snowball sampling was used and participants were encouraged to invite eligible colleagues. Participants could participate in more than one round of focus groups, if desired.



Data Collection

Participants were assigned to focus groups of only peers (nurse only or provider only) and then to hybrid groups (nurses and providers together). This enabled researchers to assess whether peer-only results differed from hybrid group results. Each focus group was facilitated by an expert focus group methodologist using a semistructured moderator guide and was supported by up to three notetakers, whose notes were used to ensure that transcripts, concepts, candidate codes, and themes accurately and reliably reflected the discussions. Audio-only recordings were collected to validate transcription accuracy, and no identifying participant information was collected. Audio recordings were deleted after the final thematic analysis. Participants were assigned a pseudonym screen name in Zoom (e.g., RN1-1-1). Each focus group lasted approximately 60 minutes, and participants were asked to complete an anonymous demographic survey afterward. During the first round of focus groups, the initial GUI prototype was displayed, and end-user design recommendations were requested from the participants via open-ended questions asked by the facilitator ([Table 1]).

Table 1

Focus group questions

Question 1: What do you think about the GUI screen you are looking at? In general, what is your initial reaction?

Question 2: Are there things you would change on the GUI screen to make it more helpful, informative, usable?

Question 3: What would keep you from taking advantage of this technology?

Question 4: What would make you feel more comfortable using this technology?

Question 5: How would having a tool like this make you feel?

Question 6: Do you see this GUI supporting your future informed decision-making? Do you think people will use it?

Abbreviation: GUI, graphical user interface.




Data Analysis

Analysis proceeded using the automated transcriptions (data) generated from six Zoom audio recordings (two rounds, each with three groups [RN, provider, hybrid]) and handwritten notes. After each focus group session, the facilitator and notetakers debriefed to highlight commonly heard conceptual ideas and potential codes for codebook development. Transcripts from each round were coded, reviewed, and analyzed using an inductive content analysis approach.[20] After each focus group round was completed, each notetaker coded one of the three transcripts; each line of transcript was reviewed to identify repetition in concepts, themes, and specific recommendations for GUI technical changes. Next, the three notetakers met to review and adjudicate codes and to cross-check appropriate code use. Doubts or disagreements were discussed until agreement was reached (consensus coding).[21] Codes were then categorized, and themes germane to all focus group discussions were derived from these data. After the first round of focus groups, GUI design changes were made to the initial prototype and presented in the next round. Focus groups were repeated until thematic saturation and optimal GUI technical design changes were achieved.



Results

Twenty-three participants were recruited (11 RNs, 2 NPs, 1 PA, 9 MDs). A total of 62% of participants were aged 31 to 40 years, 76% were female, and most participants (62%) had ≤ 10 years of clinical experience ([Table 2]). Focus groups occurred over an approximately 2-week period, and [Table 3] shows the participant distribution over the study period. Thematic analysis yielded six themes: (1) analytics transparency; (2) graphical interpretability; (3) impact on practice; (4) value of trend synthesis of dynamic patient data; (5) decisional weight (weighing AI output during decision-making); and (6) display location (usability, concerns for patient/family GUI view). Nurses emphasized objective GUI information to support communication of changing patient condition with providers and the importance of optimal GUI location. Providers emphasized the need for interpretability of IDSS recommendations and concern about impairing trainee critical thinking. Both groups valued synthesized views of vital signs, interventions, and risk trends but were skeptical of placing decisional weight on AI output until proven trustworthy in testing and practice. Thematic saturation was achieved, and feedback informed iterative GUI design versions (initial version 1, followed by substantive changes in versions 2a and 2b).

Table 2

Participant characteristics (N = 23)[a]

Values are n (%).

Age (y)
 < 25: 2 (9)
 25–30: 1 (5)
 31–40: 13 (62)
 41–50: 3 (14)
 51–60: 2 (9)

Gender
 Female: 16 (76)
 Male: 5 (24)

Professional background
 Registered nurse: 11 (48)
 Physician: 9 (39)
 Nurse practitioner or physician assistant: 3 (13)

Years of experience
 1–5: 5 (24)
 6–10: 8 (38)
 11–15: 4 (19)
 16–20: 3 (14)
 > 20: 1 (5)

Notes: Focus group participant characteristics across all groups.

a Data were missing on two participants, so percentages were based on the valid sample size for each variable.


Table 3

Focus group setup[a]

Round 1
 Group 1 (nurse only): 2 RN
 Group 2 (provider only): 3 physicians
 Group 3 (hybrid): 2 RN, 3 physicians

Time between round 1 and round 2 was used to incorporate GUI design feedback; recommended changes from round 1 were applied so that the most current GUI version was presented in round 2.

Round 2
 Group 1 (nurse only): 4 RN
 Group 2 (provider only): 1 NP, 1 PA, 2 physicians
 Group 3 (hybrid): 3 RN, 1 NP, 1 physician

Abbreviations: GUI, graphical user interface; NP, nurse practitioner; PA, physician assistant; RN, registered nurse.

Notes: The table displays the distribution of participants for all focus groups, separated into two rounds. Each round was constructed based on participant availability.

a A total of 23 participants.


Theme 1: Analytics Transparency

There was a wide range of participant comments defining analytics transparency. Some participants wanted to ensure that the IDSS would capture the heterogeneity of various patient presentations. Validating the IDSS output against clinician assessment findings and cardiorespiratory monitor vital signs was also valued as a key evidential component. Here participants acknowledged that the IDSS output is only as valuable and accurate as the data that are put into it. Further emphasis focused on how the IDSS creates the risk prediction score. Comments favored the characterization of fluid responsiveness, arterial tone, and cardiac performance, as clinicians would then have some insight into why the IDSS was making a particular risk prediction (avoiding the “black box” phenomenon). The spectrum of insights included desires to see the highest level of evidence and transparency, derived from randomized clinical trial results, before any meaningful use would be enacted. [Table 4] includes themes, codes, direct participant quotes, and the GUI design changes applied.

Table 4

Focus group themes

Theme

Code

Illustrative quotes

Technical graphical user interface change

1. Analytics transparency

EVIDENCE

RISK_INDEX

BASELINE_RISK

RND1-MD2_FG3: “You know I think there's just a very broad scope of pathology that we see and…potential physiologic derangements…, I would just want to feel very confident that this algorithm could accommodate all of those things…”

RND1-FG2-MD1: “…AI [artificial intelligence] algorithms obviously are a function of the input data…but it's so lovely that we can quickly validate both the heart rate and blood pressure there. So that if the AI [artificial intelligence] seems a little bit off, we can use our own clinical skills to know whether it's accurate or not…”

RND2-FG2-MD1: “I like the fact that the fluid responsiveness, arterial tone, cardiac performance provides information that kind of allows you to at least see and judge what data is being used to…drive this. So, you can have a better idea, instead of…the feared black box…”

RND2-FG3-MD1: “Until I saw it…used in a randomized control trial…to see if it affected patient outcomes in any sort of meaningful way or would just add an additional cost without really adding additional benefit…I would base my feelings off the results of that study.”

RND2-FG3-MD1: “It might be helpful, with explanations…just like a verbal or a text explanation of…how the system is coming to that. Like a little bit more background of how it's coming to that decision…I would just…want a little bit more background information on why it's making certain recommendations.”

Add:

1. Scroll bar and zoom feature

2. 3-day view option

2. Graphical interpretability

LIKE

GUI_INTERP

CHANGES

RND1-RN2_FG3: “I don't love the part where it says action and it tells you the recommendations. I don't like anything that…takes the autonomy away from the practitioner, and the caregiver but…allows people to…just take a recommendation without considering how it might impact that particular patient.”

RND1-FG2-MD2: “…The trends themselves, it's a little bit difficult to follow…I don't know if a red line is what the goal should be or not. In our…discussion of patients, I understand what the…concepts are but they're also terms that we don't typically use…I wouldn't be on rounds and say tell me about what their arterial tone is like.”

RND2-FG2-MD1: “…Layout is nice and I like the fact that you don't have all this immediately jump out at you, and you are given the ability to…click on the information tab to dig deeper.”

RND1-FG1-RN2: “It's a real quick visual snapshot.”

RND1-FG1-RN2: “It's kind of busy.”

RND1-FG2-MD3: “…When we do telemedicine, Tele-ICU [intensive care unit] and we're not physically at the bedside to have that palpable assessment, maybe we don't even have…as close of a working relationship with the staff at the bedside and we are not comfortable with the communication…or you have somebody who is less experienced at the bedside. It's another data point which can be helpful because you're scrounging for data points at a distance, so I can see it being helpful there.”

RND1-FG2-MD2: “I would have thought that the trends that you have in the bottom right would be up on the top left so you get a sense of well how's the patient been doing over time.”

1. Flip left and right sides

2. Move trends on left side of screen and status on right side (we read left to right)

3. Remove checkmarks to make screen less busy

4. Enlarge the forecasting side

5. Make action arrows shorter vs. longer and wider vs. thinner

6. Change status ribbons to yes or no

7. Ribbon color changes

8. Label axes

9. Fluid balance and details of fluids administered added

10. Hover and discover feature for fluids added. Fluid inputs and outputs can also be selected, and boxes can be closed on selection

11. Last updated time added, as well as a “refresh” icon for updating predictions

12. Specific medications can be selected by clinicians

3. Impact on practice

NOVICE DECED

CONFIDENCE

COMMUNICATION

BARRIERS

BENEFITS

PRIORITIZE

CRITIC_THINK

HANDOFF

PRACTICE

ALARM_FATIGUE

ED

OTHER_RISK

PHYS_EXAM

RND1-FG1-RN1: “For younger nurses, I feel like that's…a major part of my job is to…avert a crisis, rather than waiting for the crisis to happen, and I feel like that would be really helpful for new nurses.”

RND2_FG1_RN4: “A new nurse might be hesitant to call and we noticed that in some of our rapid response calls that the new nurse was too afraid to call or afraid of the response that they would get. So, I think it would be a great backup.”

RND1-MD2_FG3: “More novice practitioners might just see the recommendations on the screen and…automatically follow them without necessarily…applying what the recommendation is to the clinical context and…gauging independently whether or not that's appropriate.”

RND2-FG3-RN3: “My concern is and I'm coming at this because I work with a lot of new nurses and developing that critical thinking…if this might not hamper the natural development of critical thinking skills…I personally…would like having this in my back pocket…What am I missing, this is what I've done. Okay I've gotten through the fluids. I've gotten through the vasopressors, what else might I consider. But for a newer nurse they might not even consider that because, well the computer will just tell me.”

RND2-FG2-MD1: “The AI [artificial intelligence] suggestions are only a bonus that it can either help me…feel confident about my impression or…maybe keep me aware if I'm…missing something.”

RND2-FG3-RN1: “I would use it to help communicate with the doctors, interventions that they may not have thought of that we could be doing.”

RND2_FG1_RN1: “So I have a very intimate picture of the patient's clinical status, but when I need to tell the physician and…relay that information, a lot of times they just want to see the highlights. Okay, what is the problem, and this would be a great way to sort of sum up everything that I'm trying to tell them in one visualization So yes, I think that would also be a very useful tool in communicating with the physicians or the practitioners that we're working with.”

RND1-FG2-MD1 “I think it could be…valuable in scenarios which we really have…a paucity of care, I worked at a…small hospital…and overnight, there was no intensivist in house. So, having something there would be better than nothing. But would I use this in the transplant ICU [intensive care unit] where we have 24/7 coverage, with very complicated patients.”

RND1-MD3_FG3: “Seeing the patient throughout the day you know you kind of have a feel for your patient, but the person who's coming on at night, who might be doing a lot of cross coverage…depending on the sign off…might just want to do a quick…I'm just going to look at the forecast real quick if anybody that…looks like the forecast is saying xyz and I didn't receive that in sign out. Maybe I would look at their chart a little bit differently, maybe I would go talk to the bedside nurse and make sure that…something that I didn't hear about, So, that's where I could see when I'm cross covering a lot of patients.”

RND1-RN2_FG3: “I think my main concern would be it…contributes to an over reliance on monitors and you know this kind of phenomenon of treating the monitor rather than treating the patient.”

RND2-FG3-RN2: “I think you'll get pushback from some people on the recommendations and I'm sure that there'll be the comment that the computers are going to try to take our jobs.”

RND1-MD1_FG3: “It took…a good five or so minutes just to explain it. I think is…a specific challenge too and then answering questions. So, if people are having issues just understanding what it's showing and what it means I think that's going to be a barrier to get people to even start to use it, especially on a regular basis.”

RND2_FG1_RN4: “As a nurse I would want to be clear on what my responsibilities are…I have a recommendation, do I need to get a hold of the care provider, how quickly to discuss those recommendations since those aren't independent activities for a nurse.”

RND1-FG2-MD2: “Depending on where this is most likely it's going to be I would assume that it's going to be a nurse that's going to see this first. A concern for me because they're at the bedside the most and unless there's someone who's like sitting at a desk watching this for everyone. I would say that they have a lot that they're keeping track of as well, so adding one additional thing it would need to be pretty darn effective at this point, for I think a nurse to say I'm going to add all of this on top of everything that I've got. But I have my biases so I'm just wondering how you fit it in with everything else that we're keeping track of.”

RND2_FG1_RN4: “How many other pieces of equipment are there that are giving me patient information that I would have to process would be a factor…I can think of patients that would have four to five other devices that are at the bedside that they are looking at pressure monitors and performance on…that's just a lot of information to be processing all at the same time.”

RND1-RN2_FG3: “Sometimes we will use extra monitoring, you know extra data points, extra labs, and care doesn't really change as a result…how much it costs is going to be a factor as to who ends up adopting something like this…”

There were no specific changes for this theme

4. Importance of trend synthesis using dynamic patient data

INTERVAL

INTEGRATION

RND2_FG1_RN1: “I do like seeing how the interventions sort of correlate with the vital signs on here. I think it's very helpful to be able to put those pieces together and be able to look at them, side by side, because a lot of times we don't…have that unique view of it.”

RND2-FG2-MD1: “Significantly shorten the amount of time that it takes me to go in and review vital signs, click which ones I want, decide which timeframe within [electronic health record] …I would be very excited to use this just to save time from what is a tedious means of finding and then switching to the MAR [medication administration record].”

RND2_FG1_RN3: “I think my favorite part of this full display is the status portion where it shows the trends in the vitals with the interventions and then what the forecast predicts.”

1. Next to Recommendations, a clock or timestamp to show “last updated”

5. Decisional weight (weighing artificial intelligence output during decision-making)

GUT_FEELING

WEIGHT_RISK

WEIGHT_REC_EXPLA

RND1-RN2_FG3: “I don't think it would change my decision making very much at least not the status part, maybe the forecast. But I'm still going to make the decisions that I'm going to make, especially if I feel…get the gut feeling that something's not right with my patient. No matter what the monitor says, I'm going to…investigate that.”

RND2-FG2-CRNP1: “Maybe it's a tool that can help you…validate…our own intuition. It's like we don't know what it is that's making us think about this person going down…the tubes to say it crassly but, it could be something to validate…your own feeling about what's going on.”

RND2-FG3-MD1: “The data that your score would only be as good as the data you're putting in. So, it also would depend on the accuracy of the data…As long as…the entire picture is being considered, I think this could be a great tool. But not a replacement for…clinical decision making overall.”

RND1-FG2-MD1: “I think I would probably use it as a consultant at a decision node. So, for example, if there was a patient in whom I wasn't sure what the right thing would be to do. I would probably consult this and, in that, would almost be nice to have.”

There were no specific changes for this theme

6. Display location (concerns for patient/family graphical user interface view)

LOCATION

PT_FAMILY_CONC

FAMILY

RND2-FG3-RN3: “I love the idea of having it in the rooms, because that's where, especially in code situations, that's what everybody is looking at. The monitor. To have it up there where we can all see it.”

RND2-FG2-MD1: “Some hesitancy to have it fully accessible at the bedside with the fact that there's family members…and you know they're already watching every single blip.”

RND2-FG2-PA1: “Families stare at everything…I walk in the room, and they tell me what they've been seeing on a continuous waveform monitor…another eight things for families to look at and wonder what the slope of this or that means…that could be a lot…for them to see at the bedside.”

RND2-FG3-RN2: “My only concern about it being in the room is…it's another area of stress for the families. It's making recommendations and you're not doing what the machine is recommending. To cause some mistrust with the health care providers, but I do like the idea of it being able to be in a separate window, so that you can keep it…minimized and pull it up…”

1. Blackout right side of screen in patient room to not overwhelm the family



Theme 1 Specific Graphical User Interface Design Changes

Participants requested a scroll bar feature and a 3-day view option to appreciate longer-term trends, including changes in vital signs, risk scores, and interventions applied.



Theme 2: Graphical Interpretability

This theme reflects participants' desire to shape the end-user “friendliness” of the technology's design. Comments showed that participants could imagine themselves actively using the GUI in their clinical environments. Many participants were displeased with the recommended “Action” section of the initial GUI prototype presented to them. They felt that their professional autonomy would be disrupted if they were asked to follow the IDSS “Actions” rather than their own clinical judgement. Other technical design considerations focused on the “Status” section of the GUI. Participants struggled to understand arrow directionality and its meaning with respect to the listed clinical indicators (fluid responsiveness, arterial tone, and cardiac performance). Participants also requested simplicity; the initial GUI prototype was noted to be visually busy. Overall, comments were positive, and the clinical participants mentioned unforeseen benefits such as: (1) using the GUI for telemedicine purposes and (2) appreciation for the communication of all pertinent data in one place. [Fig. 1] represents the initial GUI prototype (version 1) presented to round 1 participants.



Theme 2 Specific Graphical User Interface Design Changes

Theme 2 captured a substantial number of design changes requested by participants to improve GUI interpretability ([Figs. 2A] and [B]). For GUI prototype version 2 specifically, much of the focus was oriented toward the addition of the “Intervention” section. Participants requested a hover-and-discover feature, a “last updated” indicator for medications and fluids administered, and the ability for clinicians to customize which medications are visualized on the GUI screen. It was evident that our clinical participants were invested and that their interests aligned with the research team's desire to lessen patient harm by recognizing and intervening on patient instability sooner rather than later.

Fig. 2 (A and B) Collapsed and expanded graphical user interface (GUI) prototype (version 2). The figures illustrate major changes applied to the GUI prototype after the round 1 focus groups. Changes included flipping the right and left sides of the GUI screen so that vital sign trends appear first on the left. Further design changes were applied to make the GUI screen less busy (removal of check marks from the prototype). Overall design changes included color adjustments, arrow enlargement, and improved labeling of the x- and y-axes (vital sign and risk trending graphics). Participants also requested the ability to collapse recommendations and explanations (A) to limit overloading the patient and family with clinical information.


Theme 3: Impact on Practice

Both nurses and providers shared concerns about the influence of the GUI on novice clinicians. Concepts ranged from the GUI helping to hindering novice practice. For example, positive commentary included the benefit of the GUI helping newer clinicians recognize patient deterioration before a crisis event. This idea also aligned with gaining the confidence to call for help, as the novice would now have an objective measure to report. Conflicting comments, however, focused on how the GUI could hinder the development of critical thinking skills and how novice clinicians might rely excessively on the IDSS output without thinking for themselves. More advanced clinicians felt that the GUI could provide the evidence and confidence needed to back up their clinical intuition when patient deterioration is looming but vital sign changes are delayed in presentation. The GUI was also described as a communication tool nurses could use when summarizing and translating their patient impression to medical team members when escalating a concern, as well as during hand-off and shift change. Several barriers were noted. Participants were concerned about the following: (1) AI technologies taking over human-oriented tasks; (2) the length of time needed to explain how to use the GUI and the associated feasibility concerns in the clinical environment (explanations for use will have to take < 5 minutes); (3) prioritizing their workload (will the GUI create additional work?); and (4) excessive financial cost (will this technology actually provide a benefit to the patient and not just add to existing expenses?).



Theme 3 Specific Graphical User Interface Design Changes

There were no specific changes for this theme.



Theme 4: Value of Trend Synthesis of Dynamic Patient Data

The side-by-side integration of vital sign data and medical interventions in a real-time view was repeatedly cited as a highly valued key feature of the GUI. Clinicians commented on the time this design detail could potentially save. They imagined having this integrated information available while assessing a patient during a period of instability and noted how unique this view is. Currently, clinicians collate disparate data from many different locations in the medical record. Although data may be centralized in a single medical record, clinicians still toggle back and forth between different flowsheets to find the data needed to paint an all-inclusive view of their patient. This takes time and attention to detail, and during a patient emergency this level of specificity is not feasible. Overall, presenting patient status information (vital signs) alongside completed medical interventions (e.g., medication administration, fluid boluses) was perceived as a benefit for clinical practice, one that could potentially save time, improve workflow, and decrease workload in already demanding clinical environments.



Theme 4 Specific Graphical User Interface Design Changes

One change was requested for this theme: adding a clock or timestamp next to the “Recommendation” section to show when the IDSS-recommended intervention was last updated.



Theme 5: Decisional Weight (Weighing Artificial Intelligence Output during Decision-Making)

Participants were reluctant to view the IDSS predictions communicated through the GUI as an initial alerting system. They imagined their own clinical decision-making and recognition processes would occur first, as if they already knew patient instability was occurring. The IDSS predictions communicated through the GUI would be used as an adjunct to validate decisions driven by intuition or “gut feelings.” Interestingly, in a scenario where the clinician did not know what was driving patient instability or what the appropriate intervention should be, the IDSS predictions communicated through the GUI would instead be used as a first-pass “consultant” rather than a validation for decision-making. This theme overlaps with themes 2 and 3 (graphical interpretability and impact on practice), in which participants clearly stated concerns about ceding any portion of their autonomous clinical decision-making to an AI technology. Much work will need to focus on key stakeholder buy-in and clinical translation at the bedside for successful and meaningful use to occur.



Theme 5 Specific Graphical User Interface Design Changes

There were no specific changes for this theme.



Theme 6: Display Location (Usability, Concerns for Patient/Family Graphical User Interface View)

Very thoughtful comments supported the emergence of theme 6. The semistructured moderator guide did not prompt specific comments about patient family member considerations for the GUI display or location. Rather, these comments emerged organically, and participants devoted considerable time and attention to the topic. Participants reflected on how observant families are while supporting their loved one at the bedside. Concerns were voiced about adding yet another piece of technology to the repertoire of bedside health care technologies, especially one that provides medical intervention recommendations and an instability risk score. Clinicians were concerned about how the GUI could encourage family member distrust if the providers or nurses did not follow through with the IDSS recommendations. Although families are savvy, without lengthy nursing or medical training they will be limited in their ability to understand why a clinician may or may not follow through with an IDSS prediction or recommendation.



Theme 6 Specific Graphical User Interface Design Changes

One design change was requested for this theme: a blackout feature for the right side of the screen, where the “Recommendation” section is located, to avoid overwhelming families with complex medical information. The blackout feature would simply hide this side of the GUI screen but could be reopened by a clinician needing to review the current GUI information.



Discussion

Six themes emerged from multidisciplinary focus groups: (1) analytics transparency; (2) graphical interpretability; (3) impact on practice; (4) value of trend synthesis of dynamic patient data; (5) decisional weight (weighing AI output during decision-making); and (6) display location (usability, concerns for patient/family GUI view). These themes represented the concepts that clinicians focused on the most, and although they are distinct, they share some overlap.

The multidisciplinary overlapping areas were as follows: (1) Themes 2, 3, and 5 all captured the notion that clinician autonomy should be preserved when interpreting GUI graphics, performing clinical practice duties, and especially when making decisions about patient instability risk and applying appropriate interventions. (2) Participants shared common opinions about the perceived benefit of integrating vital signs with current interventions; this sentiment crossed over between themes 2 and 4, which makes sense because theme 4 was solely dedicated to dynamic patient data integration and theme 2 focused on graphical interpretability. (3) Clinical intuition also appeared in more than one theme; themes 3 and 5 reflected comments about the value of intuition from both a clinical practice and a decisional weight standpoint.

Findings from our study were encouraging and enabled us to compare our results with those of other investigators to identify commonalities and differences. Langkjaer et al sought to discover nurses' experiences with early warning systems embedded within their EHR.[22] Ultimately, nurses found these tools to be inflexible but useful in recognizing patient deterioration, findings similar to ours. When stratified, nurses greatly appreciated the shared language with physicians, which could promote enhanced communication. Risk scoring systems were thought to be helpful for novice nurses and less helpful for experienced nurses. Interestingly, the scoring systems in the Langkjaer et al study were viewed as improving detection of patient deterioration when integrated with the nurses' “gut feeling” (intuition) that a serious adverse event was about to happen. These similarities were further validated by a study by McParland et al, which specifically focused on “gut instinct” and the importance of being able to depart from differential diagnosis decision support system recommendations.[23] These findings underscore the premise and potential of AI in health care: to serve as a tool that can augment, not supplant, the skills and expertise of health care providers.[24] Participants in our study were likewise very interested in maintaining their clinical decision autonomy.

Other commonalities between our study and others included clinician concerns that overreliance on the IDSS and a decline in independent critical thinking could lead to clinicians missing other important patient-oriented information.[6] [22] [23] Additionally, scoring systems, early warning systems, and graphical displays should allow clinicians to tailor or customize what they are visualizing, whether by customizing individual patient profiles or the clinical information they wish to view.[1] [6] [22] [25] [26] [27]

Low levels of clinician trust in AI performance and a desire for algorithmic transparency have been identified as major barriers to the adoption and effective use of AI tools in health care,[1] [28] [29] [30] and these concerns were strongly voiced by our multidisciplinary participants as well. Our findings underscore recommendations by the National Academy of Medicine to incorporate instruction on how to appropriately assess and use AI tools into health care professional training programs and continuing education for current practitioners.[11] As health care knowledge and patient-generated data continue to grow exponentially, health care providers must be equipped to critically appraise AI tools and then integrate and leverage the insights they provide into management and treatment decisions.[24] [31]

There were also findings in the recent literature not observed in our study. McParland et al performed a focus group study to gain insight into physicians', advanced practice nurses', and the general public's perceptions of a differential diagnosis decision support system intended for primary care use.[23] Clinician comments not heard in our study were oriented toward litigation: participants voiced concerns about overriding the differential diagnosis decision support system and the risks of future litigation. Although the participants in our study did not mention litigation, they did comment on overriding the IDSS and damaging trusting relationships between themselves and their patients' family members.

These comments progressed into a full theme (theme 6) not appreciated in the current literature: IDSS clinical practice considerations for patient family members. Some family members spend entire days at the bedside, and it should be expected that they will pay attention to all of the graphics and alarms produced by health care technologies. We also encourage family member involvement in patient care, now that we know patient outcomes improve with family presence at the bedside.[32] [33] Our participants spent thoughtful time considering family members and the implications of their seeing clinician interaction with the GUI (i.e., following through with recommended interventions or not, and viewing a constant risk score). Participants were concerned not only about negatively impacting the dynamic medical/nursing team and patient/family relationship but also about generating unnecessary and compounding anxieties from a continuously presented instability risk score. These concerns were coupled with apprehensions about the feasibility of clinicians learning how to use the GUI and its impact on existing workloads. This further highlights the fact that the value of the information offered by an IDSS can be nullified by the disruption caused by that system. Researchers should consider these findings when planning for IDSS implementation. In the future, patients and family members should be invited to participate in focus group studies so that researchers can qualitatively assess how they might interact with these new decisional support systems. Although the IDSS is intended for clinical use, this study importantly shows that patient and family member input needs to be considered before the implementation phase occurs.

Strengths and Limitations

Strengths of this study included the recruitment of multiple professional disciplines to capture commonalities and differences in viewpoints according to care roles. Our study design expands on the current literature in that we recruited both nurses and physicians in a single study and stratified the focus groups to solicit disciplinary input both singly and combined. We recruited all disciplines who would interact with the GUI: nurses, nurse practitioners, physician assistants, and physicians. Critical care providers work together in dynamic teams in which each member has something unique to offer, yet all share commonalities relative to care delivery. Their multidisciplinary input was essential to this work. The critical care setting is also unique in health care delivery, and our approach could serve as a template for identifying what is valued by health care professionals operating in a highly time-critical, high-stakes, and data-rich environment. In addition, computer scientist partnerships complemented this clinician-driven research.

Weaknesses include viewpoints limited to a single center. Generalizability is limited in this early design phase, but as the GUI is further evaluated (1:1 usability sessions and field testing), feedback from clinicians at additional clinical sites, practice areas, and health systems will be targeted. Nevertheless, the information acquired in this early phase was very helpful in making numerous design changes to our prototype GUI even before we begin single-center clinical field testing. By involving end users in design at this preliminary stage, efficacy testing may go more smoothly through improved user-friendliness and the advance elimination of some design barriers.



Conclusion

Although many IDSS technologies fail, such failure is rarely due to technological flaws. Instead, IDSS technologies mainly fail because of a lack of consideration of human interaction elements (trust, usability, and organizational/clinical workflow) in IDSS design and implementation processes. We found that engaging multidisciplinary clinicians early in iterative IDSS development was helpful for identifying the diverse insights needed to support human-centered design for all eventual users, especially factors associated with AI acceptance. These findings are critically important for helping researchers design tools that will be accepted by the multidisciplinary clinical workforce to optimally leverage potential AI benefits.



Clinical Relevance Statement

Clinician opinion gathered during early development highlighted that health care IDSS technologies need to be transparent about how they work, easy to read and interpret, designed to facilitate rather than disrupt workflow, and built so that decisional support components supplement rather than replace human decision-making. Every clinical environment is nuanced; leveraging frontline clinical input should be a top priority for leadership teams who drive institutional change. Using robust qualitative focus group methods with all disciplinary end users made it possible to discover these clinically relevant details, which will be applied to future GUI display modifications and implementation.



Multiple-Choice Questions

  1. Some themes in this study overlapped; of note were multidisciplinary clinician concerns about loss of professional autonomy if asked to implement and use a bedside IDSS. What specific request was made regarding changes to the GUI to remedy this concern?

    • Change the “Action” section

    • Change the “Status” section

    • Change the “Forecast” section

    • Change the “Vitals” section

    Correct Answer: The correct answer is option a. Many participants were displeased with the recommended “Action” section of the initial GUI prototype that was presented to them. They felt that their professional autonomy would be disrupted if they were asked to follow the IDSS “Actions” and not their own clinical judgement.

  2. What are two concrete concerns mentioned by the clinician participants related to family presence at the bedside where an IDSS would be in place?

    • Family members might press buttons on the GUI and family member potential to inadvertently turn off the GUI screen

    • Clinicians following through with recommended interventions or not, and patients/families viewing a constant risk score

    • Alarm fatigue and decreased anxiety

    • Family members might video record the GUI screen and consider litigation

    Correct Answer: The correct answer is option b. Our participants spent thoughtful time considering family members and the implications of seeing clinician interaction with the GUI (i.e., following through with recommended interventions or not and visualizing a constant risk score). Participants were concerned not only about negatively impacting the dynamic medical/nursing team and patient/family relationship, but they were also concerned about generating unnecessary and compounding anxieties that could be derived from a continuously presented instability risk score.



Conflict of Interest

None declared.

Acknowledgments

We would like to thank the clinician participants for their time and commitment to improving patient safety and outcomes. Their time away from patient care is so very appreciated. Without their contributions, useful bedside technologies would not be possible to design and deploy in the clinical environment.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the Institutional Review Board.


  • References

  • 1 Lim HC, Austin JA, van der Vegt AH. et al. Toward a learning health care system: a systematic review and evidence-based conceptual framework for implementation of clinical analytics in a digital hospital. Appl Clin Inform 2022; 13 (02) 339-354
  • 2 Helman SM, Herrup EA, Christopher AB, Al-Zaiti SS. The role of machine learning applications in diagnosing and assessing critical and non-critical CHD: a scoping review. Cardiol Young 2021; 31 (11) 1770-1780
  • 3 Sullivan C, Staib A, McNeil K, Rosengren D, Johnson I. Queensland digital health clinical charter: a clinical consensus statement on priorities for digital health in hospitals. Aust Health Rev 2020; 44 (05) 661-665
  • 4 Patel VL, Shortliffe EH, Stefanelli M. et al. The coming of age of artificial intelligence in medicine. Artif Intell Med 2009; 46 (01) 5-17
  • 5 Shortliffe EH, Sepúlveda MJ. Clinical decision support in the era of artificial intelligence. JAMA 2018; 320 (21) 2199-2200
  • 6 Calzoni L, Clermont G, Cooper GF, Visweswaran S, Hochheiser H. Graphical presentations of clinical data in a learning electronic medical record. Appl Clin Inform 2020; 11 (04) 680-691
  • 7 Cannesson M, Hofer I, Rinehart J. et al. Machine learning of physiological waveforms and electronic health record data to predict, diagnose and treat haemodynamic instability in surgical patients: protocol for a retrospective study. BMJ Open 2019; 9 (12) e031988
  • 8 Helman S, Terry MA, Pellathy T. et al. Engaging clinicians early during the development of a graphical user display of an intelligent alerting system at the bedside. Int J Med Inform 2022; 159: 104643
  • 9 Porter A, Dale J, Foster T, Logan P, Wells B, Snooks H. Implementation and use of computerised clinical decision support (CCDS) in emergency pre-hospital care: a qualitative study of paramedic views and experience using strong structuration theory. Implement Sci 2018; 13 (01) 91
  • 10 Fareed N, Swoboda CM, Chen S, Potter E, Wu DTY, Sieck CJ. US COVID-19 state government public dashboards: an expert review. Appl Clin Inform 2021; 12 (02) 208-221
  • 11 Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. NAM Special Publication. Washington, DC: National Academy of Medicine; 2019: 154
  • 12 Bersani K, Fuller TE, Garabedian P. et al. Use, perceived usability, and barriers to implementation of a patient safety dashboard integrated within a vendor EHR. Appl Clin Inform 2020; 11 (01) 34-45
  • 13 Merkel MJ, Edwards R, Ness J. et al. Statewide real-time tracking of beds and ventilators during coronavirus disease 2019 and beyond. Crit Care Explor 2020; 2 (06) e0142
  • 14 Chen L, Ogundele O, Clermont G, Hravnak M, Pinsky MR, Dubrawski AW. Dynamic and personalized risk forecast in step-down units. implications for monitoring paradigms. Ann Am Thorac Soc 2017; 14 (03) 384-391
  • 15 Yoon JH, Mu L, Chen L. et al. Predicting tachycardia as a surrogate for instability in the intensive care unit. J Clin Monit Comput 2019; 33 (06) 973-985
  • 16 Yoon JH, Jeanselme V, Dubrawski A, Hravnak M, Pinsky MR, Clermont G. Prediction of hypotension events with physiologic vital sign signatures in the intensive care unit. Crit Care 2020; 24 (01) 661
  • 17 Barnett A, Winning M, Canaris S, Cleary M, Staib A, Sullivan C. Digital transformation of hospital quality and safety: real-time data for real-time action. Aust Health Rev 2019; 43 (06) 656-661
  • 18 Limousin P, Azzabi R, Berge L, Dubois H, Truptil S, Gall LL. How to build dashboards for collecting and sharing relevant informations to the strategic level of crisis management: an industrial use case. 2019 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM). 2019:1–8
  • 19 Dowding D, Randell R, Gardner P. et al. Dashboards for improving patient care: review of the literature. Int J Med Inform 2015; 84 (02) 87-100
  • 20 Kyngäs H. Inductive Content Analysis. The Application of Content Analysis in Nursing Science Research. Springer; 2020: 13-21
  • 21 Kurtzman G, Dine J, Epstein A. et al. Internal medicine resident engagement with a laboratory utilization dashboard: mixed methods study. J Hosp Med 2017; 12 (09) 743-746
  • 22 Langkjaer CS, Bove DG, Nielsen PB, Iversen KK, Bestle MH, Bunkenborg G. Nurses' experiences and perceptions of two early warning score systems to identify patient deterioration-a focus group study. Nurs Open 2021; 8 (04) 1788-1796
  • 23 McParland CR, Cooper MA, Johnston B. Differential diagnosis decision support systems in primary and out-of-hours care: a qualitative analysis of the needs of key stakeholders in Scotland. J Prim Care Community Health 2019; 10: 2150132719829315
  • 24 Lomis KP, Jeffries A, Palatta M. , et al. Artificial Intelligence for Health Professions Educators. NAM Perspectives. Discussion Paper, National Academy of Medicine, Washington, DC; 2021
  • 25 Fletcher GS, Aaronson BA, White AA, Julka R. Effect of a real-time electronic dashboard on a rapid response system. J Med Syst 2017; 42 (01) 5
  • 26 Schall Jr MC, Cullen L, Pennathur P, Chen H, Burrell K, Matthews G. Usability evaluation and implementation of a health information technology dashboard of evidence-based quality indicators. Comput Inform Nurs 2017; 35 (06) 281-288
  • 27 Franklin A, Gantela S, Shifarraw S. et al. Dashboard visualizations: supporting real-time throughput decision-making. J Biomed Inform 2017; 71: 211-221
  • 28 Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020; 323 (06) 509-510
  • 29 Mlaver E, Schnipper JL, Boxer RB. et al. User-centered collaborative design and development of an inpatient safety dashboard. Jt Comm J Qual Patient Saf 2017; 43 (12) 676-685
  • 30 Paulson SS, Dummett BA, Green J, Scruth E, Reyes V, Escobar GJ. What do we do after the pilot is done? Implementation of a hospital early warning system at scale. Jt Comm J Qual Patient Saf 2020; 46 (04) 207-216
  • 31 Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA J Ethics 2019; 21 (02) E146-E152
  • 32 Strathdee SA, Hellyar M, Montesa C, Davidson JE. The power of family engagement in rounds: an exemplar with global outcomes. Crit Care Nurse 2019; 39 (05) 14-20
  • 33 Goldfarb MJ, Bibas L, Bartlett V, Jones H, Khan N. Outcomes of patient-and family-centered care interventions in the ICU: a systematic review and meta-analysis. Crit Care Med 2017; 45 (10) 1751-1761

Address for correspondence

Stephanie Helman, PhD(c), RN, CCRN-K, CCNS
Department of Acute and Tertiary Care Nursing, University of Pittsburgh
3500 Victoria Street, Pittsburgh, PA 15213
United States   

Publication History

Received: 15 March 2023

Accepted: 26 July 2023

Article published online:
04 October 2023

© 2023. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany


Fig. 1 Initial graphical user interface (GUI) prototype (version 1) presented to round 1 participants. The "Forecast" risk score displayed on the right side of the figure represents an individual's current relative risk for instability, and the dotted line shows the projected risk trajectory over the next half hour. Under the "Status" heading on the left side of the screen, the colored ribbons are intended to give clinicians awareness of physiological states that may be contributing to instability: green reflects no instability risk, yellow indicates mild instability risk (patient evaluation warranted but not urgent), and red signifies risk for serious instability requiring urgent intervention and perhaps escalation of care. Finally, the bottom area of the screen recommends "Actions" based on the patient's current and projected state and is revised at 5-minute intervals.
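To make the display logic described in this caption concrete, the minimal sketch below shows one way a forecast risk score could be mapped to the green/yellow/red status ribbons and to the "Actions" text refreshed every 5 minutes. The type names, thresholds (0.3 and 0.7), action wording, and the fetchLatestForecast stub are hypothetical assumptions for illustration only and do not represent the study's actual implementation.

```typescript
// Hypothetical sketch of the Fig. 1 display logic; thresholds, names, and
// recommendation text are illustrative assumptions, not the study's code.

type Status = "green" | "yellow" | "red";

interface InstabilityForecast {
  currentRisk: number;     // relative instability risk, assumed scaled 0-1
  projectedRisk: number[]; // assumed trajectory sampled over the next 30 minutes
}

// Map a risk score onto the three status ribbons described in the caption.
function riskToStatus(risk: number): Status {
  if (risk < 0.3) return "green";  // no instability risk
  if (risk < 0.7) return "yellow"; // mild risk: evaluation warranted, not urgent
  return "red";                    // serious risk: urgent intervention, possible escalation
}

// "Actions" text shown at the bottom of the screen for each status band.
const ACTION_TEXT: Record<Status, string> = {
  green: "Continue routine monitoring.",
  yellow: "Evaluate patient; review contributing vital sign trends.",
  red: "Intervene urgently; consider escalation of care.",
};

// Stub standing in for the AI model output (hypothetical data source).
function fetchLatestForecast(): InstabilityForecast {
  return { currentRisk: 0.45, projectedRisk: [0.45, 0.5, 0.55, 0.6, 0.62, 0.65] };
}

// The caption states that the "Actions" area is revised at 5-minute intervals.
const REFRESH_INTERVAL_MS = 5 * 60 * 1000;
setInterval(() => {
  const forecast = fetchLatestForecast();
  const status = riskToStatus(forecast.currentRisk);
  console.log(`Status: ${status} | Action: ${ACTION_TEXT[status]}`);
}, REFRESH_INTERVAL_MS);
```

Keeping the mapping in a single, readable function echoes participants' call for analytics transparency: the same thresholds that drive the ribbon colors would drive the recommended actions.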
Fig. 2 (A and B) Collapsed and expanded graphical user interface (GUI) prototype (version 2). The figures illustrate the major changes applied to the GUI prototype after the round 1 focus groups. Changes included swapping the right and left sides of the screen so that vital sign trends appear first, on the left, and removing the check marks to make the screen less busy. Overall design changes included color adjustments, arrow enlargement, and improved labeling of the x- and y-axes on the vital sign and risk trending graphics. Participants also requested the ability to collapse the recommendations and explanations (A) to avoid overloading the patient and family with clinical information.
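The collapse/expand behavior requested in round 1 can likewise be sketched as a small piece of view state. The GuiPanelState shape, its defaults, and the togglePanel helper below are assumptions for illustration only, not the prototype's actual code.

```typescript
// Hypothetical sketch of the version 2 collapse/expand behavior (Fig. 2);
// the state shape and defaults are illustrative assumptions.

interface GuiPanelState {
  vitalSignTrends: boolean; // version 2 places trends first, on the left
  riskTrend: boolean;
  recommendations: boolean; // collapsible so families are not overloaded
  explanations: boolean;    // collapsible analytics rationale
}

// Default bedside view with family present: recommendation and explanation
// panels collapsed (Fig. 2A).
const collapsedView: GuiPanelState = {
  vitalSignTrends: true,
  riskTrend: true,
  recommendations: false,
  explanations: false,
};

// Toggle a single panel, returning a new state (Fig. 2B shows the expanded view).
function togglePanel(state: GuiPanelState, panel: keyof GuiPanelState): GuiPanelState {
  const next = { ...state };
  next[panel] = !next[panel];
  return next;
}

// Example: a clinician expands the recommendations during rounds.
const expandedForRounds = togglePanel(collapsedView, "recommendations");
console.log(expandedForRounds);
```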