DOI: 10.1055/a-2088-2893
Data Science Implementation Trends in Nursing Practice: A Review of the 2021 Literature
Abstract
Objectives The goal of this work was to provide a review of the implementation of data science-driven applications focused on structural or outcome-related nurse-sensitive indicators in the literature in 2021. By conducting this review, we aim to inform readers of trends in the nursing indicators being addressed, the patient populations and settings of focus, and lessons and challenges identified during the implementation of these tools.
Methods We conducted a rigorous descriptive review of the literature to identify relevant research published in 2021. We extracted data on model development, implementation-related strategies and measures, lessons learned, and challenges and stakeholder involvement. We also assessed whether reports of data science application implementations currently follow the guidelines of the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by AI (DECIDE-AI) framework.
Results Of 4,943 articles found in PubMed (NLM) and CINAHL (EBSCOhost), 11 were included in the final review and data extraction. Systems leveraging data science were developed for adult patient populations and were primarily deployed in hospital settings. The clinical domains targeted included mortality/deterioration, utilization/resource allocation, and hospital-acquired infections/COVID-19. The composition of development teams and types of stakeholders involved varied. Research teams more frequently reported on implementation methods than implementation results. Most studies provided lessons learned that could help inform future implementations of data science systems in health care.
Conclusion In 2021, very few studies reported on the implementation of data science-driven applications focused on structural- or outcome-related nurse-sensitive indicators. This gap in the sharing of implementation strategies must be addressed for these systems to be successfully adopted in health care settings.
Keywords
data science - machine learning - nursing - prediction - implementation - deployment - pilot

Background and Significance
Data science has significant potential to influence health care delivery and patient outcomes. Advances in data availability, computing power, and data science methods continue to occur. In addition to more traditionally used data, such as electronic health record (EHR) and health registry data, nontraditional data and sources are now being incorporated into data science datasets,[1] such as social determinants of health,[2] wearable technology, and the Internet of Things,[3] creating even more potential for innovation. Every sector of health care now has the potential to use data science for scientific discovery and clinical practice improvement with hopes that data science can help improve some of the biggest health problems, such as health disparities,[4] opioid use disorder,[5] and poor birth outcomes,[6] to name just a few.
Despite these advances, the benefits of data science in health care have not been as pronounced as many anticipated. Barriers to realizing these benefits can be attributed to inadequate reporting, lack of stakeholder involvement, and the need to identify best practices for how new data science applications, such as clinical decision support (CDS),[7] are implemented.[8] In a systematic review, Yang et al[9] found that of the hundreds of clinical prediction models developed using EHR data in the past decade, very few accompanying publications provided sufficient reporting detail to be reproducible or externally validated. A review by Schwartz et al[10] revealed that less than one-third of CDS systems reported involvement from clinical experts. Discussion and recommendations for successful implementation of data science applications are gaining traction,[11] [12] but established best practices are still evolving. Lee et al[7] conducted a systematic review of predictive models embedded in EHR systems and described common implementation challenges such as alert fatigue and the need for institutional investment in adequate training. To our knowledge, a review that more broadly examines the implementation of data science applications in the recent literature has not been conducted.
Just as data science applications can be found in almost all areas of health care, data science is also frequently used and reported in the nursing profession. Nurses are ubiquitous in health care across all specialties and clinical areas, from inpatient to community-based, bedside to provider, and pediatrics to geriatrics. Given that nurses have such varied roles, the influence of data science on nursing can be widespread, with implications for how nurses make decisions, collaborate with other professions, and provide care to their patients. Therefore, publications at the intersection of data science and nursing could represent a breadth and depth of knowledge related to implementing new data science applications within health care delivery.[13]
The idea of conducting a “nursing data science year in review” was conceived by the Center for Nursing Informatics' Data Science Workgroup,[14] through which we originally sought to help readers remain abreast of the latest research in which data science was used to address selected patient and health care system outcomes. In our earlier reviews, we described the data science models in projects that focused on particular clinical problems such as patient falls, nosocomial infections, and pressure injuries.[15] [16] We noted that the variables included in most statistical models were similar (i.e., demographics, diagnoses, laboratory values), and the major data science approaches (i.e., supervised machine learning) were also a commonality across the spectrum of clinical problems we considered. What remained unclear to us at the conclusion of those reviews was the extent to which the data science models that were developed had been used in actual episodes of care or incorporated into health information systems and CDS. Therefore, to address the important issue of using data science to impact practice, in this third year of our literature review we narrowed our focus to a descriptive review exploring how data science was being employed to guide actual practice and to enhance clinical applications and CDS in health care systems. Rather than simply identifying nurse-relevant projects and models, we sought to explore applications that had been implemented in the year 2021. By focusing on success stories and challenges of data science use in clinical implementations, we anticipated that readers could learn which strategies had been effective, and that this would ultimately increase the use of data science, foster better acceptance of these tools among clinicians, and improve targeted patient outcomes.
Objectives
The goal of this work was to provide a review of the implementation of data science applications focused on structural or outcome-related nurse-sensitive indicators in the literature in 2021. While data science techniques can encompass tasks spanning prediction, inference, clustering, and text generation, among others, we focus our review on prediction tasks because our clinical experience suggests that prediction tasks are most likely to be desired during nursing care. By conducting this review, we aim to inform readers of trends in the nursing indicators being addressed, the patient populations and settings of focus, and lessons and challenges identified during the implementation of these tools.
Methods
We conducted a rigorous descriptive literature review to find papers on prediction-focused data science applications that addressed topics relevant to care delivery by nurses, were deployed or implemented and evaluated in a real-world setting, were prospectively developed, and are of interest to nurses and other interdisciplinary health care leaders. Descriptive reviews are characterized by a systematic approach to identifying included literature; a focus on specific areas of interest (in our case, literature published in 2021 reporting use of models created with data science in implemented projects); extraction of variables of interest to allow identification of patterns; an emphasis on representative works; and, often, some degree of quantification of relevant study characteristics.[17] In our review of implemented projects, we elected to emphasize a representative, rather than exhaustive, approach to the inclusion of relevant literature.
Search terms were devised by the study team, which included a medical research librarian, and focused on four groups of terms: prediction, data science, implementation, and nursing-sensitive indicators (see full list in [Supplementary Appendix 1], available in the online version). The search was conducted in PubMed (NLM) and CINAHL (EBSCOhost) in February 2022 and then again in April 2022 to capture any missed publications due to delays in indexing. The search used Boolean logic and was adapted to the formatting and subject headings of each database. Filters included English language, humans, and a publication date of 2021. The publication date limit included all studies with an online first date of 2021 even if the print publication date was 2022. The full database search strategies are in [Supplementary Appendix 1] (available in the online version).
We developed inclusion and exclusion criteria via group consensus with the intention of providing a representative sample of data science publications rather than an exhaustive review of all publications. We included publications that were either primary studies or systematic reviews/meta-analyses. Studies were required to use prediction-focused data science methods, which we defined as supervised machine learning techniques (e.g., neural networks, tree-based methods, ensemble methods); the list of supervised learning methods on scikit-learn.org was used as a reference.[18] To be included, the data science tool also had to be incorporated into a real-world clinical deployment, such as a pilot or a full-scale implementation. Nursing-relevant outcome terms were based on the nursing-sensitive indicators mapped out in Heslop et al's[19] concept analysis, with an emphasis on select structural (i.e., work schedule, nurse staffing ratios) and outcome-related (i.e., pressure injuries, falls, health care-acquired infections, length of stay, mortality) nurse-sensitive indicators that have the potential to be used as either predictors or targets of data science applications.[19] We excluded retrospective studies and studies that used only linear models (i.e., regression), because we wanted to focus on more “black box” modeling methods, which are expected to present unique implementation challenges related to interpretability by end users. Studies that only used basic statistical tests (e.g., t-tests), evaluated psychometric properties, examined associations between a variable and an outcome, or were written as opinion pieces were also excluded.
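As a concrete illustration of the model family these inclusion criteria target, the sketch below trains a decision stump, the simplest member of the tree-based supervised learning family, on synthetic data. The feature (respiratory rate), threshold, and labels are entirely illustrative and do not come from any reviewed study.

```python
# Minimal sketch of a "prediction-focused" supervised model of the kind the
# review includes: a one-split decision stump (simplest tree-based method).
# All data are synthetic; the deterioration label is hypothetical.

def fit_stump(values, labels):
    """Find the threshold on a single feature that best separates two classes,
    predicting the positive class when value >= threshold."""
    best_threshold, best_correct = None, -1
    for t in sorted(set(values)):
        correct = sum((v >= t) == bool(y) for v, y in zip(values, labels))
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

def predict(threshold, value):
    """Predict 1 (e.g., 'at risk') when the feature meets the threshold."""
    return 1 if value >= threshold else 0

# Synthetic respiratory rates paired with a hypothetical deterioration label.
rates = [14, 16, 18, 20, 24, 26, 28, 30]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

threshold = fit_stump(rates, labels)
print(threshold)               # learned cut point: 24
print(predict(threshold, 27))  # elevated rate -> predicted deterioration: 1
```

In practice the included studies used far richer models (ensembles, neural networks) and many features, but the stump shows the core supervised pattern: fit a decision rule to labeled examples, then apply it prospectively to new patients.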
Abstract and full-text screening was performed using Covidence systematic review software.[20] A team of 20 researchers reviewed abstracts and full texts independently, such that each abstract and full text was reviewed by two researchers to limit bias. Disagreements were reviewed and resolved by group consensus during study team meetings, or by the first or senior author, until consensus was reached on all inclusions and exclusions. Two independent reviewers performed data extraction for each included study; if extracted data did not match, the reviewers discussed their findings until consensus was reached and then submitted the final data for the synthesis. We extracted information related to each study's model development strategies, implementation practices, stakeholder involvement, and associated process and outcome measures. Key lessons learned as reported by the authors (e.g., human factors, error handling, modifications of the artificial intelligence (AI) system during implementation) were also summarized. See [Supplementary Appendix 2] (available in the online version) for a full summary of the data abstracted.
We also used the themes of the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by AI (DECIDE-AI) framework to guide data extraction ([Supplementary Appendix 3], available in the online version). This framework aims to provide an actionable checklist of minimal reporting items for early-stage clinical evaluations of AI systems.[21] Because this framework was not published until April 2022, research teams publishing in 2021 would not have been aware of it. However, we were curious which, if any, elements of the checklist were already being reported in the literature and where gaps in reporting might exist relative to the framework recommendations. We focused our assessment and extraction on the following implementation checklist items of the DECIDE-AI framework: Implementation Methods (e.g., how the AI system provided information to users, patient involvement) and Implementation Results (e.g., usability, patient outcomes). We extracted information for each checklist item. A complete summary of the data extracted is provided in [Supplementary Appendix 2] (available in the online version).
Results
Search Results and Screening
Based on our search strategy, we identified 4,943 articles with 294 duplicates that were removed. Title and abstract screening based on established eligibility criteria was performed on 4,649 abstracts, resulting in 321 articles that went through full-text review. A total of 11 articles were included for data extraction and review. A flowchart of the search and selection process, along with reasons for removal during full-text review, is presented in [Fig. 1]. A summary of the 11 studies included in data extraction and analysis is provided in [Table 1].


Table 1 Summary of the 11 studies included in data extraction and analysis

| Author | Location | Clinical outcomes | Setting |
|---|---|---|---|
| Altieri Dunn et al[22] | The United States | Mortality | Single health care system |
| Bertsimas et al[31] | The United States | (1) Mortality; (2) infection risk (both for COVID-19) | Multiple settings |
| Fenn et al[23] | The United States | Admission from ED | Single health care system |
| Han et al[24] | The United States | Significant weight loss | Radiotherapy clinic |
| Jauk et al[25] | Austria | Inpatient delirium occurrences | Single hospital |
| Møller et al[28] | Denmark | HA-UTI | Single inpatient department |
| Murphree et al[29] | The United States | Need for palliative care consult | Single hospital |
| Ng and Tan[26] | Singapore | Readmissions | All national public hospitals |
| Strömblad et al[27] | The United States | Surgical case duration | Two surgical services |
| Wu et al[32] | Taiwan | Acute exacerbations of chronic obstructive pulmonary disease (COPD) | Single hospital |
| Wu et al[30] | Singapore | Readmission | Single hospital |

Abbreviations: ED, emergency department; HA-UTI, healthcare-associated urinary tract infection.
Model Development
The machine learning models described incorporated a variety of data inputs from patients in multiple settings, leveraged several distinct algorithm types, and varied in the composition of stakeholder involvement. Development of the predictive models drew on data from EHRs and/or administrative databases[22] [23] [24] [25] [26] [27] [28] [29] [30] [31] as well as prospectively collected data such as physiological data from wearables[32] and imaging data.[24] Vital signs, laboratory values, diagnoses, and clinical notes were common sources of input data from the EHR. Other types of input data included activities of daily living collected from nursing flowsheets[30] and nursing assessments.[25] Most models leveraged data from hospitalized adult patients; however, several models included data from outpatient settings[23] [24] and even from the community.[31] Models comprised a variety of contemporary machine learning algorithms including, but not limited to, decision tree-based methods, neural networks, and ensemble methods. While five studies did not report the expertise of personnel involved in model development, other papers reported clinicians (e.g., nurses, physicians, occupational therapists, case managers),[23] [24] [27] [32] data scientists,[26] statisticians,[24] informaticians,[25] policymakers,[26] and unspecified hospital staff[26] [32] as being involved in model development in some capacity. The nursing-relevant outcomes targeted by the data science systems fell within three clinical domains: mortality/deterioration,[22] [24] [25] [32] health care utilization/resource allocation,[23] [26] [27] [29] [30] and hospital-acquired infections/COVID-19.[28] [31]
Implementation Strategies and Measures
Most studies used a cohort design (n = 10) for the implementation study, while one used a randomized controlled trial design.[27] All studies implemented systems designed for adult patient populations, and most were conducted in inpatient hospital settings, with the exceptions of a community setting,[31] a clinic,[24] and surgical areas.[27]
Implementation process and outcome measures were reported in about half of the studies (n = 6). Two studies reported on the influence of the implementation on patient care.[27] [30] Wu et al[30] reported a 1.6% reduction in readmission rates, which they translated into 3,200 inpatient bed-days saved annually. Strömblad et al[27] reported a mean reduction in wait time for surgical patients of 33 minutes. Two other studies reported on the incorporation of risk scores into clinical decision-making. Murphree et al[29] reported that 43% of patients identified as high risk using the model were accepted to the palliative care service. Han et al[24] reported that physicians changed their prediction in 4 of 37 cases after being presented with the model prediction, and compared the clinical decision support system (CDSS) with physicians on accuracy, specificity, positive predictive value, negative predictive value, and sensitivity (the CDSS outperformed physicians on all but the last measure). Altieri Dunn et al[22] reported on monthly compliance rates for their system, the amount of missing data, and the time it takes nurse users to enter data features into the system to generate a prediction. Jauk et al[25] examined user acceptance of their system via both qualitative and quantitative methods. Despite an overall positive rating for usefulness and ease of use, system use during the pilot was low: only 28% of users reported using the application.
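For readers unfamiliar with how a rate reduction translates into bed-days, the arithmetic below sketches one such conversion. Wu et al's underlying admission volume and average length of stay are not reported in this review, so the figures here are illustrative assumptions only, chosen so the result matches the order of magnitude they reported.

```python
# Illustrative arithmetic only: the admission volume and length of stay
# below are assumptions, not figures from Wu et al. The sketch shows how an
# absolute readmission-rate reduction converts into inpatient bed-days.

annual_admissions = 40_000   # ASSUMED yearly admission volume
rate_reduction = 0.016       # reported 1.6% absolute reduction in readmissions
avg_length_of_stay = 5       # ASSUMED mean bed-days per averted readmission

readmissions_averted = annual_admissions * rate_reduction   # 640 readmissions
bed_days_saved = readmissions_averted * avg_length_of_stay  # 3,200 bed-days
print(bed_days_saved)  # 3200.0
```

The same template (volume × absolute rate change × resource use per event) applies to other utilization outcomes, such as averted surgical delays or avoided consults.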
Stakeholder Involvement in Implementation
Project team composition varied among studies. Of those that reported team membership, roles included physicians,[23] [24] [27] nurses,[26] [27] informatics/information technology (IT) staff,[25] [26] [29] data scientists, and policymakers.[26] Some studies used more general terms to describe team membership, such as hospital stakeholders, hospital staff, or subject matter experts, so the roles of some team members could not be specifically identified.[23] [26] [30] Other studies did not explicitly report project team membership at all.[22] [28] [31] [32] Because of these general terms and omissions, it is difficult to ascertain whether the target users of the systems were included in the implementation teams. For example, only two studies[26] [27] provided evidence that nurses were part of the implementation team, either through explicit reporting of team membership or as inferred from manuscript authorship. However, nurses were reported as being among the target users of the associated decision support systems in four studies.[22] [23] [26] [30]
Reports of Implementation-Related Lessons Learned and Challenges
Our team also extracted data concerning lessons learned specific to model implementation and found that the main themes involved tool accessibility, stakeholder engagement, and transparent design elements. Altieri Dunn et al[22] reported that a benefit of a manual data entry format for generating their tool's risk score is that it is accessible to all external referring health institutions with an internet connection. Fenn et al[23] reported that operational users were consulted early in the dashboard design process and that the team used a model fact sheet to support users in interpreting the system output; however, they did not provide data on how these strategies influenced the success of the implementation. Murphree et al[29] cited the tight integration of clinical, research, informatics, and IT teams, as well as overall project buy-in, as ensuring success in translating the model into practice. Ng and Tan[26] identified change management, engagement by a multidisciplinary team, process re-engineering, and investment in workforce skills development as key to implementation success. Jauk et al[25] attributed the enhanced interpretability and understandability of the model output to the display of relevant features. Similarly, Wu et al[30] identified the visibility of each component of the readmission risk score, and users' ability to tailor interventions based on risk factors, as strengths supporting the system's use.
Authors also reported on implementation-related challenges related to ease of use of the tool, user knowledge deficits, and data collection challenges. Altieri Dunn et al[22] reported that manual data entry to generate a risk score costs clinician time and relies on user compliance. Han et al[24] discussed how lack of physician knowledge of these systems can impact uptake into decision making but acknowledged that the study's small sample size (n = 37) limited their ability to determine if the system helped improve provider predictions. Jauk et al[25] described a need for more promotion and training to improve uptake. Wu et al[32] described the implementation challenges of environmental data collected in the home. Strömblad et al[27] described the variability in implementation success as being dependent upon the average duration of surgeries in a particular service (a timeframe metric for their model).
Reporting Alignment with the DECIDE-AI Framework
Themes from the DECIDE-AI checklist were also used to organize the results of the review. As summarized in [Table 2], although DECIDE-AI had not yet been published, items on the DECIDE-AI implementation methods checklist were reported in most studies, including descriptions of the setting in which the AI was evaluated, the clinical workflows/care pathways, and the timing of use. However, how the final supported decision was reached, and by whom, was completely described in only five studies. As summarized in [Table 3], reporting of items on the implementation results checklist was generally less frequent. While nine studies described user exposure to an AI system, only five described significant changes to the clinical workflow or care pathway caused by the AI system; only four contained data on the number of instances of AI use; and only two reported users' adherence to the intended implementation.
Table 2 DECIDE-AI implementation methods checklist items

| Study | Setting in which AI was evaluated | Clinical workflows/care pathways | Timing of use | How the final supported decision was reached and by whom |
|---|---|---|---|---|
| Altieri Dunn et al[22] | ✓ | ✓ | ✓ | By whom but not how |
| Bertsimas et al[31] | ✓ | ✓ | – | – |
| Fenn et al[23] | ✓ | ✓ | ✓ | – |
| Han et al[24] | ✓ | ✓ | ✓ | ✓ |
| Jauk et al[25] | ✓ | ✓ | ✓ | – |
| Møller et al[28] | ✓ | – | ✓ | – |
| Murphree et al[29] | ✓ | ✓ | ✓ | ✓ |
| Ng and Tan[26] | ✓ | ✓ | ✓ | – |
| Strömblad et al[27] | ✓ | ✓ | ✓ | ✓ |
| Wu CT et al[32] | ✓ | ✓ | ✓ | ✓ |
| Wu CX et al[30] | ✓ | ✓ | ✓ | ✓ |
| No. of studies reporting item | 11 | 10 | 10 | 5 |

✓ = reported; – = not reported. Abbreviation: AI, artificial intelligence.
Table 3 DECIDE-AI implementation results checklist items

| Study | User exposure to AI system | Number of instances of AI use | Users' adherence to intended implementation | Significant changes to the clinical workflow or care pathway caused by the AI system |
|---|---|---|---|---|
| Altieri Dunn et al[22] | ✓ | ✓ | ✓ | ✓ |
| Bertsimas et al[31] | – | – | – | – |
| Fenn et al[23] | ✓ | – | – | – |
| Han et al[24] | ✓ | ✓ | – | ✓ |
| Jauk et al[25] | ✓ | – | ✓ | – |
| Møller et al[28] | ✓ | – | – | – |
| Murphree et al[29] | ✓ | – | – | – |
| Ng and Tan[26] | – | ✓ | – | ✓ |
| Strömblad et al[27] | ✓ | ✓ | – | ✓ |
| Wu CT et al[32] | ✓ | – | – | – |
| Wu CX et al[30] | ✓ | – | – | ✓ |
| No. of studies reporting item | 9 | 4 | 2 | 5 |

✓ = reported; – = not reported. Abbreviation: AI, artificial intelligence.
Discussion
Summary of Key Findings
In this descriptive literature review, we identified several trends in the 2021 literature on the clinical implementation of data science applications relevant to nursing. Of the 321 publications screened via full-text review, 11 studies (3.4%) met our inclusion criteria. The absence of any report of real-world implementation of the data science application was by far the most common reason for exclusion at this phase (n = 200). This could be explained by our strict inclusion criteria, i.e., data science applications implemented in real-world practice and related to nursing-sensitive indicators. However, by focusing this year in review on systems that were implemented into practice, we identified that scientific literature on the implementation of data science in real-world health care settings related to nursing-sensitive indicators is lacking. We acknowledge that the pandemic may have influenced the implementation of data science applications and/or the ability of teams to invest time in publication, given the need for health systems and settings to focus resources on COVID-19 care.
Based on the 11 studies reviewed, we identified gaps in model targets, patient populations of interest, and the diversity of settings in which models are implemented. For example, we did not find any studies on the implementation of applications to assess risk for pressure ulcers, falls, or nurse turnover, despite these topics being identified nationally as important safety and nursing quality indicators.[33] [34] [35] All studies focused on adult patient populations, identifying a gap in reporting on the implementation of systems in pediatric care settings. Most systems were deployed in single hospital settings, highlighting an opportunity to report on systems implemented in outpatient and community settings, an ongoing gap in the use of data science in clinical practice and population health.[36]
Nurses were reported as target users in four of the 11 studies but as part of the implementation team in just two. This provides only limited evidence of active participation by nurses in prediction-focused data science projects that were implemented and evaluated in clinical practice. More frequent inclusion of nurses at the onset of study planning and data science system development could lead to more studies that examine nurse-sensitive outcomes and may also drive further adoption of AI in nursing, which has traditionally tended to lag.[37]
Studies included in this review described lessons learned and challenges of implementing data science applications in clinical practice; however, the content, structure, and level of detail varied greatly. A reporting framework such as DECIDE-AI would therefore support more consistent reporting of data science application implementations, so that information could be more easily aggregated to support the establishment of best practices. While a relatively high proportion of the DECIDE-AI implementation methods checklist items were reported in the studies reviewed, authors were less likely to report the recommended implementation results. In particular, users' adherence to the intended implementation and the system's impact on clinical workflows were seldom reported, demonstrating an opportunity for future studies to examine and report on these items. Minimizing disruption to workflow has previously been cited as a key strategy for supporting clinician uptake of data science-driven technologies, so measuring both adherence to these tools and their workflow impact should be a focus of future research.[11] [38]
Limitations
Limitations of this review include that our search of the literature was not exhaustive, so reports of data science implementations disseminated through other mechanisms, such as conference proceedings, were not captured by our search strategy. This was an exploratory and descriptive review, not intended to be systematic or to serve as a precursor to a meta-analysis. We incorporated rigor into our methods by using two reviewers and group consensus at each step of the review process. A quality assessment of included articles was not performed. When initially planning our inclusion and exclusion criteria, we decided to focus on more novel machine learning methods and excluded more traditional linear models, such as logistic regression. In retrospect, if we conduct this type of review again, we would include linear models in the search for implementation reports, because these models often have the added benefit of being more interpretable, an important consideration in the implementation of such tools into clinical practice.[39]
Conclusion
Within the year 2021, few research teams reported on the implementation of data science applications relevant to nursing. Although study teams shared lessons learned, such as the importance of involving interdisciplinary stakeholders and training end users, authors did not provide in-depth descriptions of the training and implementation strategies used in practice. Organizations aiming to implement these tools must therefore do so without specific guidance. The DECIDE-AI framework provides teams with a checklist of minimal reporting items that could facilitate both the appraisal and replication of implementation studies at the intersection of data science and health care. We challenge nursing governing bodies, nursing schools, and health care systems to support nursing informaticists in publishing data science application implementations to expedite the integration of these potentially beneficial systems into clinical care.
Clinical Relevance Statement
Significant advances have been made in the application of data science methods to clinical phenomena with many high-performing models. However, for these models to improve patient outcomes, careful consideration of how these models are implemented in clinical environments is warranted. In this year-in-review, we found very few studies that examined both model performance and implementation efforts.
Multiple Choice Questions
-
What methods were used to identify the literature in this review?
-
Natural language processing
-
Deep learning model
-
Descriptive literature review
-
Systematic literature review
Correct Answer: The correct answer is option c. This review followed a descriptive literature review protocol. We examined data science methods such as natural language processing and deep learning, but these methods were not used to conduct this review. A systematic review follows a more robust protocol.
2. Which of the following are elements of the DECIDE-AI Implementation Methods checklist?
   a. Number of instances of AI use
   b. How the final supported decision was reached and by whom
   c. User exposure to AI system
   d. Users' adherence to intended implementation
Correct Answer: The correct answer is option b. While all elements above are items on the DECIDE-AI framework checklist, only option b relates to Implementation Methods. Options a, c, and d relate to Implementation Results.
Conflict of Interest
The authors declare that they have no conflicts of interest related to this research.
Protection of Human and Animal Subjects
This research does not involve human subjects.
References
- 1 Subrahmanya SVG, Shetty DK, Patil V. et al. The role of data science in healthcare advancements: applications, benefits, and future prospects. Ir J Med Sci 2022; 191 (04) 1473-1483
- 2 Chi W, Andreyeva E, Zhang Y, Kaushal R, Haynes K. Neighborhood-level social determinants of health improve prediction of preventable hospitalization and emergency department visits beyond claims history. Popul Health Manag 2021; 24 (06) 701-709
- 3 Baig MM, GholamHosseini H, Gutierrez J, Ullah E, Lindén M. Early detection of prediabetes and T2DM using wearable sensors and internet-of-things-based monitoring applications. Appl Clin Inform 2021; 12 (01) 1-9
- 4 Zhang X, Pérez-Stable EJ, Bourne PE. et al. Big data science: opportunities and challenges to address minority health and health disparities in the 21st century. Ethn Dis 2017; 27 (02) 95-106
- 5 Hayes CJ, Cucciare MA, Martin BC. et al. Using data science to improve outcomes for persons with opioid use disorder. Subst Abus 2022; 43 (01) 956-963
- 6 Stingone JA, Triantafillou S, Larsen A, Kitt JP, Shaw GM, Marsillach J. Interdisciplinary data science to advance environmental health research and improve birth outcomes. Environ Res 2021; 197: 111019
- 7 Lee TC, Shah NU, Haack A, Baxter SL. Clinical implementation of predictive models embedded within electronic health record systems: a systematic review. Informatics (MDPI) 2020; 7 (03) 25
- 8 Shaw J, Rudzicz F, Jamieson T, Goldfarb A. Artificial intelligence and the implementation challenge. J Med Internet Res 2019; 21 (07) e13659
- 9 Yang C, Kors JA, Ioannou S. et al. Trends in the conduct and reporting of clinical prediction model development and validation: a systematic review. J Am Med Inform Assoc 2022; 29 (05) 983-989
- 10 Schwartz JM, Moy AJ, Rossetti SC, Elhadad N, Cato KD. Clinician involvement in research on machine learning-based predictive clinical decision support for the hospital setting: a scoping review. J Am Med Inform Assoc 2021; 28 (03) 653-663
- 11 Moorman LP. Principles for real-world implementation of bedside predictive analytics monitoring. Appl Clin Inform 2021; 12 (04) 888-896
- 12 Osterman CK, Sanoff HK, Wood WA, Fasold M, Lafata JE. Predictive modeling for adverse events and risk stratification programs for people receiving cancer treatment. JCO Oncol Pract 2022; 18 (02) 127-136
- 13 Topaz M, Pruinelli L. Big data and nursing: implications for the future. Stud Health Technol Inform 2017; 232: 165-171
- 14 Center for Nursing Informatics. Data Science Workgroup Paper. 2019. Accessed June 22, 2023 at: https://nursing.umn.edu/centers/center-nursing-informatics
- 15 Schultz MA, Walden RL, Cato K. et al. Data science methods for nursing-relevant patient outcomes and clinical processes: the 2019 literature year in review. Comput Inform Nurs 2021; 39 (11) 654-667
- 16 Douthit BJ, Walden RL, Cato K. et al. Data science trends relevant to nursing practice: a rapid review of the 2020 literature. Appl Clin Inform 2022; 13 (01) 161-179
- 17 Paré G, Kitsiou S. Methods for literature reviews. In: Lau F, Kuziemsky C. eds. Handbook of eHealth Evaluation: An Evidence-based Approach. Victoria, BC: University of Victoria; 2017: 157-179
- 18 Pedregosa F, Varoquaux G, Gramfort A. et al. Scikit-learn: machine learning in Python. J Mach Learn Res 2011; 12: 2825-2830
- 19 Heslop L, Lu S, Xu X. Nursing-sensitive indicators: a concept analysis. J Adv Nurs 2014; 70 (11) 2469-2482
- 20 Covidence systematic review software. 2022. Accessed May 19, 2023 at: www.covidence.org
- 21 Vasey B, Nagendran M, Campbell B. et al; DECIDE-AI expert group. Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ 2022; 377: e070904
- 22 Altieri Dunn SC, Bellon JE, Bilderback A. et al. SafeNET: initial development and validation of a real-time tool for predicting mortality risk at the time of hospital transfer to a higher level of care. PLoS One 2021; 16 (02) e0246669
- 23 Fenn A, Davis C, Buckland DM. et al. Development and validation of machine learning models to predict admission from emergency department to inpatient and intensive care units. Ann Emerg Med 2021; 78 (02) 290-302
- 24 Han P, Lee SH, Noro K. et al. Improving early identification of significant weight loss using clinical decision support system in lung cancer radiation therapy. JCO Clin Cancer Inform 2021; 5: 944-952
- 25 Jauk S, Kramer D, Avian A, Berghold A, Leodolter W, Schulz S. Technology acceptance of a machine learning algorithm predicting delirium in a clinical setting: a mixed-methods study. J Med Syst 2021; 45 (04) 48
- 26 Ng R, Tan KB. Implementing an individual-centric discharge process across Singapore public hospitals. Int J Environ Res Public Health 2021; 18 (16) 8700
- 27 Strömblad CT, Baxter-King RG, Meisami A. et al. Effect of a predictive model on planned surgical duration accuracy, patient wait time, and use of presurgical resources: a randomized clinical trial. JAMA Surg 2021; 156 (04) 315-321
- 28 Møller JK, Sørensen M, Hardahl C. Prediction of risk of acquiring urinary tract infection during hospital stay based on machine-learning: a retrospective cohort study. PLoS One 2021; 16 (03) e0248636
- 29 Murphree DH, Wilson PM, Asai SW. et al. Improving the delivery of palliative care through predictive modeling and healthcare informatics. J Am Med Inform Assoc 2021; 28 (06) 1065-1073
- 30 Wu CX, Suresh E, Phng FWL. et al. Effect of a real-time risk score on 30-day readmission reduction in Singapore. Appl Clin Inform 2021; 12 (02) 372-382
- 31 Bertsimas D, Boussioux L, Cory-Wright R. et al. From predictions to prescriptions: a data-driven response to COVID-19. Health Care Manage Sci 2021; 24 (02) 253-272
- 32 Wu CT, Li GH, Huang CT. et al. Acute exacerbation of a chronic obstructive pulmonary disease prediction system using wearable device data, machine learning, and deep learning: development and cohort study. JMIR Mhealth Uhealth 2021; 9 (05) e22591
- 33 Gallagher RM, Rowell PA. Claiming the future of nursing through nursing-sensitive quality indicators. Nurs Adm Q 2003; 27 (04) 273-284
- 34 Montalvo I. The National Database of Nursing Quality Indicators (NDNQI). Online J Issues Nurs 2007; 12 (03)
- 35 American Nurses Association. Guidelines for data collection on the American Nurses Association's national quality forum endorsed measures: nursing care hours per patient day; skill-mix; falls; falls with injury. 2010
- 36 Monsen KA, Austin RR, Jones RC, Brink D, Mathiason MA, Eder M. Incorporating a whole-person perspective in consumer-generated data: social determinants, resilience, and hidden patterns. Comput Inform Nurs 2021; 39 (08) 402-410
- 37 Shang Z. A concept analysis on the use of artificial intelligence in nursing. Cureus 2021; 13 (05) e14857
- 38 Watson J, Hutyra CA, Clancy SM. et al. Overcoming barriers to the adoption and implementation of predictive modeling and machine learning in clinical care: what can we learn from US academic medical centers? JAMIA Open 2020; 3 (02) 167-172
- 39 Stiglic G, Kocbek P, Fijacko N, Zitnik M, Verbert K, Cilar L. Interpretability of machine learning-based prediction models in healthcare. Wiley Interdiscip Rev Data Min Knowl Discov 2020; 10 (05) e1379
Publication History
Received: 29 November 2022
Accepted: 03 May 2023
Accepted Manuscript online: 07 May 2023
Article published online: 02 August 2023
© 2023. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany