Keywords
Medical informatics - International Medical Informatics Association - Yearbook - Decision
Support Systems
Introduction
This paper serves as the synopsis of the decision support section of the International
Medical Informatics Association (IMIA) Yearbook. It complements the survey paper authored
by Jankovic and Chen [1], in which the authors seek to identify the features of clinical decision support systems
(CDSSs) that may contribute to the observed clinician burnout. The aim of the synopsis
is to summarize recent research in the domain of decision support and to select the
best papers published in this field in 2019. This literature review focused on research
works related to CDSSs and computerized provider order entry (CPOE) systems.
The synopsis is organized as follows: the next section summarizes the process for
selecting the best papers on the decision support topic; the following section presents
the results of this year’s selection process, and the last section comments on the contributions
of the three best papers, as well as notable research works in the field identified
during the whole process.
Paper Selection Method
A comprehensive literature search on topics related to CDSSs and CPOE systems was
performed to identify candidate best papers in two bibliographic databases, the PubMed/Medline
database (from the US National Center for Biotechnology Information) and the Web of
Science® (WoS, from Clarivate Analytics). PubMed is centered on the biomedical and
life sciences literature whereas WoS covers a wider scope of all scientific domains,
including biomedicine and life sciences. Both databases were searched with similar
queries, tailored to the specificities of each one, targeting journal articles published
in 2019, written in English, and related to the aforementioned topics. The adopted
strategy was the same as that used in prior years [2] and is based on four exclusive queries that return four disjoint citation subsets.
The first query (QPub_plain) is based on a plain-text search in PubMed titles and abstracts using keywords. The
second query (QPub_indexed) relies on the PubMed indexing scheme using MeSH terms and results are made exclusive
of the previous set. The third one (QWoS_restricted) is based on a plain-text search in WoS restricted to the two research areas “Medical
Informatics” and “Health Care Sciences & Services”. The fourth query (QWoS_filtered) is based on the same plain-text search used in WoS but filtered by non-relevant
research areas (e.g., Archeology, Dance, Zoology, etc.) and the two research areas of the previous query.
Of note, the two WoS queries select only papers not indexed in PubMed, since PubMed-indexed
papers are expected to be retrieved by the two PubMed queries.
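The disjoint-subset strategy described above can be sketched with basic set operations; the identifiers below are invented for illustration and do not reflect the actual retrieval code.

```python
# Illustrative sketch of the four exclusive queries (identifiers are invented).
# Each later subset is made exclusive of the earlier ones, so the size of the
# union is simply the sum of the subset sizes.
q_pub_plain = {"pmid:101", "pmid:102", "pmid:103"}      # plain-text PubMed search
q_pub_indexed = {"pmid:102", "pmid:104"} - q_pub_plain  # MeSH-based, minus the first set
q_wos_restricted = {"wos:201"}  # WoS search limited to the two relevant research areas
q_wos_filtered = {"wos:202"}    # same WoS search, filtered by non-relevant areas

subsets = [q_pub_plain, q_pub_indexed, q_wos_restricted, q_wos_filtered]
unique_refs = set().union(*subsets)

# Disjointness guarantees no double counting across the four result sets.
assert len(unique_refs) == sum(len(s) for s in subsets)
```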
A first review of the four subsets of retrieved citations was performed by the two
section editors to select 15 candidate best papers. Following the IMIA Yearbook protocol,
these candidate best papers were then individually reviewed and rated by both section
editors, the chief editor of the Decision Support section, and external reviewers
from the international Medical Informatics community. Based on the reviewers’ ratings
and comments, the Yearbook editorial committee then selected the best papers of the
year in the decision support domain.
Review Results
The 2019 literature search was performed on January 13, 2020. A total of 1,378
unique references were obtained, distributed as follows: 1,113 for QPub_plain, 130 for QPub_indexed, 19 for QWoS_restricted, and 169 for QWoS_filtered, yielding sub-totals of 1,243 references from PubMed, and 188 from WoS. Compared
to the previous year, the global query retrieved 230 more papers. After a first screening
performed independently by both section editors based on the titles and abstracts
of the papers, the 115 papers that neither editor had rejected were discussed jointly
to achieve a final selection of 15 candidate best papers. After the external review
of these 15 articles, the editorial committee finally selected three of them as best
papers for 2019 [3], [4], [5] ([Table 1]). They are discussed in the next section, and summaries of their contents are available
in the Appendix.
Table 1
Best paper selection of articles for the IMIA Yearbook of Medical Informatics 2020
in the section ‘Decision Support’. The articles are listed in alphabetical order of
the first author’s surname.
Section: Decision Support

▪ Hendriks MP, Verbeek XAAM, van Vegchel T, van der Sangen MJC, Strobbe LJA, Merkus JWS, Zonderland HM, Smorenburg CH, Jager A, Siesling S. Transformation of the National Breast Cancer Guideline Into Data-Driven Clinical Decision Trees. JCO Clin Cancer Inform 2019;3:1-14.

▪ Kamišalić A, Riaño D, Kert S, Welzer T, Nemec Zlatolas L. Multi-level medical knowledge formalization to support medical practice for chronic diseases. Data & Knowledge Engineering 2019;119:36–57.

▪ Khalifa M, Magrabi F, Gallego B. Developing a framework for evidence-based grading and assessment of predictive tools for clinical decision support. BMC Med Inform Decis Mak 2019;19(1):207.
Discussion and Outlook
In the first paper, Hendriks et al. [3] propose an approach to modeling clinical practice guidelines
that builds on existing approaches but is conducted systematically so as to scale
to the representation of complex guidelines. They promote the formalism
of clinical decision trees (CDTs) as they are both clinically interpretable by healthcare
professionals and computer-interpretable, thus suitable for implementation in data-driven
CDSSs. The disambiguation of textual guidelines is supported first by the formal,
unequivocal, specification of data items used as decision criteria using international
coding systems to enforce interoperability, and second by the representation of guideline
knowledge as CDTs. The method is applied to the Dutch breast cancer guidelines. Sixty
CDTs were built, involving a total of 114 data items, of which 11 could not be
linked to standard terminologies. The authors report that certain criteria were ambiguous,
being either subjective or defined in multiple ways. The resulting knowledge base
was implemented in a decision support application where it can be interactively browsed
or automatically executed. By modeling guidelines in such a way, this work is a step
forward in the sharing of encoded knowledge.
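As a rough illustration of why CDTs are both human- and computer-interpretable, the following minimal sketch encodes a decision node that tests one formally coded data item; the data items, thresholds, and recommendations are invented for illustration and are not taken from the Dutch guideline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CDTNode:
    """One node of a clinical decision tree: a test on a coded data item,
    or a leaf carrying a recommendation (all values here are hypothetical)."""
    data_item: str = ""                   # formally coded decision criterion
    threshold: float = 0.0
    yes: Optional["CDTNode"] = None       # branch when the criterion is met
    no: Optional["CDTNode"] = None
    recommendation: Optional[str] = None  # set only on leaves

def evaluate(node: CDTNode, patient: dict) -> str:
    """Walk the tree with the patient's coded data until a leaf is reached."""
    if node.recommendation is not None:
        return node.recommendation
    branch = node.yes if patient[node.data_item] >= node.threshold else node.no
    return evaluate(branch, patient)

# Hypothetical single-criterion tree.
tree = CDTNode(
    data_item="tumour_size_mm", threshold=20.0,
    yes=CDTNode(recommendation="discuss adjuvant therapy"),
    no=CDTNode(recommendation="routine follow-up"),
)
print(evaluate(tree, {"tumour_size_mm": 25.0}))  # -> discuss adjuvant therapy
```

Because each `data_item` would, in the authors' approach, be bound to an international coding system, the same tree can be read by a clinician and executed against patient data.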
In the second paper, Kamišalić et al. [4] tackle the issues linked to the formalization of the medical processes used for
managing chronic diseases and their execution in CDSSs. They analyzed the decision-making
dimensions of the therapeutic management of chronic diseases, like those known to
increase the cardio-vascular risk, and identified three basic levels: therapy strategy,
dosage adaptation, and intolerance management. To handle these different aspects consistently,
they propose a formalism called extended Timed Transition Diagram (eTTD). With eTTDs,
they illustrate the multilevel and fine-grained modeling required to capture the contents
of arterial hypertension management guidelines. This detailed demonstration of how
procedural knowledge for hypertension management can be formalized to develop a CDSS
could certainly be used in other medical domains.
The third paper, by Khalifa et al. [5], presents a conceptual and practical framework to help assess confidence in predictive
tools. GRASP, for Grade and Assess Predictive Tools, is both a method to look for
evidence from the published literature and an analysis grid. It standardizes the assessment
of the available literature associated with a predictive tool and the grading of its
level of proof. Three phases of evaluation are considered: (i) before the implementation
of the tool to assess both its internal and external validity, (ii) during the implementation
to assess its potential effect and usability, and (iii) after the implementation to
assess its effectiveness and safety. In each phase, the level of evidence can be assessed
from the study design. A qualitative conclusion summarizes the direction of evidence
(positive, negative, mixed). This grid can be considered similar to existing grids,
for instance the CONSORT statement for clinical trials. However, it provides a rigorous
methodology for the critical appraisal of predictive tools and could be extended to
all kinds of CDSSs. It might be a useful tool to extend the evidence-based culture
in the field of medical informatics.
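The phase-based grading idea can be sketched as follows; the phase names, numeric evidence levels, and data layout are assumptions for illustration, not the published GRASP instrument.

```python
# Hedged sketch: summarize, per evaluation phase, the strongest level of
# evidence reported and its direction (lower level number = stronger evidence).
# Tuples are (phase, evidence_level, direction); all values are invented.
def grade(studies):
    summary = {}
    for phase, level, direction in studies:
        if phase not in summary or level < summary[phase][0]:
            summary[phase] = (level, direction)
    return summary

studies = [
    ("pre-implementation", 2, "positive"),
    ("pre-implementation", 1, "mixed"),
    ("post-implementation", 3, "positive"),
]
print(grade(studies))
# -> {'pre-implementation': (1, 'mixed'), 'post-implementation': (3, 'positive')}
```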
Besides the three best papers selected for the Decision Support section of the 2020
edition of the IMIA Yearbook, several other works retrieved from the literature review
deserve to be cited. Some of them deal with the personalization of decisions. Laleci
et al. [6] propose a scientific and technical approach to develop personalized care plans that
comply with clinical practice guidelines for the management of complex polypathology
situations. Jafarpour et al. [7] propose a solution to dynamically manage the conflicts that can arise in this type
of complex context. Ben Souissi et al. [8] introduce the use of health information technology involving multiple-criteria decision
making to support the choice among antibiotic alternatives. Interestingly, other works
promote the creation and sharing of operational knowledge bases as exemplified by
Hendriks et al. [3]. For instance, Huibers et al. [9] transform the textual STOPP/START criteria into unambiguous definitions mapped to
medical terminologies. Canovas et al. [10] formalize EUCAST expert rules as an ontology and production rules to detect antimicrobial
therapies at risk of failure. Müller et al. [11] propose an open diagnostic knowledge base that can compete with commercial ones.
Replacing humans is another research topic: Spänig et al. [12] work on two aspects of virtualizing a doctor, namely the automatic acquisition of data through
sensors and speech recognition, and the automation of diagnostic reasoning. Rozenblum
et al. [13] propose a machine learning method to generate clinically valid alerts that detect
errors in prescriptions.
Acceptability of CDSSs is another key point. Kannan et al. [14] propose a method for designing a CDSS to best meet a precisely specified and assessable
user purpose. Careful alert design may also prevent rejection of CDSSs by caregivers: Fernandes
et al. [15] created algorithms able to aggregate, filter, and reduce the notifications delivered
to healthcare professionals, while Amrose et al. [16] studied, in real-life settings, the impact of alerts on users and the actions
they triggered. Finally, it is always interesting to obtain varied evaluation results
of controversial CDSSs. In this respect, Kim et al. [17] evaluated Watson for Oncology in thyroid carcinoma and reported a concordance rate
with local practices considered as too low to adopt the tool.
As evidenced by the number and the variety of works around decision support, research
in the field is very active. This year’s selection highlighted pragmatic works that
promote the transparency and sharing of the knowledge bases used by decision support
tools, as well as the grading of their utility. The ultimate goal is for users to
trust such tools and, consequently, use them.