CC BY-NC-ND 4.0 · Yearb Med Inform 2022; 31(01): 317-322
DOI: 10.1055/s-0042-1742491
History of Medical Informatics

Ethics in the History of Medical Informatics for Decision-Making: Early Challenges to Digital Health Goals

Casimir A. Kulikowski
Department of Computer Science, Rutgers University, USA

Summary

Background: Inclusive digital health prioritizes public engagement through digital literacies and internet/web connectivity to advance and scale healthcare equitably through informatics technologies. This is badly needed, largely desirable, and uncontroversial. Historically, however, medical and healthcare practices and their informatics processes have assumed that individual clinical encounters between practitioners and patients are the indispensable foundation of clinical practice. This assumption has been dramatically challenged by the expansion of digital technologies, their interconnected mobility, virtuality, and surveillance informatics, and by the vastness of the individual and population data repositories that enable and support them. This article is a brief historical commentary emphasizing critical ethical issues about decisions in clinical interactions or encounters raised in the early days of the field. These questions, raised eloquently by François Grémy in 1985, have become urgently relevant to the equity/fairness, inclusivity, and unbiasedness desired of today's pervasive digital health systems.

Objectives: The main goal of this article is to highlight how the personal freedoms of choice, values, and responsibilities arising in relationships between physicians and healthcare practitioners and their patients in the clinical encounter can be distorted by digital health technologies which focus more on efficiency, productivity, and scalability of healthcare processes. Understanding the promise and limitations of early and current decision-support systems and the analytics of community or population data can help place into historical context the often exaggerated claims made today about Artificial Intelligence and Machine Learning “solving” clinical problems with algorithms and data, downplaying the role of the clinical judgments and responsibilities inherent in personal clinical encounters.

Methods: A review of selected early articles in medical informatics is related to current literature on the ethical issues and technological inadequacies involved in the design and implementation of clinical systems for decision-making. Early insights and cautions about the development of decision support technologies raised questions about ethical responsibilities in clinical encounters, where freedom of personal choice can be easily limited by the constraints of information processing and by reliance on prior expertise frequently driven more by administrative than by clinical objectives. These anticipated many of the deeper ethical problems that have arisen since then in clinical informatics.

Conclusions: Early papers on ethics in clinical decision-making provide prescient commentary on the dangers of not taking into account the complexities of individual human decision making in clinical encounters. These dangers include excessive reliance on data and experts, and oversimplified models of human reasoning, all of which persist and have become amplified today as urgent questions about how inclusivity, equity, and bias are handled in practical systems, where the ethical responsibilities of individual patients and practitioners intertwine with those of groups within professional or other communities and are central to how clinical encounters evolve in our digital health future.



1 Introduction

While issues of ethics in medical informatics related to clinical decision-making came up in discussions at an International Federation for Information Processing-Technical Committee 4 (IFIP-TC4) meeting in Dijon in 1976 [[1]], the presentation by François Grémy at the IFIP-IMIA International Working Conference on Computer-Aided Medical Decision Making, held about a decade later in Prague from 30 September to 4 October, 1985, raised unusual and prescient commentaries about the contrasting ethics of individuals vs. communities being key to understanding the challenges to individual choice and freedom that can arise from informatics systems involved in medical decision making [[2]]. Many of these same challenges have become even more serious today, with digital health advancing through distributed, mobile web-based systems driven primarily by economic pressures and related administrative workflow considerations, which frequently downplay the responsibilities and actions of practitioners working on a basis of trust with patients through the personal and unique events of individual clinical encounters. Pervasive information systems, with corresponding health literacy, are expected to promote equity through a more inclusive distribution of healthcare resources to communities and populations [[3]]. However, this can come at the cost of individual patients and clinical practitioners being “reducible” to data points for analytical purposes. The clinical encounter itself can be reduced to disembodied data interactions through networks of abstracted, impersonal “information spaces” unless new ways of dealing ethically with what has been termed “human-data assemblages” [[4]] can be developed.

The World Health Organization recognizes the limitations of digital health for health care systems in its guideline published in 2019: “The key aim of this guideline is to present recommendations based on a critical evaluation of the evidence on emerging digital health interventions that are contributing to health system improvements, based on an assessment of the benefits, harms, acceptability, feasibility, resource use and equity considerations”. This guideline urges readers to recognize that “digital health interventions are not a substitute for functioning health systems, and that there are significant limitations to what digital health is able to address” [[5]].

The challenges to individual and community ethical practices of medicine, and their generalizations to digital health [[7]], have become more acute than ever as a result of the dramatic leaps in digital technologies pervasively influencing all aspects of human life since Grémy quoted Claude Bernard over 30 years ago about medicine being “a science forced to practice before it is ready” [[6]]. Since then there has been a rapid acceleration of scientific advances enabling increasing understanding of the manifold illnesses and complications afflicting humans, strongly and effectively enabled by bioinformatics methods advancing investigations and experiments into the foundational biomolecular, biomedical, and social determinants and effects of illnesses in individuals and populations [[8], [9]]. With results for translational medicine still in their infancy, the clinical implications of much of what has been learned are as challenging as ever [[10]]. Thus, Claude Bernard's comment from the 1840s and Grémy's reminder from the 1980s still apply today, and do so even more pointedly now because of the high expectations arising from the broad spectrum of interactions of the scientific insights involved. The complexities of interactions arising from often uninformed expectations about possible clinical impacts of today's informatics and Artificial Intelligence (AI) technologies contribute significantly to aggravating such challenges – especially the ethical ones surrounding the thorny questions of individual clinical decision-making applicability arising from genomic and multi-species experiments, simulations, and population- and community-based clinical trials, research studies, and meta-analyses such as those of the Cochrane Collaboration [[11]].

The above considerations make an interpretive review of Grémy's paper more than a historical curiosity, the more so because of what can be inferred from the author's unique interdisciplinary experiences bridging mathematics, physics, medicine, and biostatistics as he sought to develop insights about the early emerging field of medical informatics, which he so strongly influenced in its international evolution [[12]]. The implications for equity and inclusivity of digital health become apparent from the comments in the paper, which is unique in its emphasis on the individual and personal nature of the clinical encounter in defining the very basis of scientific approaches to medicine (and, by extension, healthcare practices more generally) as founded in the ethical need to recognize the free will or liberty of individuals to define their interpersonal interactions. These ethical freedom requirements of Hippocratic medicine are contrasted with the way in which impersonal, abstracted rules and regulations may be applied by members of professional communities with certain interests and according to their ideologies. These frequently treat people as objects – or, in the more recent data science perspective, as “data points” to be processed, analyzed, and interacted with in an infosphere or metaverse. The contrast between the needs of individual freedoms and the constraints of community-imposed rules codified into algorithmically-implemented requirements in software systems is identified by Grémy as exemplifying types of “terrorisms” that frequently exaggerate the necessity of imposing such rules on the conduct of medical encounters, which ought to ethically respect both the patient and the practitioner in their individualities and freedoms, subject to the wisdom of Hippocratic advice dating back more than 2,500 years.
This unusually frank and blunt characterization of the ultimate force of terror for controlling people through authority, based on different types of economic, social, and professionally imposed constraints, is contrasted with its opposite: a patient-centered “libertarian terrorism” that exaggerates respect for individual rights and liberty. Such considerations in 2020-2022 are acutely relevant as a result of two very different, but arguably “terrorism-related”, types of medical ethics challenges to inclusivity and equity in clinical practice. One is the set of so-called “anti-vax” community-involved actions, frequently amplified by manipulative politicians who justify such exaggerations by the uses of what Grémy identifies as “philosophical terrorisms”. In the last two years, such exaggerations have been on full display during the often violent protests arising in reaction to public health measures designed to control the spread of highly infectious and often severe viral variants during the COVID-19 pandemic [[13], [14]]. The second is the set of rapidly advancing legal challenges to the freedom of individual women to control their own reproductive health in the United States, the culmination of decades-old political campaigns by various anti-abortion groups, some of which have also been known to advance their cause by encouraging acts of criminal terrorism [[15]].

We believe that the digital health so fervently desired by most of us individually, for ourselves and those whom we love and are close to, is increasingly threatened by the imposition of ethically questionable social pressures that constrain our individual “health liberty” choices through a toxic mix of the contradictory “terrorisms” identified by Grémy:

  1. Exaggerations of personal individual choices of patients and physicians which can derail their respectful and reasonable interactions under Hippocratic guidance criteria;

  2. Exaggerations of economic constraints that deprive patients and physicians of responsibility and freedom in the clinical encounter through the means of informatics systems financially and legally imposed or constrained by corporate or government authorities;

  3. Philosophical – including religious – constraints which may be exaggerations of “very respectable reflections on the meaning of human life (e.g., discussions about human reproduction and abortion)”;

  4. Methodological constraints which Grémy characterizes as the “severe judgment of statisticians, epidemiologists, decision analysts, & on medical action” [[2], p. 17].

In short, Grémy's warnings anticipate the ways in which a mixture of professional practices enforcing community-based economic, ideological and methodological biases, and the incredibly pervasive influence of information technologies and social media, when politically driven by extreme ideologies of individualism, can become weaponized through a spectrum of exaggerated “libertarian terrorism”, “economical terrorism”, “philosophical – including religious – terrorism”, and “methodological terrorism”, which enable and amplify the control of individuals and groups of people by those in power, and significantly threaten the mutual respect and liberty that individuals have enjoyed as practitioners in solidarity with patients when following the best of Hippocratic-inspired traditional ethical practices in the clinical encounter.



2 Medical Ethics and Informatics in Digital Health for Clinical Decision-Making

The practical, philosophical and religious issues involved in discussions of ethical principles in the practice of medicine, nursing and implicitly healthcare more generally have a long history in most major cultures of the world, as described in great detail in books such as The Cambridge World History of Medical Ethics [[16]], Beauchamp and Childress' Principles of Biomedical Ethics [[17]], Benjamin and Curtis' Ethics in Nursing [[18]], Murphy's Underpinnings of Medical Ethics [[19]] and others [[20]]. The many ethical issues related to the introduction of clinical informatics into the practices of both medicine and nursing have been recognized and discussed for at least the last three decades [[21] [22] [23] [24]].

In medical AI applications for clinical decision-making, intended to summarize expert knowledge and serve as consultants to less specialized or experienced clinicians or medical students [[25] [26] [27] [28]], ethical issues were raised early by Szolovits and Pauker [[29]] in an article provocatively entitled “Computers and Clinical Decision-Making: Whether, How, and For Whom?”. In this article, they point out two major obstacles for such AI consultation approaches: “first, acceptance as a means of improving care or lightening the physicians' load, and second, acceptance of the advice provided in an individual consultation, especially if that advice runs counter to the physician's own intuition”. The assumption was, however, that the physician would retain ultimate responsibility to accept or reject advice from a system. After discussing which knowledge representation and inference methodologies ought to be incorporated in consultation programs, the authors broach the issue of whether such a computer program can really be worthwhile and whether it ought to be released. They propose a hierarchical evaluation process of testing against a database or panel of cases, which can then be used to test any modifications made to improve the program's performance. Starting with prototypical cases, they assume an AI system could be built to search systematically for inconsistencies in the clinical program, first retrospectively comparing program performance to that of unaided physicians, then prospectively, and finally in a controlled clinical trial against a panel of experts “blinded as to which decision-maker they are evaluating”. They caution that in this last phase it is important to avoid the “Hawthorne effect” of physicians improving their decision-making performance when they know they are being scrutinized.
All these were reasonable recommendations, but involve some critical assumptions which still present problematic issues for population-based computer decision-support aids today [[30]]:

  1. The categorizations of diagnostic and treatment criteria are well-defined, consistent, unchanging over time, and comparable for all the practitioners and for the expert or “knowledge-based” program being evaluated;

  2. Data samples are representative of the clinical problems afflicting the very different patients from different practices, environments, and genomic and developmental backgrounds;

  3. The knowledge-bases of decision-rules or their contextual “frames” defining the clinical meaning of diagnostic, prognostic, or treatment criteria are consistent, or “aligned” in some way;

  4. The probabilistic or heuristic inference and action rules of the program are somehow comparable in terms of end-point (outcome) classification/prediction performance to that of the expert physicians;

  5. Aggregations or groupings of performance data and groupings of clinical hypotheses and actions are somehow consistent and comparable also... and do not require dynamic re-definition as clinical problem-solving progresses (or in today's terminology – that static ontologies suffice);

  6. Considerations of visualization or use of metaphors in clinical reasoning are not essential; ...and many more.

Most of the points made in the paper [[29]] were related to well-established criteria from biostatistical and epidemiological studies, and routinely considered in earlier statistical and pattern recognition models of decision-making, understood to be mathematical models for analysis that might capture elements of clinical cognition or decision-making – which were to be used only as adjuncts to the judgment of expert practitioners. This point is made abundantly clear there: “But, even in its early days, it was recognized that computer decision aids, as any new tool employed in medicine, must be shown to be safe and effective before it can be ethically and legally sanctioned for general use” [[29], p.1225]. The statistical approaches to evaluating clinical decision-making systems employing probabilistic [[1], [31] [32] [33] [34] [35]] and pattern recognition [[36], [37]] methods, both before and after the knowledge-based first generation of AI in medicine [[30]], all concentrated on populations of patients matching the diagnostic or therapeutic criteria serving as end-points or outcomes for measuring the performance of the system under study. The major challenge facing researchers was to make their expert systems reliable, and sufficiently flexible and generalizable at a reasonable updating cost, so that they could scale up for realistic clinical practice with informed and ethical practitioners. Efforts to use statistical methods for generalizing and specializing rule-based expert systems worked for relatively well-defined domains of clinical knowledge [[38]], and for prototype aids complementing commercial clinical laboratory instruments [[39], [40]], where responsible physicians would always remain the ultimate judges for acceptance in any clinical encounter with patients. This presented major practical problems for widespread adoption of computerized decision support aids then, and it remains so today.
In a paper from 2014, Bayesian networks were shown to be useful for structuring and learning from records of patient encounters in an emergency room setting, creating adaptive order menus that summarize past clinician behavior from local empirical data to enhance decision support [[41]]. However, the authors also emphasize that this approach, which relies on Condorcet's jury theorem about expecting statistical improvement from averaging independent decision-makers' judgments, applies only to models of small decision-making problems and to fairly high-level medical decisions, since otherwise “crowd decisions can become crowd madness when decision-makers are not truly independent but are influenced by some outside entity”, quoting [[42]]. This comment is particularly relevant to today's problems with digital health inclusivity, where the desirable goals of group-level inclusion and equity can conflict with the individual constraints and responsibilities that need to apply to individual patients interacting with individual practitioners.
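As a purely illustrative aside (not drawn from the cited papers), the statistical core of Condorcet's jury theorem invoked above can be sketched in a few lines of Python: for independent decision-makers, each correct with probability p > 0.5, majority-vote accuracy grows with panel size, while for p < 0.5 it shrinks; and a fully correlated panel, every member following one “outside entity”, gains nothing from its size.

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters,
    each individually correct with probability p, decides correctly.
    n is assumed odd so that no tie can occur."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Condorcet's jury theorem: for p > 0.5, accuracy rises toward 1
# as the panel grows.
for n in (1, 5, 25, 101):
    print(n, round(majority_accuracy(n, 0.6), 3))

# For p < 0.5 the effect reverses (accuracy falls with n), and a
# fully correlated panel -- everyone copying a single influencer --
# has accuracy p regardless of panel size, which is the "crowd
# madness" caveat quoted above.
```

The theorem's benefit thus hinges entirely on the independence assumption, which is exactly what the authors of [[41]] caution can fail in clinical settings.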

A major argument in Grémy's early paper anticipating such cautions is a critique both of the expert knowledge-based focus of much of the 1970s and 1980s AI systems, and of taking too literally the application of statistical models of decision-making as a methodological basis for understanding individual clinical encounters. In this way, his views agree with the statement by Claude Bernard: “Men who have excessive faith in their theories or ideas are not only ill prepared for making discoveries; they also make very poor observations. Of necessity, they observe with a preconceived idea, and when they devise an experiment, they can see, in its results, only a confirmation of their theory. In this way they distort observation and often neglect very important facts because they do not further their aim.” [[6]]. Grémy counterposes the need of the patient to participate with the physician by demanding information “to take actively part, as much as possible, of the decision process which concern his health”. In addition, he emphasizes the patient's right not to be simply an object of observation, but rather to be encouraged by the practitioner to participate in solidarity as a “donator [sic] of his information for teaching and/or research”. Linking today's patient encounter with yesterday's as part of an experimental protocol is encouraged as a way of compensating the patient's “loss of liberty” by the “meticulousness and seriousness of the follow-up”. The paper concludes with a strong warning: “Let us remember that our methods are no more than an aid, a tool for clarifying alternatives, for precising [sic] the judgment criteria. But they do not make the decision. This one depends on the system of values which escapes from our own competence”.
Such a caution is particularly important given the facile way in which AI and Machine Learning (ML) are today being touted as means of scaling up and improving the practices of digital health, all the more so during the present COVID-19 pandemic [[43]]. The inherent biases and lack of explainable justifications of reasoning for AI/ML results are being increasingly recognized and reported [[44] [45] [46]], adding to the more general concerns about the ethical, automation bias, and safety issues surrounding computers and informatics for clinical decision-making, which have been increasingly addressed over the past years [[47] [48] [49]]. Of particular concern is the possibility that digital health systems may lead to new kinds of harm [[50]], potentially becoming a new type of iatrogenic illness [[51]]. A recent review article on challenges related to patient safety arising from health information technology [[52]] identifies nine major challenges across three Information Technology (IT) lifecycle stages: design and development; implementation and use; and monitoring, evaluation, and optimization. This builds on an earlier systematic review covering the effects of health IT on delivery of care and patient outcomes, identifying reported problems and harms across 34 studies from 6 countries worldwide [[53]]. Major sources of IT-related clinical errors involved issues of system functionality, poor user interfaces, fragmented displays, and delayed care delivery.

Yet, despite the above, there is no question that well-developed criteria for the applications of informatics in digital health have great potential for improving overall healthcare policy guidance, as a major study published in the journal Nature recently reported in its focus on digital inclusion as a major social determinant of health [[54]].



3 Conclusions: Historical Precedents for Caution about the “Terrorisms” through which Digital Health Can Harm Ethical Clinical Encounters

The vulnerabilities and shortcomings of algorithmically-driven decisions for healthcare arise from failing to take into account the individual human clinical encounter and the ethical interaction between individuals in which practitioners are guided by the criteria of Hippocratic practice [[55]], as emphasized by François Grémy in his 1985 article. Current trends toward exaggerated promises for the capabilities of unexplainable AI methods can lead to bias and a lack of reliability and equity of treatment in clinical decision-making, regardless of the efficiencies and scalability expected of them for enterprise-level productivity. Informed clinical judgment and personalized medical or nursing care and responsibility are scarcely, if at all, taken into account by abstracted and reductionistic methodologies. While these can be useful for analyzing clinical data, the “metaverse ecologies” being spawned to extract administratively useful but frequently clinically irrelevant information from individual patients become at best a distraction, and at worst a tool for inflicting the frequently conflicting “terrorisms” (hyper-individual vs. community-driven economic, philosophical, and professional impositions) identified by Grémy. This contrasts with the need for tools that foster and augment the “freedoms” of individual intelligent capabilities through literacy, knowledge, and education, which informatics systems are ideally intended to encourage so as to promote inclusivity and equity across patient groups. Since it is recognized philosophically that machine-based systems cannot have ethics of their own, despite attempts to postulate how these might be constructed [[56]] and the very deep and extensive philosophical debates that have followed [[57]], digital health systems need to be designed to support a renewal of close, re-personalized relations between caregivers and patients, with the goal of individualizing trustworthy care for every patient.
This is tragically often neglected in today's technology-and-business-driven practices despite the early warnings discussed in the present paper.



No conflict of interest has been declared by the author(s).

Acknowledgments

The author wishes to thank his colleague Victor Maojo, M.D., Ph.D., and the reviewers and editors of the IMIA Yearbook for their comments and critiques of an earlier draft of this paper. All remaining shortcomings are the responsibility of the author.

  • References

  • 1 Lusted LB. Clinical Decision-making. In: de Dombal FT, Grémy F, editors. Decision-Making and Medical Care. Amsterdam: North Holland; 1976. p. 77-98.
  • 2 Grémy F. Ethics of the person and of the community in medical decision making. In: van Bemmel JH, Grémy F, Zvarova J, editors. Medical decision making: diagnostic strategies and expert systems. Amsterdam: North Holland; 1985. p. 12-18.
  • 3 Azzopardi-Muscat N, Sorensen K. Towards an equitable digital public health era: promoting equity through a health literacy perspective. Eur J Public Health 2019;29(Supplement_3):13-7.
  • 4 Lupton D. How do data come to matter? Living and becoming with personal data. Big Data and Society 2018:1-11.
  • 5 WHO guideline: recommendations on digital interventions for health system strengthening. Geneva: World Health Organization; 2019. Licence: CC BY-NC-SA 3.0 IGO.
  • 6 Bernard C. An introduction to the study of experimental medicine. Dover Books (English Translation); 1961.
  • 7 Shaw JA, Donia J. The sociotechnical ethics of digital health: A critique and extension of approaches from bioethics. Front Digit Health 2021 Sep;3:725088.
  • 8 Maojo V, Kulikowski CA. Medical informatics and bioinformatics: integration or evolution through scientific crises? Methods Inf Med 2006;45(5):474-82.
  • 9 Kuznetsov V, Lee HK, Maurer-Stroh S, Molnár MJ, Pongor S, Eisenhaber S, et al. How bioinformatics influences health informatics: usage of biomolecular sequences, expression profiles and automated microscopic image analyses for clinical needs and public health. Health Inf Sci Syst 2013;1:2.
  • 10 Fernandez-Moure JS. Lost in translation: The gap in scientific advancements and clinical applications. Front Bioeng Biotechnol 2016 Jun 3;4:43.
  • 11 Stacey D, Légaré F, Lewis K, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2017 Apr 12;4(4):CD001431.
  • 12 Degoulet P, Fieschi M, Goldberg M, Salamon R. François Grémy, a humanist and information sciences pioneer. Yearb Med Inform 2014;9(1):3-5.
  • 13 Benoit S, Mauldin RF. The “anti-vax” movement: a quantitative report on vaccine beliefs and knowledge across social media. BMC Public Health 2021 Nov 17;21(1):2106.
  • 14 Kulikowski CA. Pandemics: Historically Slow “Learning Curve” Leading to Biomedical Informatics and Vaccine Breakthroughs. Yearb Med Inform 2021 Aug;30(1):290-301.
  • 15 Sneed T. Supreme Court ruling on Texas law was the result of decades of pressure from anti-abortion groups to shape the court; CNN December 13, 2021. Available from: https://www.cnn.com/2021/09/04/politics/abortion-legal-strategy-roe-v-wade-texas-abortion-ban/index.html
  • 16 Baker RB, McCullough LB, editors. The Cambridge World History of Medical Ethics. Cambridge University Press; 2008.
  • 17 Beauchamp T, Childress JF. Principles of biomedical ethics. New York, NY: Oxford University Press; 1977.
  • 18 Benjamin M, Curtis J. Ethics in Nursing (3rd Ed.). New York, NY: Oxford University Press; 1992.
  • 19 Murphy EA, Butzow JJ, Suarez-Murias EL. Underpinnings of Medical Ethics. Johns Hopkins University Press; 1997.
  • 20 Husted GL, Husted JH. Ethical decision-making in nursing (2nd Ed). New York, NY: Mosby; 1995.
  • 21 Goodman KW, editor. Ethics, Computing, and Medicine. Cambridge University Press; 1998.
  • 22 Goodman KW, Cushman R, Miller RA. Ethics in biomedical health informatics: Users, standards and outcomes. In Shortliffe EH, Cimino JJ, editors. Biomedical Informatics, 4th Edition. London: Springer; 2014.
  • 23 Phillips W. Ethical controversies about proper health informatics practices. Mo Med 2015 Jan-Feb;112(1):53-7.
  • 24 Curtin L. Ethics in Informatics: The intersection of nursing, ethics and information technology. Nurs Admin Q 2005;29(4):349-52.
  • 25 Weiss SM. A system for model-based computer-aided diagnosis and therapy. Thesis. Rutgers University; 1974.
  • 26 Shortliffe EH. Computer-Based Medical Consultation: MYCIN. New York: Elsevier; 1976.
  • 27 Miller RA, Pople HE, Myers JD. INTERNIST-1, An Experimental Computer-Based Diagnostic Consultant for General Internal Medicine. N Engl J Med 1982;307:478-86.
  • 28 Szolovits P, Pauker SG. Categorical and probabilistic reasoning in medical diagnosis. Artif Intell 1978;11:115-44.
  • 29 Szolovits P, Pauker SG. Computers and Clinical Decision Making: Whether, How, and For Whom? Proc IEEE 1979;67(9):1224-6.
  • 30 Kulikowski CA. Beginnings of Artificial Intelligence in Medicine (AIM): Computational Artifice Assisting Scientific Inquiry and Clinical Art – with Reflections on Present AIM Challenges. Yearb Med Inform 2019;28(01):249-56.
  • 31 Ledley RS, Lusted LB. Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science 1959 Jul 3;130(3366):9-21.
  • 32 Warner HR, Toronto AF, Veasy LG, Stephenson R. A mathematical approach to medical diagnosis. Application to congenital heart disease. JAMA 1961 Jul 22;177:177-83.
  • 33 Jacquez JA. The diagnostic process: the proceedings of a conference sponsored by the Biomedical Data Processing Training Program of the University of Michigan held at the University of Michigan Medical School; 1963.
  • 34 Collen M. Automated multiphasic screening as a diagnostic method for preventive medicine. Methods Inf Med 1965;4:71-4.
  • 35 Gorry GA, Barnett GO. Experience with a model of medical diagnosis. Comput Biomed Res 1968;1(5):490-507.
  • 36 Steinberg CA, Abraha S, Caceres CA. Pattern recognition in the clinical electrocardiogram. IRE Trans Biomed Electron 1962;9(1):23-30.
  • 37 Kulikowski CA. A pattern recognition approach to medical diagnosis. IEEE Trans Syst Science Cybernetics 1970;6(3):173-8.
  • 38 Politakis P, Weiss S. Using Empirical analysis to refine expert system knowledge bases. Artif Intell 1984;22(1):23-48.
  • 39 Weiss SM, Kulikowski CA, Galen RS. Developing microprocessor-based expert models for instrument interpretation. Proc 7th Joint Conf on Artificial Intelligence; 1981. p. 853-5.
  • 40 Aikins JS, Kunz JC, Shortliffe EH, Fallat RJ. PUFF: An expert system for interpretation of pulmonary function data. Com Biomed Res 1983;16:199-208.
  • 41 Klann JG, Szolovits P, Downs S, Schadow G. Decision Support from Local Data: Creating Adaptive Order Menus from Past Clinician Behavior. J Biomed Inform 2014 Apr:48:84-93.
  • 42 Austen-Smith D, Banks JS. Information aggregation, rationality, and the Condorcet jury theorem. The American Political Science Review 1996;90:34-45.
  • 43 Kulikowski CA, Maojo V. COVID-19 pandemic and artificial intelligence: challenges of ethical bias and trustworthy reliable reproducibility? BMJ Health Care Inform 2021:28:e100438.
  • 44 Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health 2021;3(11):e745-e750.
  • 45 Le Thien M-A, Redjdal A, Bouaud J, Serrousi B. Deep Learning, a not so magical problem solver: A Case Study with Predicting the complexity of breast cancer cases. In: Delgado J, Benis A, de Toledo P, Gallos P, Giacomini M, Martinez-Garcia A, et al, editors. Applying FAIR Principles to Accelerate Health Research in Europe in the Post-COVID-19 Era). EFMI and IOS Press; 2021.
  • 46 Pearl J. The limitations of opaque learning machines. In: Brockman J, editors. Possible Minds: 25 Ways of Looking at AI. Penguin Press; 2019.
  • 47 Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. J Am Med Inform Assoc 2017 Mar 1;24(2):423-31.
  • 48 Akbar S, Coiera E, Magrabi F. Safety concerns with consumer-facing mobile health applications and their consequences: a scoping review. J Am Med Inform Assoc Feb 2020;27(2):330”40.
  • 49 Coiera E, Baker M, Margabi F. First compute no harm. The BMJ Opinion July 19, 2017. Available from: https://blogs.bmj.com/bmj/2017/07/19/enrico-coiera-et-al-first-compute-no-harm/
  • 50 Coeira E, Aarts J, Kulikowski CA. The dangerous decade. J Am Med Inform Assoc 2012;19(1):2-5.
  • 51 Sharpe VA, Faden AI. Medical Harm: Historical, Conceptual, and Ethical Dimensions of Iatrogenic Illness. Cambridge University Press; 1998.
  • 52 Sittig DF, Wright A, Coeira E, Magrabi F, Ratwani R, Bates DW, et al. Current challenges in health information technology-related patient safety. Health Informatics J 2020;26(1):181-89.
  • 53 Kim MO, Coeira E, Magrabi F. Problems with health information technology and their effects on care delivery and patient outcomes: a systematic review. J Am Med Inform Assoc 2017;24(2):246-50.
  • 54 Sieck CJ, Sheon A, Ancker JS, Castek J, Callahan B, Siefer A. NPJ Digit Med 2021;4:52.
  • 55 Hippocrates. Of the Epidemics (Adams F, Translation). Available from: http://classics.mit.edu/Hippocrates/epidemics.1.i.html
  • 56 Andersen M, Andersen SL. Machine Ethics. Cambridge, Cambridge University Press; 2011.
  • 57 Stanford Encyclopedia of Philosophy. Ethics of Artificial Intelligence and Robotics. First Published April 30, 2020. Available from: https://plato.stanford.edu/entries/ethics-ai/

Correspondence to:

Casimir A. Kulikowski
Department of Computer Science, Rutgers – The State University of New Jersey
Piscataway, NJ 08855
USA   

Publication History

Article published online:
02 June 2022

© 2022. IMIA and Thieme. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

  • References

  • 1 Lusted LB. Clinical Decision-making. In: de Dombal FT, Grémy F, editors. Decision-Making and Medical Care. Amsterdam: North Holland; 1976. p. 77-98.
  • 2 Grémy F. Ethics of the person and of the community in medical decision making. In: van Bemmel JH, Grémy F, Zvarova J, editors. Medical decision making: diagnostic strategies and expert systems. Amsterdam: North Holland; 1985. p. 12-8.
  • 3 Azzopardi-Muscat N, Sorensen K. Towards an equitable digital public health era: promoting equity through a health literacy perspective. Eur J Public Health 2019;29(Suppl 3):13-7.
  • 4 Lupton D. How do data come to matter? Living and becoming with personal data. Big Data and Society 2018:1-11.
  • 5 WHO guideline: recommendations on digital interventions for health system strengthening. Geneva: World Health Organization; 2019. Licence: CC BY-NC-SA 3.0 IGO.
  • 6 Bernard C. An introduction to the study of experimental medicine. Dover Books (English Translation); 1961.
  • 7 Shaw JA, Donia J. The sociotechnical ethics of digital health: A critique and extension of approaches from bioethics. Front Digit Health 2021 Sep;3:725088.
  • 8 Maojo V, Kulikowski CA. Medical informatics and bioinformatics: integration or evolution through scientific crises? Methods Inf Med 2006;45(5):474-82.
  • 9 Kuznetsov V, Lee HK, Maurer-Stroh S, Molnár MJ, Pongor S, Eisenhaber S, et al. How bioinformatics influences health informatics: usage of biomolecular sequences, expression profiles and automated microscopic image analyses for clinical needs and public health. Health Inf Sci Syst 2013;1:2.
  • 10 Fernandez-Moure JS. Lost in translation: The gap in scientific advancements and clinical applications. Front Bioeng Biotechnol 2016 Jun 3;4:43.
  • 11 Stacey D, Legare F, Lewis K, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2017 Apr 12;4(4):CD001431.
  • 12 Degoulet P, Fieschi M, Goldberg M, Salamon R. François Grémy, a humanist and information sciences pioneer. Yearb Med Inform 2014;9(1):3-5.
  • 13 Benoit S, Mauldin RF. The “anti-vax” movement: a quantitative report on vaccine beliefs and knowledge across social media. BMC Public Health 2021 Nov 17;21(1):2106.
  • 14 Kulikowski CA. Pandemics: Historically Slow “Learning Curve” Leading to Biomedical Informatics and Vaccine Breakthroughs. Yearb Med Inform 2021 Aug;30(1):290-301.
  • 15 Sneed T. Supreme Court ruling on Texas law was the result of decades of pressure from anti-abortion groups to shape the court; CNN December 13, 2021. Available from: https://www.cnn.com/2021/09/04/politics/abortion-legal-strategy-roe-v-wade-texas-abortion-ban/index.html
  • 16 Baker RB, McCullough LB, editors. The Cambridge World History of Medical Ethics. Cambridge University Press; 2008.
  • 17 Beauchamp TL, Childress JF. Principles of biomedical ethics. New York, NY: Oxford University Press; 1979.
  • 18 Benjamin M, Curtis J. Ethics in Nursing (3rd Ed.). New York, NY: Oxford University Press; 1992.
  • 19 Murphy EA, Butzow JJ, Suarez-Murias EL. Underpinnings of Medical Ethics. Johns Hopkins University Press; 1997.
  • 20 Husted GL, Husted JH. Ethical decision-making in nursing (2nd Ed). New York, NY: Mosby; 1995.
  • 21 Goodman KW, editor. Ethics, Computing, and Medicine. Cambridge University Press; 1998.
  • 22 Goodman KW, Cushman R, Miller RA. Ethics in biomedical health informatics: Users, standards and outcomes. In: Shortliffe EH, Cimino JJ, editors. Biomedical Informatics, 4th Edition. London: Springer; 2014.
  • 23 Phillips W. Ethical controversies about proper health informatics practices. Mo Med 2015 Jan-Feb;112(1):53-7.
  • 24 Curtin L. Ethics in Informatics: The intersection of nursing, ethics and information technology. Nurs Adm Q 2005;29(4):349-52.
  • 25 Weiss SM. A system for model-based computer-aided diagnosis and therapy. Thesis. Rutgers University; 1974.
  • 26 Shortliffe EH. Computer-Based Medical Consultation: MYCIN. New York: Elsevier; 1976.
  • 27 Miller RA, Pople HE, Myers JD. INTERNIST-1, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med 1982;307:478-86.
  • 28 Szolovits P, Pauker SG. Categorical and probabilistic reasoning in medical diagnosis. Artif Intell 1978;11:115-44.
  • 29 Szolovits P, Pauker SG. Computers and Clinical Decision Making: Whether, How, and For Whom? Proc IEEE 1979;67(9):1224-6.
  • 30 Kulikowski CA. Beginnings of Artificial Intelligence in Medicine (AIM): Computational Artifice Assisting Scientific Inquiry and Clinical Art – with Reflections on Present AIM Challenges. Yearb Med Inform 2019;28(01):249-56.
  • 31 Ledley RS, Lusted LB. Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science 1959 Jul 3;130(3366):9-21.
  • 32 Warner HR, Toronto AF, Veasy LG, Stephenson R. A mathematical approach to medical diagnosis. Application to congenital heart disease. JAMA 1961 Jul 22;177:177-83.
  • 33 Jacquez JA. The diagnostic process: the proceedings of a conference sponsored by the Biomedical Data Processing Training Program of the University of Michigan held at the University of Michigan Medical School; 1963.
  • 34 Collen M. Automated multiphasic screening as a diagnostic method for preventive medicine. Methods Inf Med 1965;4:71-4.
  • 35 Gorry GA, Barnett GO. Experience with a model of medical diagnosis. Comput Biomed Res 1968;1(5):490-507.
  • 36 Steinberg CA, Abraham S, Caceres CA. Pattern recognition in the clinical electrocardiogram. IRE Trans Biomed Electron 1962;9(1):23-30.
  • 37 Kulikowski CA. A pattern recognition approach to medical diagnosis. IEEE Trans Syst Science Cybernetics 1970;6(3):173-8.
  • 38 Politakis P, Weiss S. Using empirical analysis to refine expert system knowledge bases. Artif Intell 1984;22(1):23-48.
  • 39 Weiss SM, Kulikowski CA, Galen RS. Developing microprocessor-based expert models for instrument interpretation. Proc 7th Int Joint Conf on Artificial Intelligence; 1981. p. 853-5.
  • 40 Aikins JS, Kunz JC, Shortliffe EH, Fallat RJ. PUFF: An expert system for interpretation of pulmonary function data. Comput Biomed Res 1983;16:199-208.
  • 41 Klann JG, Szolovits P, Downs S, Schadow G. Decision Support from Local Data: Creating Adaptive Order Menus from Past Clinician Behavior. J Biomed Inform 2014 Apr;48:84-93.
  • 42 Austen-Smith D, Banks JS. Information aggregation, rationality, and the Condorcet jury theorem. The American Political Science Review 1996;90:34-45.
  • 43 Kulikowski CA, Maojo V. COVID-19 pandemic and artificial intelligence: challenges of ethical bias and trustworthy reliable reproducibility? BMJ Health Care Inform 2021;28:e100438.
  • 44 Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health 2021;3(11):e745-e750.
  • 45 Le Thien M-A, Redjdal A, Bouaud J, Seroussi B. Deep learning, a not so magical problem solver: a case study with predicting the complexity of breast cancer cases. In: Delgado J, Benis A, de Toledo P, Gallos P, Giacomini M, Martinez-Garcia A, et al, editors. Applying FAIR Principles to Accelerate Health Research in Europe in the Post-COVID-19 Era. EFMI and IOS Press; 2021.
  • 46 Pearl J. The limitations of opaque learning machines. In: Brockman J, editor. Possible Minds: 25 Ways of Looking at AI. Penguin Press; 2019.
  • 47 Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. J Am Med Inform Assoc 2017 Mar 1;24(2):423-31.
  • 48 Akbar S, Coiera E, Magrabi F. Safety concerns with consumer-facing mobile health applications and their consequences: a scoping review. J Am Med Inform Assoc 2020 Feb;27(2):330-40.
  • 49 Coiera E, Baker M, Magrabi F. First compute no harm. The BMJ Opinion July 19, 2017. Available from: https://blogs.bmj.com/bmj/2017/07/19/enrico-coiera-et-al-first-compute-no-harm/
  • 50 Coiera E, Aarts J, Kulikowski CA. The dangerous decade. J Am Med Inform Assoc 2012;19(1):2-5.
  • 51 Sharpe VA, Faden AI. Medical Harm: Historical, Conceptual, and Ethical Dimensions of Iatrogenic Illness. Cambridge University Press; 1998.
  • 52 Sittig DF, Wright A, Coiera E, Magrabi F, Ratwani R, Bates DW, et al. Current challenges in health information technology-related patient safety. Health Informatics J 2020;26(1):181-9.
  • 53 Kim MO, Coiera E, Magrabi F. Problems with health information technology and their effects on care delivery and patient outcomes: a systematic review. J Am Med Inform Assoc 2017;24(2):246-50.
  • 54 Sieck CJ, Sheon A, Ancker JS, Castek J, Callahan B, Siefer A. Digital inclusion as a social determinant of health. NPJ Digit Med 2021;4:52.
  • 55 Hippocrates. Of the Epidemics (Adams F, Translation). Available from: http://classics.mit.edu/Hippocrates/epidemics.1.i.html
  • 56 Anderson M, Anderson SL. Machine Ethics. Cambridge: Cambridge University Press; 2011.
  • 57 Stanford Encyclopedia of Philosophy. Ethics of Artificial Intelligence and Robotics. First Published April 30, 2020. Available from: https://plato.stanford.edu/entries/ethics-ai/