Methods Inf Med 2001; 40(05): 380-385
DOI: 10.1055/s-0038-1634196
Original Article
Schattauer GmbH

Acceptance of Rules Generated by Machine Learning among Medical Experts

M. J. Pazzani 1, S. Mani 1, W. R. Shankle 2

1 Department of Information and Computer Science, University of California, Irvine, USA
2 Department of Neurology, University of California, Irvine, USA

Publication History

Publication Date:
08 February 2018 (online)

Summary

Objectives: The aim was to evaluate the potential of monotonicity constraints to bias machine learning systems toward learning rules that are both accurate and meaningful.

Methods: Two data sets, taken from problems as diverse as screening for dementia and assessing the risk of mental retardation, were collected, and a rule learning system was run on each, both with and without monotonicity constraints. The resulting rules were shown to experts, who were asked how willing they would be to use such rules in practice. The accuracy of the rules was also evaluated.
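
The abstract does not spell out how the constraints are enforced inside the rule learner, so the following is only a minimal sketch of one plausible mechanism: candidate rule conditions whose direction contradicts declared domain knowledge (for example, that suspicion of dementia should only rise as a Mini-Mental State score falls) are filtered out before the rule search ever scores them. The attribute names, directions, and thresholds below are hypothetical illustrations, not the system described in the paper.

```python
# Hypothetical sketch: filtering candidate rule conditions with
# monotonicity constraints during rule induction. The rule
# representation and the constraint table are illustrative assumptions.

# A condition tests one attribute against a threshold in a direction,
# e.g. ("mmse", "<=", 23) means "MMSE score of 23 or below".
Condition = tuple  # (attribute, operator, threshold)

# Declared domain knowledge for the positive class ("impaired"):
# lower MMSE scores and higher age may only increase suspicion.
MONOTONE_DIRECTION = {
    "mmse": "decreasing",   # risk rises as the score falls
    "age": "increasing",    # risk rises as age rises
}

def respects_constraints(condition: Condition) -> bool:
    """Return True if a candidate condition for the positive class
    agrees with the declared monotone direction of its attribute."""
    attribute, operator, _ = condition
    direction = MONOTONE_DIRECTION.get(attribute)
    if direction is None:
        return True                      # unconstrained attribute
    if direction == "increasing":
        return operator in (">=", ">")   # only "high value" tests allowed
    return operator in ("<=", "<")       # only "low value" tests allowed

def filter_candidates(candidates):
    """Drop candidate conditions that violate a monotonicity constraint,
    so the search never considers counter-intuitive rules."""
    return [c for c in candidates if respects_constraints(c)]

if __name__ == "__main__":
    candidates = [
        ("mmse", "<=", 23),   # kept: a low score raises suspicion
        ("mmse", ">=", 28),   # dropped: would imply a high score raises risk
        ("age", ">=", 75),    # kept
    ]
    print(filter_candidates(candidates))
```

Filtering candidates in this way restricts the hypothesis space to rules consistent with prior medical knowledge, which is one way a learner could trade a small amount of flexibility for rules that experts find easier to accept.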

Results: Rules learned with monotonicity constraints were at least as accurate as rules learned without such constraints. Experts were, on average, more willing to use the rules learned with the monotonicity constraints.

Conclusions: The analysis of medical databases has the potential to improve patient outcomes and/or lower the cost of health care delivery. Various techniques from statistics, pattern recognition, machine learning, and neural networks have been proposed to “mine” these data by uncovering patterns that may be used to guide decision making. This study suggests that cognitive factors make learned models coherent and, therefore, credible to experts. One factor that influences the acceptance of learned models is consistency with existing medical knowledge.

 