DOI: 10.1055/s-0038-1634424
Evaluation of a Method that Supports Pathology Report Coding
Publication History
Received: 12 July 2000
Accepted: 22 March 2001
Publication Date: 08 February 2018 (online)
Summary
Objectives: The paper focuses on the problem of adequately coding pathology reports using SNOMED. Both the agreement among pathologists in coding and the quality of a system that supports pathologists in coding pathology reports were evaluated.
Methods: Six groups of three pathologists each received a different set of 40 pathology reports. Five different SNOMED code lines accompanied each pathology report, and the three pathologists evaluated the correctness of each of these code lines. Kappa values and reliability coefficients were determined to gain insight into the variance observed when coding pathology reports. The system under evaluation compares a newly entered report, represented as a multi-dimensional word vector, with reports in a library, represented in the same way. The reports in the library have already been coded. The system presents to the pathologist the code lines belonging to the five library reports most similar to the newly entered report, thereby supporting the pathologist in determining the correct codes. A high similarity between two reports is indicated by a large value of the inner product of the vector of the newly entered report and the vector of a library report.
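The summary outlines only the retrieval step of the support system. The sketch below is a minimal illustration of that kind of vector-space comparison, assuming a plain term-frequency representation over a fixed vocabulary; the function names, the data layout, and the absence of any term weighting are assumptions made for illustration, not the authors' implementation.

from collections import Counter

def word_vector(report_text, vocabulary):
    # Represent a report as a term-frequency vector over a fixed vocabulary.
    counts = Counter(report_text.lower().split())
    return [counts[term] for term in vocabulary]

def inner_product(vec_a, vec_b):
    # A large inner product indicates that two reports use similar wording.
    return sum(a * b for a, b in zip(vec_a, vec_b))

def suggest_code_lines(new_report, library, vocabulary, k=5):
    # library: list of (report_text, code_lines) pairs that are already coded.
    # Returns the code lines of the k library reports most similar to new_report.
    new_vec = word_vector(new_report, vocabulary)
    ranked = sorted(
        library,
        key=lambda entry: inner_product(new_vec, word_vector(entry[0], vocabulary)),
        reverse=True,
    )
    return [code_lines for _, code_lines in ranked[:k]]

In practice a weighting scheme such as tf-idf would typically be applied to the term counts before taking the inner product, so that frequent but uninformative words do not dominate the similarity score; whether and how the evaluated system weights terms is not stated in this summary.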
Results: Agreement among pathologists in coding was fair (average kappa of 0.44). The reliability coefficient varied from 0.81 to 0.89 for the six sets of pathology reports. The system gave correct suggestions for 50% of the reports; for another 30% of the reports its suggestions were helpful to the pathologists.
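For readers less familiar with the agreement statistic, the sketch below illustrates the two-rater (Cohen) form of kappa, which corrects observed agreement for the agreement expected by chance. The correct/incorrect judgments are invented for illustration and are not the study data, and how kappa was averaged across the three pathologists is not specified in this summary.

def cohens_kappa(labels_a, labels_b):
    # kappa = (p_observed - p_chance) / (1 - p_chance)
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_chance = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Invented example: two pathologists judging ten code lines as correct (1) or incorrect (0).
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 0, 0, 1, 1, 1, 1, 0, 0, 1]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.35: modest agreement despite 70% raw agreement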
Conclusions: On the basis of the reliability coefficients it could be concluded that three pathologists are indeed sufficient for obtaining a gold standard against which to evaluate the system. The method used for comparing reports is not strong enough to allow fully automatic coding. The system was shown to induce more uniform coding by pathologists. An evaluation of the system's incorrect suggestions indicates that its performance can still be improved.