Evaluating topic model interpretability from a primary care physician perspective. Issue 124 (February 2016)
- Record Type:
- Journal Article
- Title:
- Evaluating topic model interpretability from a primary care physician perspective. Issue 124 (February 2016)
- Main Title:
- Evaluating topic model interpretability from a primary care physician perspective
- Authors:
- Arnold, Corey W.
Oh, Andrea
Chen, Shawn
Speier, William
- Abstract:
- Highlights: A topic model with three different parameter settings is fit to a large collection of clinical reports. The interpretability of discovered topics is evaluated by clinicians and laypersons. Clinicians are significantly more capable of interpreting topics than laypersons. Topics hold potential for applications in automatic summarization.
Abstract: Background and objective: Probabilistic topic models provide an unsupervised method for analyzing unstructured text. These models discover semantically coherent combinations of words (topics) that could be integrated into a clinical automatic summarization system for primary care physicians performing chart review. However, the human interpretability of topics discovered from clinical reports is unknown. Our objective is to assess the coherence of topics and their ability to represent the contents of clinical reports from a primary care physician's point of view. Methods: Three latent Dirichlet allocation models (50 topics, 100 topics, and 150 topics) were fit to a large collection of clinical reports. Topics were manually evaluated by primary care physicians and graduate students. Wilcoxon signed-rank tests for paired samples were used to evaluate differences between topic models, while differences in performance between students and primary care physicians (PCPs) were tested using Mann–Whitney U tests for each of the tasks. Results: While the 150-topic model produced the best log likelihood, participants were most accurate at identifying words that did not belong in topics learned by the 100-topic model, suggesting that 100 topics provides better relative granularity of discovered semantic themes for the data set used in this study. Models were comparable in their ability to represent the contents of documents. Primary care physicians significantly outperformed students in both tasks. Conclusion: This work establishes a baseline of interpretability for topic models trained with clinical reports, and provides insights on the appropriateness of using topic models for informatics applications. Our results indicate that PCPs find discovered topics more coherent and representative of clinical reports relative to students, warranting further research into their use for automatic summarization.
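The methods described in the abstract can be sketched in code. This is a minimal illustrative example, not the authors' actual pipeline: it fits latent Dirichlet allocation models with different topic counts (the study used 50, 100, and 150 on a large clinical corpus; here, small counts on a tiny synthetic corpus so the snippet runs) and applies the same nonparametric tests named in the abstract to hypothetical rater scores.

```python
# Illustrative sketch of the study design described in the abstract.
# Corpus, topic counts, and all scores below are invented stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.stats import wilcoxon, mannwhitneyu

# Tiny synthetic corpus; the paper used a large collection of clinical reports.
docs = [
    "patient reports chest pain and shortness of breath",
    "follow up for diabetes mellitus and blood glucose control",
    "chest pain resolved after medication adjustment",
    "glucose levels stable continue current insulin regimen",
]
X = CountVectorizer().fit_transform(docs)

# One LDA model per topic-count setting (the paper compared 50, 100, 150).
models = {
    k: LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    for k in (2, 3)
}

# Wilcoxon signed-rank test: paired interpretability scores from the same
# raters on two different models (hypothetical 1-5 ratings).
scores_model_a = [4, 5, 3, 4, 5]
scores_model_b = [3, 4, 2, 3, 4]
stat, p_paired = wilcoxon(scores_model_a, scores_model_b)

# Mann-Whitney U test: independent groups, physicians vs. students,
# on a single task (hypothetical scores).
pcp_scores = [5, 4, 5, 4]
student_scores = [3, 3, 4, 2]
u_stat, p_groups = mannwhitneyu(pcp_scores, student_scores)

print(f"paired test p={p_paired:.4f}, group test p={p_groups:.4f}")
```

The split mirrors the study design: the paired test compares topic models rated by the same participants, while the unpaired test compares the two participant populations.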
- Is Part Of:
- Computer methods and programs in biomedicine. Issue 124 (2016)
- Journal:
- Computer methods and programs in biomedicine
- Issue:
- Issue 124 (2016)
- Issue Display:
- Volume 124, Issue 124 (2016)
- Year:
- 2016
- Volume:
- 124
- Issue:
- 124
- Issue Sort Value:
- 2016-0124-0124-0000
- Page Start:
- 67
- Page End:
- 75
- Publication Date:
- 2016-02
- Subjects:
- Topic modeling -- Primary care -- Clinical reports
Medicine -- Computer programs -- Periodicals
Biology -- Computer programs -- Periodicals
Computers -- Periodicals
Medicine -- Periodicals
Médecine -- Logiciels -- Périodiques
Biologie -- Logiciels -- Périodiques
Biology -- Computer programs
Medicine -- Computer programs
Periodicals
Electronic journals
610.28
- Journal URLs:
- http://www.sciencedirect.com/science/journal/01692607
http://www.elsevier.com/journals
- DOI:
- 10.1016/j.cmpb.2015.10.014
- Languages:
- English
- ISSNs:
- 0169-2607
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - 3394.095000
British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 2432.xml