Normalizing acronyms and abbreviations to aid patient understanding of clinical texts: ShARe/CLEF eHealth Challenge 2013, Task 2. Issue 1 (December 2016)
- Record Type:
- Journal Article
- Title:
- Normalizing acronyms and abbreviations to aid patient understanding of clinical texts: ShARe/CLEF eHealth Challenge 2013, Task 2. Issue 1 (December 2016)
- Main Title:
- Normalizing acronyms and abbreviations to aid patient understanding of clinical texts: ShARe/CLEF eHealth Challenge 2013, Task 2
- Authors:
- Mowery, Danielle
South, Brett
Christensen, Lee
Leng, Jianwei
Peltonen, Laura-Maria
Salanterä, Sanna
Suominen, Hanna
Martinez, David
Velupillai, Sumithra
Elhadad, Noémie
Savova, Guergana
Pradhan, Sameer
Chapman, Wendy
- Abstract:
- Background: The ShARe/CLEF eHealth challenge lab aims to stimulate development of natural language processing and information retrieval technologies to aid patients in understanding their clinical reports. In clinical text, acronyms and abbreviations, also referred to as short forms, can be difficult for patients to understand. For one of three shared tasks in 2013 (Task 2), we generated a reference standard of clinical short forms normalized to the Unified Medical Language System. This reference standard can be used to improve patient understanding by linking to web sources with lay descriptions of annotated short forms or by substituting short forms with a more simplified, lay term.
Methods: In this study, we 1) evaluate the accuracy of participating systems in normalizing short forms compared to a majority sense baseline approach, 2) evaluate the performance of participants' systems for short forms with variable majority sense distributions, and 3) report the accuracy of participating systems in normalizing concepts shared between the test set and the Consumer Health Vocabulary, a vocabulary of lay medical terms.
Results: The best systems submitted by the five participating teams performed with accuracies ranging from 43 to 72 %. A majority sense baseline approach achieved the second best performance. The performance of participating systems for normalizing short forms with two or more senses ranged from 52 to 78 % accuracy for low ambiguity (majority sense greater than 80 %), from 23 to 57 % accuracy for moderate ambiguity (majority sense between 50 and 80 %), and from 2 to 45 % accuracy for high ambiguity (majority sense less than 50 %). With respect to the ShARe test set, 69 % of short form annotations contained concept unique identifiers in common with the Consumer Health Vocabulary. For these 2594 possible annotations, the performance of participating systems ranged from 50 to 75 % accuracy.
Conclusion: Short form normalization continues to be a challenging problem. Short form normalization systems perform with moderate to reasonable accuracies. The Consumer Health Vocabulary could enrich its knowledge base with missed concept unique identifiers from the ShARe test set to further support patient understanding of unfamiliar medical terms.
- Is Part Of:
- Journal of biomedical semantics. Volume 7:Issue 1(2016)
- Journal:
- Journal of biomedical semantics
- Issue:
- Volume 7:Issue 1(2016)
- Issue Display:
- Volume 7, Issue 1 (2016)
- Year:
- 2016
- Volume:
- 7
- Issue:
- 1
- Issue Sort Value:
- 2016-0007-0001-0000
- Page Start:
- 1
- Page End:
- 13
- Publication Date:
- 2016-12
- Subjects:
- Natural language processing -- Acronyms -- Abbreviations -- Consumer health information -- Unified Medical Language System
Semantics -- Periodicals
Medicine -- Research -- Periodicals
Biology -- Research -- Periodicals
Computer systems -- Periodicals
Bioinformatics -- Periodicals
570.285
- Journal URLs:
- http://www.jbiomedsem.com/
http://link.springer.com/
- DOI:
- 10.1186/s13326-016-0084-y
- Languages:
- English
- ISSNs:
- 2041-1480
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 10192.xml