0315 Inter- And Intra-expert Variability In Sleep Scoring: Comparison Between Visual And Automatic Analysis. (27th April 2018)
- Record Type:
- Journal Article
- Title:
- 0315 Inter- And Intra-expert Variability In Sleep Scoring: Comparison Between Visual And Automatic Analysis. (27th April 2018)
- Main Title:
- 0315 Inter- And Intra-expert Variability In Sleep Scoring: Comparison Between Visual And Automatic Analysis
- Authors:
- Muto, V
Berthomier, C
Schmidt, C
Vandewalle, G
Jaspar, M
Devillers, J
Chellappa, S
Meyer, C
Phillips, C
Berthomier, P
Prado, J
Benoit, O
Brandewinder, M
Mattout, J
Maquet, P - Abstract:
- Introduction: Visual sleep scoring (VS) is affected by inter-expert variability (differences in scoring between several scorers working on the same recording) and intra-expert variability (changes over time in how a given expert scores, compared with a reference). Our aim was to quantify inter- and intra-expert sleep scoring variability in a group of 6 experts (working at the same sleep center and trained to homogenize their sleep scoring) by using the validated automatic scoring (AS) algorithm ASEEGA, which is fully reproducible by design, as a reference. Methods: Data were collected in 24 healthy young male participants (mean age 21.6 ± 2.5 years). 4 recordings (data set 1, DS1) were scored by the 6 experts (24 visual scorings) according to the AASM criteria, and by AS, which is based on the analysis of the single EEG channel Cz-Pz. Another 88 recordings (DS2) were scored a few weeks later by the same experts (88 visual scorings) and by AS. Epoch-by-epoch agreements (concordance and Cohen's kappa coefficient) were computed between all VS, and between VS and AS. Results: Inter-expert agreement on DS1 decreased as the number of experts increased, from 86% for mean pairwise agreement down to 69% for all 6 experts. Adding AS to the pool of experts barely changed the kappa value, from 0.81 to 0.79. A systematic decrease in agreement was observed between AS and each single expert between DS1 and DS2 (-3.7% on average). Conclusion: Inter-expert differences are not restricted to a small proportion of specific epochs that are difficult to score, even when the expert team is very homogeneous. Intra-expert variability is highlighted by the systematic agreement decrease across datasets, and can be interpreted as a scoring drift over time. Even if autoscoring neither provides ground truth nor can improve inter-scorer agreement, it can efficiently cope with intra-scorer variability, since the AS used is perfectly reproducible and largely insensitive to experimental conditions. These properties are mandatory when dealing with large datasets, making autoscoring methods a sensible option. Support (If Any): None.
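The abstract's epoch-by-epoch agreement metrics (percent concordance and Cohen's kappa) can be illustrated with a minimal sketch. This is not the authors' code; the toy hypnograms and function names below are illustrative assumptions, with one AASM stage label per 30-second epoch:

```python
# Minimal sketch (hypothetical data, not the study's code): epoch-by-epoch
# agreement between two scorings of the same recording.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of epochs on which two scorings assign the same stage."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohen_kappa(a, b):
    """Cohen's kappa: agreement corrected for chance, given each
    scorer's marginal stage frequencies."""
    n = len(a)
    po = percent_agreement(a, b)                   # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[s] * cb[s] for s in ca) / (n * n)  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy hypnograms over 10 epochs (stages W, N1, N2, N3, R), purely illustrative:
scorer1 = ["W", "N1", "N2", "N2", "N3", "N3", "R", "R", "N2", "W"]
scorer2 = ["W", "N2", "N2", "N2", "N3", "N2", "R", "R", "N2", "W"]

print(percent_agreement(scorer1, scorer2))  # 0.8 (8 of 10 epochs agree)
print(round(cohen_kappa(scorer1, scorer2), 3))
```

In the study, such pairwise values were computed between all visual scorings and between each visual scoring and AS; mean pairwise agreement (86% on DS1) is simply the average of `percent_agreement` over all scorer pairs.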
- Is Part Of:
- Sleep. Volume 41(2018)Supplement 1
- Journal:
- Sleep
- Issue:
- Volume 41(2018)Supplement 1
- Issue Display:
- Volume 41, Issue 1 (2018)
- Year:
- 2018
- Volume:
- 41
- Issue:
- 1
- Issue Sort Value:
- 2018-0041-0001-0000
- Page Start:
- A121
- Page End:
- A121
- Publication Date:
- 2018-04-27
- Subjects:
- Sleep -- Physiological aspects -- Periodicals
Sleep disorders -- Periodicals
Sommeil -- Aspect physiologique -- Périodiques
Sommeil, Troubles du -- Périodiques
Sleep disorders
Sleep -- Physiological aspects
Sleep -- physiological aspects
Sleep Wake Disorders
Psychophysiology
Electronic journals
Periodicals
616.8498 - Journal URLs:
- http://bibpurl.oclc.org/web/21399
http://www.journalsleep.org/
https://academic.oup.com/sleep
http://www.oxfordjournals.org/
http://www.pubmedcentral.nih.gov/tocrender.fcgi?journal=369&action=archive - DOI:
- 10.1093/sleep/zsy061.314
- Languages:
- English
- ISSNs:
- 0161-8105
- Deposit Type:
- Legaldeposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store - Ingest File:
- 12265.xml