Inter-rater reliability of the QuIS as an assessment of the quality of staff-inpatient interactions. Issue 1 (December 2016)
- Record Type:
- Journal Article
- Title:
- Inter-rater reliability of the QuIS as an assessment of the quality of staff-inpatient interactions. Issue 1 (December 2016)
- Main Title:
- Inter-rater reliability of the QuIS as an assessment of the quality of staff-inpatient interactions
- Authors:
- Mesa-Eguiagaray, Ines
Böhning, Dankmar
McLean, Chris
Griffiths, Peter
Bridges, Jackie
- Pickering, Ruth
- Abstract:
- Background: Recent studies of the quality of in-hospital care have used the Quality of Interaction Schedule (QuIS) to rate interactions observed between staff and inpatients in a variety of ward conditions. The QuIS was developed and evaluated in nursing and residential care. We set out to develop methodology for summarising information from inter-rater reliability studies of the QuIS in the acute hospital setting.
Methods: Staff-inpatient interactions were rated by trained staff observing care delivered during two-hour observation periods. Anticipating the possibility of the quality of care varying depending on ward conditions, we selected wards and times of day to reflect the variety of daytime care delivered to patients. We estimated inter-rater reliability using weighted kappa, κw, combined over observation periods to produce an overall summary estimate, κ̂w. Weighting schemes putting different emphasis on the severity of misclassification between QuIS categories were compared, as were different methods of combining observation-period-specific estimates.
Results: The estimate κ̂w did not vary greatly depending on the weighting scheme employed, but we found simple averaging of estimates across observation periods to produce a higher value of inter-rater reliability due to over-weighting observation periods with fewest interactions.
Conclusions: We recommend that researchers evaluating the inter-rater reliability of the QuIS, by observing staff-inpatient interactions during observation periods representing the variety of ward conditions in which care takes place, should summarise inter-rater reliability by κw weighted according to our scheme A4. Observation-period-specific estimates should be combined into an overall, single summary statistic, κ̂w random, using a random effects approach, with κ̂w random to be interpreted as the mean of the distribution of κw across the variety of ward conditions. We draw attention to issues in the analysis and interpretation of inter-rater reliability studies incorporating distinct phases of data collection that may generalise more widely.
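The weighted kappa summarised in the abstract can be sketched in a few lines. The sketch below uses linear disagreement weights and invented two-rater counts purely for illustration; it does not reproduce the paper's scheme A4, its data, or the random effects pooling step.

```python
# Illustrative weighted-kappa computation in pure Python.
# The 5 ordered categories, the linear weights, and the counts
# are assumptions for demonstration only.

def weighted_kappa(confusion, weights):
    """Weighted kappa from a square confusion matrix of rater counts.

    kappa_w = 1 - (sum w_ij * o_ij) / (sum w_ij * e_ij), where o is the
    observed cell proportion and e the chance-expected proportion.
    """
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(confusion[i]) for i in range(k)]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    obs = sum(weights[i][j] * confusion[i][j]
              for i in range(k) for j in range(k)) / n
    exp = sum(weights[i][j] * row_tot[i] * col_tot[j]
              for i in range(k) for j in range(k)) / n ** 2
    return 1 - obs / exp

# Linear disagreement weights for 5 ordered categories (an assumption).
K = 5
w = [[abs(i - j) / (K - 1) for j in range(K)] for i in range(K)]

# Hypothetical counts for two raters over one observation period.
conf = [
    [10, 2, 0, 0, 0],
    [ 1, 8, 2, 0, 0],
    [ 0, 1, 6, 1, 0],
    [ 0, 0, 1, 4, 1],
    [ 0, 0, 0, 1, 3],
]

print(round(weighted_kappa(conf, w), 3))
```

Period-specific estimates like this would then be pooled across observation periods; the paper argues for a random effects combination rather than simple averaging, which over-weights periods with few interactions.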
- Is Part Of:
- BMC medical research methodology. Volume 16, Issue 1 (2016)
- Journal:
- BMC medical research methodology
- Issue:
- Volume 16, Issue 1 (2016)
- Issue Display:
- Volume 16, Issue 1 (2016)
- Year:
- 2016
- Volume:
- 16
- Issue:
- 1
- Issue Sort Value:
- 2016-0016-0001-0000
- Page Start:
- 1
- Page End:
- 12
- Publication Date:
- 2016-12
- Subjects:
- Weighted kappa -- Random effects meta-analysis -- QuIS -- Collapsing -- Averaging
Medicine -- Research -- Methodology -- Periodicals
610.72
- Journal URLs:
- http://www.biomedcentral.com/bmcmedresmethodol/
http://www.pubmedcentral.nih.gov/tocrender.fcgi?journal=43
http://link.springer.com/
- DOI:
- 10.1186/s12874-016-0266-4
- Languages:
- English
- ISSNs:
- 1471-2288
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library STI - ELD Digital store
- Ingest File:
- 10045.xml