Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment. (October 2017)
- Record Type:
- Journal Article
- Title:
- Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment. (October 2017)
- Main Title:
- Using corpus linguistics to examine the extrapolation inference in the validity argument for a high-stakes speaking assessment
- Authors:
- LaFlair, Geoffrey T.
Staples, Shelley
- Other Names:
- Cushing, Sara T., guest editor.
- Abstract:
- Investigations of the validity of a number of high-stakes language assessments are conducted using an argument-based approach, which requires evidence for inferences that are critical to score interpretation (Chapelle, Enright, & Jamieson, 2008b; Kane, 2013). The current study investigates the extrapolation inference for a high-stakes test of spoken English, the Michigan English Language Assessment Battery (MELAB) speaking task. This inference requires evidence that supports the inferential step from observations of what test takers can do on an assessment to what they can do in the target domain (Chapelle et al., 2008b; Kane, 2013). Typically, the extrapolation inference has been supported by evidence from a criterion measure of language ability. This study proposes an additional empirical method, namely corpus-based register analysis (Biber & Conrad, 2009), which provides a quantitative framework for examining the linguistic relationship between performance assessments and the domains to which their scores are extrapolated. This approach extends Bachman and Palmer's (2010) focus on target language use (TLU) domain analysis in their work on assessment use arguments by providing a quantitative approach to the study of language. We first explain the connections between corpus-based register analysis and TLU analysis. Second, an investigation of the MELAB speaking task compares the language of test-taker responses to the language of academic, professional, and conversational spoken registers, or TLU domains. Additionally, the language features at different performance levels within the MELAB speaking task are investigated to determine the relationship between test takers' scores and their language use in the task. Following previous studies using corpus-based register analysis, we conduct a multi-dimensional (MD) analysis for our investigation. The comparison of the language features from the MELAB with the language of TLU domains revealed that support for the extrapolation inference varies across dimensions of language use. …
- Is Part Of:
- Language testing. Volume 34, Number 4 (2017)
- Journal:
- Language testing
- Issue:
- Volume 34, Number 4 (2017)
- Issue Display:
- Volume 34, Issue 4 (2017)
- Year:
- 2017
- Volume:
- 34
- Issue:
- 4
- Issue Sort Value:
- 2017-0034-0004-0000
- Page Start:
- 451
- Page End:
- 475
- Publication Date:
- 2017-10
- Subjects:
- Corpus linguistics -- domain analysis -- multi-dimensional analysis -- performance assessment -- register analysis -- validity argument
Language and languages -- Ability testing -- Periodicals
Language and languages -- Examinations -- Periodicals
407.6
- Journal URLs:
- http://ltj.sagepub.com
http://www.uk.sagepub.com/home.nav
- DOI:
- 10.1177/0265532217713951
- Languages:
- English
- ISSNs:
- 0265-5322
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 7712.xml