Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute. (April 2016)
- Record Type:
- Journal Article
- Title:
- Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute. (April 2016)
- Main Title:
- Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute
- Authors:
- Bamberger, Michael
Tarsilla, Michele
Hesse-Biber, Sharlene
- Abstract:
- Highlights: Many widely-used impact evaluation designs fail to capture important unintended consequences of development programs. Sometimes this is due to "real-world" budget, time, data and political constraints and pressures, and sometimes to limitations in the logic of the program design, planning and evaluation. The logic of most evaluation designs is to determine whether there is credible evidence (statistical, theory-based or narrative) that a program or policy has achieved its intended objectives. Many design logics do not permit the evaluation to assess, or even identify, outcomes or consequences that were not included in the original program design. For example, pre-test post-test comparison group designs only compare changes in one or a few outcome variables for the project and control groups. The authors' extensive experience of working with program evaluators also reveals that many evaluators are well aware of the frequent occurrence and potential seriousness of unintended outcomes but that they are often discouraged by clients from focusing on these. Clients either want evaluators to focus exclusively on the difficult task of assessing program contribution to intended outcomes, or in some cases they do not wish the evaluation to document these kinds of problems. It is argued that the ability of randomized control trials and other kinds of evaluations to detect unintended outcomes can be strengthened through incorporating a mixed-methods framework and by strengthening the theory of change on which many evaluations are based, but which frequently fails to identify unintended outcomes (even though the theory of change logic permits the identification of these consequences). Several case studies illustrate how mixed methods have been able to strengthen the ability of RCTs to identify unintended consequences.
Abstract: Many widely-used impact evaluation designs, including randomized control trials (RCTs) and quasi-experimental designs (QEDs), frequently fail to detect what are often quite serious unintended consequences of development programs. This seems surprising, as experienced planners and evaluators are well aware that unintended consequences frequently occur. Most evaluation designs are intended to determine whether there is credible evidence (statistical, theory-based or narrative) that programs have achieved their intended objectives, and the logic of many evaluation designs, even those considered the most "rigorous," does not permit the identification of outcomes that were not specified in the program design. We take the example of RCTs as they are considered by many to be the most rigorous evaluation designs. We present a number of cases to illustrate how infusing RCTs with a mixed-methods approach (sometimes called an "RCT+" design) can strengthen the credibility of these designs and can also capture important unintended consequences. We provide a Mixed Methods Evaluation Framework that identifies nine ways in which unintended consequences (UCs) can occur, and we apply this framework to two of the case studies.
- Is Part Of:
- Evaluation and program planning. Volume 55(2016:Apr.)
- Journal:
- Evaluation and program planning
- Issue:
- Volume 55(2016:Apr.)
- Issue Display:
- Volume 55 (2016)
- Year:
- 2016
- Volume:
- 55
- Issue Sort Value:
- 2016-0055-0000-0000
- Page Start:
- 155
- Page End:
- 162
- Publication Date:
- 2016-04
- Subjects:
- Mixed-methods -- Unintended consequences -- Evaluation design -- Randomized control trials
Health planning -- Periodicals
Medical care -- Evaluation -- Periodicals
362.1068
- Journal URLs:
- http://www.sciencedirect.com/science/journal/01497189
http://www.elsevier.com/journals
- DOI:
- 10.1016/j.evalprogplan.2016.01.001
- Languages:
- English
- ISSNs:
- 0149-7189
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - 3830.565000
British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 7260.xml