Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability. Issue 3 (September 2020)
- Record Type:
- Journal Article
- Title:
- Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability. Issue 3 (September 2020)
- Main Title:
- Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability
- Authors:
- Ebersole, Charles R.
Mathur, Maya B.
Baranski, Erica
Bart-Plange, Diane-Jo
Buttrick, Nicholas R.
Chartier, Christopher R.
Corker, Katherine S.
Corley, Martin
Hartshorne, Joshua K.
IJzerman, Hans
Lazarević, Ljiljana B.
Rabagliati, Hugh
Ropovik, Ivan
Aczel, Balazs
Aeschbach, Lena F.
Andrighetto, Luca
Arnal, Jack D.
Arrow, Holly
Babincak, Peter
Bakos, Bence E.
Baník, Gabriel
Baskin, Ernest
Belopavlović, Radomir
Bernstein, Michael H.
Białek, Michał
Bloxsom, Nicholas G.
Bodroža, Bojana
Bonfiglio, Diane B. V.
Boucher, Leanne
Brühlmann, Florian
Brumbaugh, Claudia C.
Casini, Erica
Chen, Yiling
Chiorri, Carlo
Chopik, William J.
Christ, Oliver
Ciunci, Antonia M.
Claypool, Heather M.
Coary, Sean
Čolić, Marija V.
Collins, W. Matthew
Curran, Paul G.
Day, Chris R.
Dering, Benjamin
Dreber, Anna
Edlund, John E.
Falcão, Filipe
Fedor, Anna
Feinberg, Lily
Ferguson, Ian R.
Ford, Máire
Frank, Michael C.
Fryberger, Emily
Garinther, Alexander
Gawryluk, Katarzyna
Ashbaugh, Kayla
Giacomantonio, Mauro
Giessner, Steffen R.
Grahe, Jon E.
Guadagno, Rosanna E.
Hałasa, Ewa
Hancock, Peter J. B.
Hilliard, Rias A.
Hüffmeier, Joachim
Hughes, Sean
Idzikowska, Katarzyna
Inzlicht, Michael
Jern, Alan
Jiménez-Leal, William
Johannesson, Magnus
Joy-Gaba, Jennifer A.
Kauff, Mathias
Kellier, Danielle J.
Kessinger, Grecia
Kidwell, Mallory C.
Kimbrough, Amanda M.
King, Josiah P. J.
Kolb, Vanessa S.
Kołodziej, Sabina
Kovacs, Marton
Krasuska, Karolina
Kraus, Sue
Krueger, Lacy E.
Kuchno, Katarzyna
Lage, Caio Ambrosio
Langford, Eleanor V.
Levitan, Carmel A.
de Lima, Tiago Jessé Souza
Lin, Hause
Lins, Samuel
Loy, Jia E.
Manfredi, Dylan
Markiewicz, Łukasz
Menon, Madhavi
Mercier, Brett
Metzger, Mitchell
Meyet, Venus
Millen, Ailsa E.
Miller, Jeremy K.
Montealegre, Andres
Moore, Don A.
Muda, Rafał
Nave, Gideon
Nichols, Austin Lee
Novak, Sarah A.
Nunnally, Christian
Orlić, Ana
Palinkas, Anna
Panno, Angelo
Parks, Kimberly P.
Pedović, Ivana
Pękala, Emilian
Penner, Matthew R.
Pessers, Sebastiaan
Petrović, Boban
Pfeiffer, Thomas
Pieńkosz, Damian
Preti, Emanuele
Purić, Danka
Ramos, Tiago
Ravid, Jonathan
Razza, Timothy S.
Rentzsch, Katrin
Richetin, Juliette
Rife, Sean C.
Rosa, Anna Dalla
Rudy, Kaylis Hase
Salamon, Janos
Saunders, Blair
Sawicki, Przemysław
Schmidt, Kathleen
Schuepfer, Kurt
Schultze, Thomas
Schulz-Hardt, Stefan
Schütz, Astrid
Shabazian, Ani N.
Shubella, Rachel L.
Siegel, Adam
Silva, Rúben
Sioma, Barbara
Skorb, Lauren
de Souza, Luana Elayne Cunha
Steegen, Sara
Stein, L. A. R.
Sternglanz, R. Weylin
Stojilović, Darko
Storage, Daniel
Sullivan, Gavin Brent
Szaszi, Barnabas
Szecsi, Peter
Szöke, Orsolya
Szuts, Attila
Thomae, Manuela
Tidwell, Natasha D.
Tocco, Carly
Torka, Ann-Kathrin
Tuerlinckx, Francis
Vanpaemel, Wolf
Vaughn, Leigh Ann
Vianello, Michelangelo
Viganola, Domenico
Vlachou, Maria
Walker, Ryan J.
Weissgerber, Sophia C.
Wichman, Aaron L.
Wiggins, Bradford J.
Wolf, Daniel
Wood, Michael J.
Zealley, David
Žeželj, Iris
Zrubka, Mark
Nosek, Brian A.
- Abstract:
- Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37). Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
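The abstract's headline shrinkage claim can be illustrated from the quoted summary statistics. Below is a minimal Python sketch using only the median r values reported above; note that the paper's 78% figure is an average of per-study reductions, so this median-based number comes out slightly different (~81%).

```python
# Illustrative sketch using the median effect sizes quoted in the abstract;
# these are summary values, not the per-study data from the paper.
original_r = 0.37    # median r of the 10 original studies
cumulative_r = 0.07  # median r from cumulative evidence across replications

# Proportional shrinkage of the replication effect relative to the originals.
reduction = 1 - cumulative_r / original_r
print(f"Median-based effect-size reduction: {reduction:.0%}")  # ~81%
```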
- Is Part Of:
- Advances in methods and practices in psychological science. Volume 3, Issue 3 (2020)
- Journal:
- Advances in methods and practices in psychological science
- Issue:
- Volume 3, Issue 3 (2020)
- Issue Display:
- Volume 3, Issue 3 (2020)
- Year:
- 2020
- Volume:
- 3
- Issue:
- 3
- Issue Sort Value:
- 2020-0003-0003-0000
- Page Start:
- 309
- Page End:
- 331
- Publication Date:
- 2020-09
- Subjects:
- replication -- reproducibility -- metascience -- peer review -- Registered Reports -- open data -- preregistered
Psychology -- Periodicals
Psychology -- Research -- Periodicals
150
- Journal URLs:
- http://journals.sagepub.com/loi/ampa
http://www.sagepublications.com/
- DOI:
- 10.1177/2515245920958687
- Languages:
- English
- ISSNs:
- 2515-2459
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 14327.xml