Comparison of the ManyBabies 1 results to meta-analytic data
Meta-analyses are often considered the most reliable source of evidence when deciding whether a phenomenon is real and how large the effect is. However, large-scale collaborations such as ManyBabies often yield different results from published meta-analyses. To better understand how these two approaches to collecting and analyzing large datasets relate (or fail to), we are updating the meta-analysis on infant-directed speech preference and subjecting it to a joint analysis with the ManyBabies 1 data.
- Data: MetaLab
We encourage everyone who is interested in the project to contribute and/or contact the project leads by e-mailing: Christina.Bergmann [at] mpi.nl, fusaroli [at] cas.au.dk
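For readers unfamiliar with how per-lab (or per-study) effect sizes are pooled in this kind of analysis, here is a minimal sketch of DerSimonian-Laird random-effects pooling, the standard approach used in MetaLab-style meta-analyses. The effect sizes and variances below are made up for illustration; this is not the project's actual analysis code.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                           # inverse-variance (fixed-effect) weights
    theta_fixed = np.sum(w * effects) / np.sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = np.sum(w * (effects - theta_fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = 1.0 / (variances + tau2)             # random-effects weights
    theta = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return theta, se, tau2

# Illustrative (made-up) per-lab effect sizes and sampling variances
est, se, tau2 = random_effects_meta([0.40, 0.25, 0.60, 0.10],
                                    [0.04, 0.05, 0.03, 0.06])
```

A joint analysis would then compare such a pooled meta-analytic estimate against the estimate from the ManyBabies 1 data, and ask whether moderators explain any discrepancy.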
Analysis of supplemental demographic variables
The ManyBabies 1 project provides a unique opportunity not only to take stock of the field and discover how our methods and approaches differ, but also to begin to understand the factors that make these effects so difficult to measure. In this ongoing exploratory project, we plan to analyze additional variables collected alongside the main MB1 project, covering a wide range of 'lab factors' that researchers believe may affect either whether a baby fusses out of a study (e.g., the research assistant having a beard) or whether they truly attend to the stimuli (and thus produce the expected effect in the study).
- Materials, Protocols, and Documentation: OSF.
We encourage everyone who is interested in the project to contribute and/or contact the project leads by e-mailing: mekline [at] mit.edu
Please note that access to infants or an infant lab is not a prerequisite.
Kline, M. (2018, June 8). The effect of ‘Lab Factors’ on fussout rates/latencies and infant-level and laboratory-level effect sizes.
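To make the 'lab factors' idea concrete, a natural first pass is a logistic regression predicting whether an infant fussed out from candidate lab-level predictors. The sketch below uses simulated data and a hypothetical predictor (the beard example from the description above); variable names and effect sizes are illustrative only, not the project's actual model.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression; returns [intercept, slopes...]."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))     # predicted fuss-out probability
        beta += lr * Xb.T @ (y - p) / len(y)     # ascend the mean log-likelihood
    return beta

rng = np.random.default_rng(1)
n = 500
beard = rng.integers(0, 2, n).astype(float)           # hypothetical factor: RA has a beard
y = rng.binomial(1, 0.2 + 0.2 * beard).astype(float)  # simulated fuss-out outcome
beta = fit_logistic(beard.reshape(-1, 1), y)
# beta[1] is the log-odds change in fussing out associated with the factor
```

A fuller analysis would of course use mixed-effects models with lab as a grouping factor, since infants are nested within labs.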