Find our publications on the Zurich Open Repository and Archive (ZORA).
CRS develops novel methodology related to reproducibility and replicability, and contributes to improving the quality of scientific investigation through meta-science.
See below for a summary of our research projects.
Power Priors for Replication Studies (July 2022)
A Statistical Framework for Replicability (July 2022)
The replication of non-inferiority and equivalence studies (April 2022)
The sceptical Bayes factor for the assessment of replication success (September 2020)
Discussion on the meeting on ‘Signs and sizes: understanding and replicating statistical findings’ (February 2020)
A New Standard for the Analysis and Design of Replication Studies (December 2019)
Rachel Heyard
This meta-science project aims to improve the process employed at funding agencies to allocate research funding to the most deserving researchers, by making it less biased, more transparent, and more reliable. A first output of the work (produced while R. Heyard was employed at the SNSF) is the Bayesian ranking methodology presented in the paper “Rethinking the Funding Line at the Swiss National Science Foundation: Bayesian Ranking and Lottery”.
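As a rough illustration of the ranking-plus-lottery idea, the following Python sketch draws posterior samples of applicant quality from hypothetical referee grades, funds the applicants who are clearly above the funding line, and fills the remaining positions by lottery among those whose rank is uncertain. The grades, the simple normal model, and the 90%/10% thresholds are illustrative assumptions, not the SNSF methodology described in the paper.

```python
# Minimal sketch of a Bayesian-ranking-plus-lottery scheme, assuming
# each applicant receives several referee grades on a common scale.
# All numbers and thresholds are illustrative, not the SNSF method.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grades (rows: applicants, columns: referees), 1-6 scale.
grades = np.array([
    [5, 6, 5], [5, 5, 4], [4, 5, 5], [4, 4, 4], [3, 4, 3],
])
n_fundable = 2          # positions above the funding line
n_draws = 10_000        # posterior draws

# Simple normal model per applicant: posterior of the mean grade
# under a flat prior; spread guarded for identical grades.
means = grades.mean(axis=1)
ses = grades.std(axis=1, ddof=1) / np.sqrt(grades.shape[1])
ses = np.where(ses == 0, 0.2, ses)

# Draw posterior samples and rank the applicants within each draw.
samples = rng.normal(means, ses, size=(n_draws, len(means)))
ranks = (-samples).argsort(axis=1).argsort(axis=1) + 1  # 1 = best

# Probability that each applicant ranks above the funding line.
p_fundable = (ranks <= n_fundable).mean(axis=0)
clearly_in = np.where(p_fundable >= 0.9)[0]
clearly_out = np.where(p_fundable <= 0.1)[0]
lottery_group = np.setdiff1d(np.arange(len(means)),
                             np.concatenate([clearly_in, clearly_out]))

# Fill the remaining positions by lottery among uncertain applicants.
n_left = n_fundable - len(clearly_in)
winners = rng.choice(lottery_group, size=n_left, replace=False)
print("funded outright:", clearly_in, "| lottery winners:", winners)
```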
Benjamin Ineichen, Eva Furrer, Servan Grüninger, Malcolm MacLeod
The rate of bench-to-bedside translation (e.g. eventual market approval of a therapy first tested in animals) is considered to be very low in animal research (Bespalov et al., 2016). However, although bench-to-bedside translation has been assessed in certain fields of biomedicine, no comprehensive analysis has investigated the extent of successful translation across biomedical subfields, and these rates have not been systematically compared with translation rates in animal-free research fields. We therefore aim to systematically evaluate bench-to-bedside translation in biomedicine based on meta-studies of animal research, and to compare its extent with that in preclinical research fields that do not use animal experiments.
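A back-of-the-envelope sketch of the kind of comparison involved: estimating translation rates in two fields as proportions with confidence intervals, and testing for a difference between them. The counts below are hypothetical placeholders, not project data, and the two-proportion test is just one way such rates could be compared.

```python
# Hypothetical translation rates in two research fields, compared
# via Wilson confidence intervals and a two-sample z-test.
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

# translated therapies / therapies entering preclinical testing
counts = [12, 25]      # field A (animal-based), field B (animal-free)
totals = [240, 210]

for label, k, n in zip(["animal-based", "animal-free"], counts, totals):
    lo, hi = proportion_confint(k, n, method="wilson")
    print(f"{label}: {k/n:.1%} translated (95% CI {lo:.1%} to {hi:.1%})")

# Two-sample z-test for a difference in translation rates.
z, p = proportions_ztest(counts, totals)
print(f"z = {z:.2f}, p = {p:.3f}")
```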
Samuel Pawel, Lucas Kook, Kelly Reeve
Computer simulation experiments help researchers learn how statistical methods work. As with any experiment, the success of this approach hinges on the quality of the experimental design, execution, and reporting. In this project, we investigated how certain questionable research practices (e.g., selective reporting) can affect the results of simulation studies. We show how easy it is to make a method appear superior to competing methods when questionable research practices are employed. We provide suggestions for researchers, peer reviewers, and other academic stakeholders to address these issues, such as pre-registering simulation protocols or sharing code and data.
Paper in the Biometrical Journal
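To illustrate the general mechanism (not the specific examples of the paper), the following Python sketch compares two location estimators across several simulation scenarios and then “reports” only the scenarios favourable to one of them. The estimators, scenarios, and sample sizes are illustrative assumptions.

```python
# Selective reporting in a simulation study: the full results show the
# median winning for heavy tails and the mean for light tails, but
# reporting only the favourable scenarios makes the mean look uniformly
# superior. All settings are illustrative, not those of the paper.
import numpy as np

rng = np.random.default_rng(42)

def mse(estimates):
    return np.mean(estimates**2)  # true location is 0 in every scenario

scenarios = [2, 3, 10, 30]  # degrees of freedom of the t distribution
table = []
for df in scenarios:
    x = rng.standard_t(df, size=(5000, 50))     # 5000 reps, n = 50
    table.append((df, mse(x.mean(axis=1)), mse(np.median(x, axis=1))))

print("df   MSE(mean)  MSE(median)")
for df, m1, m2 in table:
    print(f"{df:<4} {m1:.4f}     {m2:.4f}")

# Questionable practice: show only scenarios where the mean wins.
print("selectively reported:", [row for row in table if row[1] < row[2]])
```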
Simon Schwab, Giuachin Kreiliger, Leonhard Held
Registered study protocol and preprint are available. The project received funding from the STWF (STWF-19-007).
Simon Schwab, Audrey Yeo
This project seeks to assess the use of analysis of covariance (ANCOVA) across scientific fields. The work considers publications from the fields of neuroscience, medicine, and psychology, as well as certain interdisciplinary subjects. The original analyses were reanalysed to determine whether ANCOVA was utilised appropriately. See the OSF repository of the project.
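As an illustration of one standard appropriateness check for ANCOVA, the sketch below fits an ANCOVA with statsmodels and tests the homogeneity-of-regression-slopes assumption via a group-by-covariate interaction. The variable names and simulated data are hypothetical, not taken from the reanalysed publications, and this is only one of several checks such a reanalysis might apply.

```python
# ANCOVA plus a homogeneity-of-slopes check: a significant
# group-by-covariate interaction would flag inappropriate use
# of a plain ANCOVA model. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment"], n // 2),
    "baseline": rng.normal(50, 10, n),
})
df["outcome"] = (0.6 * df["baseline"]
                 + 5 * (df["group"] == "treatment")
                 + rng.normal(0, 5, n))

# Standard ANCOVA: outcome adjusted for the baseline covariate.
ancova = smf.ols("outcome ~ group + baseline", data=df).fit()
print(ancova.params)

# Assumption check: do the regression slopes differ between groups?
slopes = smf.ols("outcome ~ group * baseline", data=df).fit()
print(slopes.pvalues["group[T.treatment]:baseline"])
```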
Eva Furrer, Michael Hediger, Ulrike Held, Klaus Steigmiller
Presentation of the protocol at the Conference of the Austro-Swiss Region of the International Biometric Society in September 2019: A meta-research study on quality and impact of biostatisticians in health research
Registered report in PLOS ONE: Is reporting quality in medical publications associated with biostatisticians as co-authors? A registered report protocol
Results of the registered report in PLOS ONE: The incremental value of the contribution of a biostatistician to the reporting quality in health research—A retrospective, single center, observational cohort study
You can subscribe to our mailing list here.