Choosing what to replicate - Efficient study selection in the modern age.
I recently had the opportunity to present some of my recent work on replication value for the Rotterdam RIOTS club. Because COVID-19 is still a concern, the talk was held online, and thanks to the amazing organizing skills of the local RIOTS committee, the talk and question round were recorded and put up on YouTube. All views expressed in this talk are mine alone. The writing and discussion process for the related paper was very much ongoing at the time I presented my talk, and I do not want to suggest that my collaborators would agree with all elements of the talk.

The talk contains a few errors/blunders:

(1) At several points I talk in terms of "my model", "my assumptions", etc. This should in no way be taken to mean that the work I am presenting is mine alone. This work has been a team effort all the way, and much of the theoretical development was either developed by, or inspired by, the work of several collaborators and colleagues.

(2) Around 50:20 I say that "uncertainty could still be low", but I obviously mean "uncertainty could still be HIGH".

(3) A correction to my answer to the question "what if you do a replication study that increases uncertainty?": the model assumes that all we want is to be right about the claim (if the claim is true we want to believe it is true; if it is false we want to believe it is false). Suppose that a claim is really false, but there is an original study that indicates the claim is true. Then, even if we conduct a replication that fails to replicate the original study, we have still reduced our uncertainty about the truth of the claim: we are closer to believing that the claim is false (which it is) after the replication. However, the current definition of uncertainty in the model does not elegantly incorporate the problem of false/misplaced certainty.
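The scenario in correction (3) can be made concrete with a toy Bayesian update. This is only a sketch with made-up numbers and likelihoods, not the model from the talk or paper: it shows how a failed replication can move belief closer to the truth (the claim is actually false) even while an entropy-style measure of uncertainty goes up, which is exactly the awkwardness of misplaced certainty mentioned above.

```python
# Toy illustration (hypothetical numbers, not the paper's model):
# a failed replication moves belief toward the truth, yet binary
# entropy can increase, because the prior was falsely confident.
from math import log2

def entropy(p):
    """Binary entropy (bits) of believing the claim is true with prob p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def update(prior_true, p_data_if_true, p_data_if_false):
    """Posterior P(claim true | data) via Bayes' rule."""
    num = p_data_if_true * prior_true
    return num / (num + p_data_if_false * (1 - prior_true))

# After a positive original study we believe P(true) = 0.7,
# even though the claim is actually false (misplaced certainty).
prior = 0.7

# A replication fails; assume failure is more likely if the claim
# is false (0.8) than if it is true (0.3).
posterior = update(prior, p_data_if_true=0.3, p_data_if_false=0.8)

print(f"P(true) after failed replication: {posterior:.2f}")        # moves toward 0 (the truth)
print(f"entropy before: {entropy(prior):.2f} bits")
print(f"entropy after:  {entropy(posterior):.2f} bits")            # yet entropy increases
```

With these numbers the posterior drops from 0.70 to about 0.47, so we are closer to (correctly) believing the claim is false, but the entropy rises because 0.47 is nearer to maximal ignorance than the falsely confident 0.70 was. That divergence between "distance from the truth" and "entropy" is the problem the uncertainty definition does not elegantly capture.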