Science rarely produces identical outcomes. Mistaking this for failure turns caution into an excuse for inaction

A new set of studies out this month suggests that as many as half of all results published in reputable journals in the social sciences can't be replicated by independent analysis. This is part of a long-running problem across many research fields – most visibly in the social sciences and psychology, though concerns have also been raised in areas of biomedical research.

The latest work is a seven-year project called Systematizing Confidence in Open Research and Evidence (Score), which has now published three studies looking at 3,900 social science papers. It found that newer papers, and those published in journals requiring extensive sharing of underlying data, were more likely to be reproduced. Separately, medical research faces its own constraints: differing patient caseloads and limited sample sizes mean that, in practice, it can resemble the social sciences more than laboratory physics. Clearly, policymakers should be cautious of any claims that don’t have a wide and robust base of evidence.

Language is key: reproducibility asks whether results can be recreated from the same data and methods; replication tests whether a finding holds for new data in different contexts. Science rarely produces exactly identical outcomes, and figuring out why is part of how knowledge accumulates. But increasingly, politicians have sought to recast this normal scientific uncertainty as evidence of failure. That is why a White House executive order in May 2025 emphasised the "reproducibility crisis" in science – essentially a Trumpian call for doubt and inaction.