A Collective Approach to Improving Scientific Rigor
Featured in: Foundations of Rigorous Neuroscience Research
How can we improve the quality of our research? We are all interested in discovering new insights — whatever our discipline or specific focus. We also know that a single finding is never definitive, and there are many factors that can unintentionally lead us to results that are not robust — the emphasis on discovery, the pressure to publish and win grants, and our essential enthusiasm for our work.
As a psychologist, I am interested in the mechanisms that can unconsciously lead us astray. I think there are two broad mechanisms:
- Cognitive biases. We are all human. However well trained we are, we can be led astray by these. Our tendency to see patterns in noisy data and find what we’re looking for (also known as confirmation bias) means that we can easily over-interpret our results. How can we protect ourselves against these unconscious biases?
- Incentive structures. All of us respond to rewards, and (unconsciously) our behavior will be shaped by these. We are rewarded – in terms of our careers – with publications, particularly in certain journals, and grants. But is what is good for us as scientists and our careers good for science?
If we are not careful, these two mechanisms can conspire to undermine the quality of the research that we produce.
Enthusiasm to make discoveries, the unconscious tendency to over-interpret our data to confirm what we believe to be true, the incentive to publish our work in prestigious journals (which are more likely to be interested in novel discoveries) — all of this can lead us astray. We need to actively work to protect ourselves against the subtle impact of these different pressures, so that we achieve what truly motivates us — advancing knowledge.
How can we target these two mechanisms?
The challenge is that the current research ecosystem — which has evolved organically over decades, if not centuries — is complex and interconnected. Researchers themselves comprise only one part; funders, publishers, institutions, learned societies and more all play a role. Action by any single actor in that system will have a modest impact at best. We need a coordinated approach if we are going to achieve real change.
This was the motivation behind the establishment of the UK Reproducibility Network – an academic collaboration that brings together grassroots communities of researchers ("local networks"), institutions, and organizations (funders, publishers, etc.). The structure supports collaboration and coordination – both within and between those three different groups. Recently, similar Reproducibility Networks have emerged in other countries.
Many of these collaborations are fostering genuinely innovative approaches to reforming scientific practice. For example, we are supporting a partnership between Cancer Research UK – a major biomedical funder – and a range of journals published by Springer-Nature, PLOS and Wiley, to incentivize uptake of the Registered Reports publishing format and streamline the process from funding to publication.
But what can individual researchers do?
I often hear from researchers, particularly early career researchers, that they want to ensure their work is robust. That’s why they enter science, after all. But they feel unsure how to make changes themselves, either in terms of their own research practice or to the wider research ecosystem. They are also often concerned that they may render themselves uncompetitive in terms of grants or the job market if they do.
I think those concerns, while understandable, are misplaced.
One way in which researchers can make changes themselves is by adopting open research practices. This can take many forms — pre-registering a study protocol, sharing data and code, posting a preprint — and is almost entirely within the control of the researcher. Many platforms, such as the Open Science Framework, exist that support open research practices.
Why should we encourage uptake of open research practices?
There are several reasons, but in terms of promoting robustness, I have argued that transparency can serve as a quality control mechanism. Research objects (e.g., data and code) that are made public can be checked by others, and errors detected. Perhaps more importantly, it creates an incentive to ensure those objects have been thoroughly checked before being deposited.
Adopting open research practices may feel intimidating, which is why the UK Reproducibility Network has produced a series of short primers to introduce a range of topics. They don't all have to be adopted at once either – researchers can simply pick one that feels closest to their existing practice. Over time, other practices can be adopted, and gradually these will become a routine part of how people work.
But what else can we do?
Effective social change is often driven by collective action – communities of like-minded individuals working together. Several initiatives now exist that allow researchers to form communities, physical or virtual, focused on research rigor. The ReproducibiliTea journal club and RIOT Science Club seminar series formats are examples of these – both created by early career researchers and easily implemented locally.
We can therefore act both individually and collectively. Open research practices allow us to recognize and demonstrate granular contributions to the research process — data, code, protocols, and so on — all of which can be listed on our CVs. Collective activities like ReproducibiliTea and RIOT Science Club can help us form local communities focused on change. In combination, this can lead to real change.