Is poor reporting of animal research hindering scientific progress?

The NC3Rs champions the importance of adequate and accurate reporting of studies involving animal models. It is only with sufficient reporting that scientists can fully understand the procedures performed in an experiment and assess whether its findings are reliable. Emily Sena, NC3Rs grant holder and member of the UK CAMARADES centre at Edinburgh University, discusses the consequences of poor reporting for the assessment of the science conducted and the translation of animal data.

Drug development usually involves testing the efficacy of the drug in an animal model of the target disease. The theory goes that if the treatment improves outcome in these animals, then the next step is to test it in humans in a clinical trial, with the expectation that important improvements in outcome will again be seen. The neurosciences, and stroke research in particular, have become the poster child for our inability to replicate findings from animal studies in human clinical trials. The dogma is that everything works in animals but nothing works in humans.

If this ‘ideal’ research model is going to bear fruit, we need to be sure that (i) the animal model replicates the human disease sufficiently well for effects to be relevant to human disease; and (ii) the effects observed are due to the therapy being tested, rather than to bias in the conduct or reporting of those animal experiments.

In the CAMARADES group, we have done quite a lot of work exploring the risk of bias in, and the rigour of, experiments reported in the scientific literature. There’s now reasonably convincing empirical evidence which shows that, in the past at least, experiments may not have been performed with sufficient rigour. I’m keen that this knowledge is not used as a stick with which to beat scientists, but rather as a driver for improvements in research practice that will increase our chances of developing effective treatments for human diseases.

Our methodological approach consists of two elements. In systematic review, we search for all the existing literature on a given topic; in meta-analysis, we use statistical techniques to pool the identified data into an overall average estimate of treatment effect. A particularly useful feature of meta-analysis is that it also allows us to estimate the influence of experimental design features on the treatment effects we observe.
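
To make the pooling step concrete, here is a minimal sketch of fixed-effect, inverse-variance meta-analysis in Python. The effect sizes and standard errors are invented for illustration and are not drawn from any CAMARADES review; real reviews typically use random-effects models and dedicated software.

```python
# Minimal sketch of fixed-effect, inverse-variance meta-analysis.
# All numbers are invented for illustration only.
import math

# One (effect_size, standard_error) pair per study, e.g. a standardised
# mean difference in outcome between treated and control animals.
studies = [(0.62, 0.21), (0.48, 0.15), (0.90, 0.30), (0.35, 0.18)]

weights = [1 / se**2 for _, se in studies]  # more precise studies weigh more
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Comparing such pooled estimates across subsets of studies (for example, those that do and do not report randomisation) is how meta-analysis can quantify the influence of design features on the effects we observe.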

Many of you will be aware of the placebo effect: the knowledge of one’s treatment produces a mutually reinforcing effect between patient and clinician that results in improvement, or sometimes even cure, independent of whether the treatment actually works. In clinical trials, measures to reduce such bias – double-blinding and randomisation – are routine. Whilst a treated animal is clearly unlikely to contribute to a mutually reinforcing effect, the researcher performing an experiment can introduce enough bias to substantially inflate treatment effects, either through the effect of their behaviour on the animal’s performance or through the effect of their expectations on their measurement of outcome.

For example, in one of our reviews we looked at studies administering the drug NXY-059, also known as Cerovive, in animal models of ischaemic stroke. This drug had previously been tested in a large clinical trial of over 3,000 stroke patients but did not improve their outcome. Overall, NXY-059 appeared to improve outcome in the animals. However, closer inspection showed that only a few of these studies reported blinding and randomisation. In these higher-quality studies, NXY-059 was reported to be 30% less effective than in studies that were not randomised or blinded. Further, no study reported both randomisation and blinding. Individuals who suffer a stroke are typically older and commonly hypertensive; in this review, all the animals were young and the majority had normal blood pressure. The few studies using hypertensive animals again showed treatment effects around 30% smaller than those using young, healthy animals. In retrospect it seems self-evident, to even the least sober-minded of us, that a drug that appears 30% more effective in young, healthy animals in experiments at risk of bias is substantially less likely to work in older, hypertensive patients in a randomised, double-blind controlled clinical trial.
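
In meta-analytic terms, this kind of comparison is a stratified (subgroup) analysis. Below is a minimal, hypothetical sketch of the idea: pool studies that report bias-reducing measures separately from those that do not, then test the difference between the two pooled estimates. The numbers are invented and do not come from the NXY-059 review.

```python
# Hypothetical stratified (subgroup) comparison: pool studies that report
# bias-reducing measures separately from those that do not, then compare.
# All effect sizes and standard errors are invented for illustration.
import math

def pool(studies):
    """Fixed-effect, inverse-variance pooling of (effect, se) pairs."""
    weights = [1 / se**2 for _, se in studies]
    effect = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    return effect, math.sqrt(1 / sum(weights))

rigorous = [(0.40, 0.15), (0.45, 0.20)]               # report blinding/randomisation
at_risk = [(0.65, 0.12), (0.70, 0.18), (0.60, 0.25)]  # report neither

e_rig, se_rig = pool(rigorous)
e_risk, se_risk = pool(at_risk)

# Approximate z-test for a difference between the two pooled estimates.
z = (e_risk - e_rig) / math.sqrt(se_rig**2 + se_risk**2)
print(f"Rigorous: {e_rig:.2f}; at risk of bias: {e_risk:.2f}; z = {z:.2f}")
```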

In this unwieldy world of in vivo research, meta-analysis can provide empirical evidence of the importance of rigorous experimental design. And scientists like empirical evidence. Sobriety and hindsight are not yet quite enough.

Going back to ideals, from a methodological point of view, if all research articles adhered to the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, the process of systematic review and meta-analysis would be more efficient and, frankly, much quicker. If titles were an “accurate and concise description of the content of the article”, screening articles during systematic review would not require us to read as many abstracts or full texts as we are currently forced to do. Setting aside making my own life easier, improving reporting so that we can understand the procedures performed in an experiment has clear benefit.

In many of our reviews we are currently troubled by having to code many variables as ‘unknown’, and no inferences can be made from such data. In essence, a meta-analysis is only as good as the data that go into it. The function of a research article, as I understand it, is to disseminate research findings to further the progress of science. A clear understanding of why and how experimental procedures were executed is vital to this process.

A common critique of our methodology is that authors may have taken measures to reduce bias but simply did not report them. Of course this is possible, and perhaps some other factor causes the fundamental differences we consistently observe between high- and low-quality studies. But improvements in reporting would put these issues beyond reasonable doubt.

I suspect better reporting will lead to better science. In my view, fewer therapies would appear to work in animals if the experiments were performed properly. The gambler in me would back the hunch that taking fewer, better-investigated drugs to clinical trial will increase our rate of developing effective therapies.

The NC3Rs website has a number of pages and resources on experimental design and statistical analysis, and our ARRIVE guidelines are now available in a handy, pocket-sized format. To request copies, please fill out the online request form.

 


