Publish and be damned?

pharmafile | February 10, 2009 | Feature | Research and Development | clinical trials, publication

Science publishing is very big business, as Robert Maxwell discovered at the end of World War II. My impression is that, no matter how poor or bizarre your piece of research, you will find a journal somewhere in the world that will publish it. Throw commercial sponsorship into the pot, and it can get very messy indeed. Publication bias attracted more attention in 2008, but it has long been recognised as a major problem. It's not simply a matter of selection in favour of interesting and positive results. Several studies have identified a clear association between commercial sponsorship of a clinical trial and the significance of the results.

The power of sponsorship was brought home to me by Richard Smith, erstwhile editor of the British Medical Journal, who explained at a meeting of the charity HealthWatch, two years ago, that a journal can make a million pounds from selling reprints. Indeed, this is where the profit lies. A nice positive paper published in a major journal is just the thing for the sales reps to have in their bags, and a global company could well order half a million reprints at £2 a throw. What journal is going to reject a paper that has that sort of inducement attached?

Ah, you will say, but that's the purpose of peer review. Now, supportive as I am of that (and it is one of the bedrocks of science), it has its limitations. Reviewers have no way of judging the quality of the raw data, or of verifying that the study really was conducted as rigorously as the authors claim. There is a famous example of this from the world of alternative medicine.

In 1988 the journal Nature reluctantly published a paper by Jacques Benveniste and colleagues that apparently showed an effect of ultra-dilute solutions of anti-IgE antibody on basophil degranulation. For the uninitiated, 'ultra-dilute' means so dilute that not a single molecule of the original substance remained in the solution. John Maddox, the editor of Nature, had to publish the paper as it passed peer review but, because the underlying 'science' was so implausible, insisted on replication of the experiment. It was duly repeated, this time under very strictly controlled and observed conditions, and no effect was seen (it turned out that in the original experiments the observer had not been blinded to the test samples). Thus one of the foundations of homeopathy was washed away by the spurious memory of water. We have to ask ourselves how many papers with plausible hypotheses were in reality compromised by poor study conduct.
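
To put a number on 'ultra-dilute', a back-of-the-envelope calculation suffices. The sketch below assumes a hypothetical 1 M stock and a hypothetical dilution of one part in 10^30; these are illustrative figures, not numbers taken from the Nature paper.

```python
# Illustrative arithmetic only: why an 'ultra-dilute' solution contains
# essentially no solute. The stock concentration and dilution factor are
# assumptions for the sake of the example.

AVOGADRO = 6.022e23      # molecules per mole

molar_start = 1.0        # hypothetical 1 M stock solution
dilution_factor = 1e-30  # hypothetical serial dilution of 1 part in 10^30

molecules_per_litre = molar_start * dilution_factor * AVOGADRO
print(f"Expected molecules per litre: {molecules_per_litre:.1e}")
# Prints 6.0e-07: on average, less than one molecule per million litres.
```

Once the dilution passes roughly one part in 10^24, the expected molecule count per litre drops below one, which is the whole point: any observed biological effect would have to come from something other than the solute.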

But let me return for a moment to the question of raw data. It is widely recognised that meta-analyses and systematic reviews sit at the pinnacle of the hierarchy of evidence in clinical science. The former require the pooling of data, and I am surprised at how often meta-analysts who request raw data from study authors are disappointed. Sometimes authors simply don't respond to requests for data, which I regard as a violation of the principles of science. Indeed, the World Wide Web was invented precisely so that scientists could exchange information, so nobody can claim that the means to share data are lacking. To be fair, I think the main offenders are not pharmaceutical companies but various non-commercial sponsors such as academic institutions. Since UK legislation on good clinical practice came into force, the playing field should have become a lot more level, but I am not sure that it has. Anyway, the question here is this: if a meta-analysis lacks some of the extant and relevant data, can it be valid?
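
To see why that question bites, consider fixed-effect, inverse-variance pooling, the standard workhorse of meta-analysis. The sketch below is purely illustrative: the effect sizes and standard errors are invented, and pooled_effect is my own helper, not a function from any particular library.

```python
# A minimal sketch of fixed-effect, inverse-variance meta-analysis, showing
# how a withheld negative trial shifts the pooled estimate. All numbers
# below are invented for illustration.

def pooled_effect(studies):
    """Inverse-variance weighted mean of (effect, standard_error) pairs."""
    weights = [1 / se**2 for _, se in studies]
    weighted_sum = sum(w * eff for w, (eff, _) in zip(weights, studies))
    return weighted_sum / sum(weights)

published = [(0.40, 0.15), (0.35, 0.20), (0.50, 0.25)]  # positive trials
unpublished = [(-0.05, 0.18)]                           # a trial left in the drawer

print(f"Published trials only: {pooled_effect(published):.2f}")               # ~0.40
print(f"All trials:            {pooled_effect(published + unpublished):.2f}") # ~0.28
```

With these invented numbers, adding the single withheld trial pulls the pooled effect from about 0.40 down to about 0.28. A meta-analysis that is missing data is not merely less precise; it is biased, and usually in the optimistic direction.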

So we should not fool ourselves that everything that is published can be believed, which takes us back to publication bias. The observation that commercially sponsored trials have more positive results raises at least two possibilities. One is that the actual results might be in some way false because of commercial pressure. This would be a very serious allegation, and in all my years in this field I have seen very little evidence of it. What I do see is over-optimistic interpretation of less than impressive data; sometimes the conclusions seem to bear little relation to the results. The other, rather clearer, possibility is that some trials that didn't show what the sponsor wanted simply don't get published. This isn't totally the fault of sponsors, as journals are always looking for newsworthy papers and a negative study is generally not considered interesting. Or I should correct that – it would get a lot of attention if it overturned a widely accepted medical practice, which does happen but not often. A good example is tonsillectomy, which is now known to be largely ineffective against recurrent sore throats.

Hence the new standard that all 'hypothesis testing' trials should be published. This excludes, for the most part, phase I trials that are investigating pharmacokinetics, metabolism and other factors not directly related to efficacy or safety. This initiative is linked to compulsory trial registration, in that a trial can't start without being registered on an official database (e.g. EudraCT in the EU), and for every registered trial there must eventually be a publication. Various bodies, including the FDA and WHO, have issued specifications for what should be published, and space precludes an explanation of these here. But there are many unanswered questions. For example, under the US legislation the sponsor has the option of publishing on www.clinicaltrials.gov or in a conventional journal, and it isn't clear whether the former carries the same authority, or even how such a publication should be cited. For a detailed discussion of this, I can recommend Liz Wager's expert review of the FDA legislation (KeywordPharma publications).

It's a pity that, to a large extent, the pharmaceutical industry has been embarrassed into publishing all clinical trials of efficacy and safety, rather than being sufficiently proud of its standards to be open all along. However, there are problems with rushing into print. Companies are beholden to their shareholders, who have to be kept happy, so any bit of good news is gold dust – almost literally for those supported by venture capital. Such companies commonly issue news releases the moment they have some positive results on a development project, and worry little about peer review. It is possible for problems with the data to be identified after going public with the results, something that I have seen happen and which demonstrates yet again the potential conflict between science and commerce.

So, I think that in general there is a significant risk of being damned by publication, but we will certainly be damned if we don't. The industry has much to do to rebuild confidence in its claims.
