Time to get back to our scientific basics

pharmafile | August 23, 2006 | Feature | Research and Development |  biotech, drug safety, industry reputation 

Recently I was wandering through the local market and found myself in front of a stall selling therapeutic magnets. I picked up a leaflet, which recommended magnets for a wide range of conditions: arthritis, bursitis, sciatica, frozen shoulder, migraine, and many more. It declared that magnets improve circulation, illustrating the claim with some obviously fake photographs of erythrocytes.

Now the point of this anecdote is not that this activity is misleading and almost certainly illegal, but that people believe it. I could not resist getting into a conversation with the stallholder, and a customer joined the discussion, proudly displaying the magnet strapped to his wrist, which he was convinced had cured his arthritis.

These two were not in the slightest interested in the many clinical trials that have shown magnets to have no effect on health. They were completely convinced of their value by anecdotal evidence alone, and I couldn't help wondering what they would think about developing drugs on such a basis. Would they be happy to accept, say, candesartan if only 50 people had claimed it controlled their blood pressure, without any supporting data?


Evidence in conflict

When I first joined the pharmaceutical industry in 1974, I quite quickly formed the impression that it was not very good at putting its case to the general public, and particularly at defending itself against criticism. I am not totally convinced that the position has improved much since then. If it had, most people would by now understand what clinical trials are, even if not in detail, and would not waste their money on health fads for which there is no concrete evidence.

But the latter are currently enjoying a resurgence, driven, it seems, by political and royal role models and by television personalities such as the so-called detox experts. While all this is going on, drug companies are forced to spend steeply escalating amounts on R&D, while profitability is severely compromised by price pressure and regulation. Maybe the companies have something to learn about the way they generate and disseminate the evidence they need.

The recent alarming events in a phase I unit at Northwick Park Hospital certainly brought home to me that clinical trials are poorly understood by most lay people, and even by some science journalists. On some television reports, phase I studies were presented as looking solely at safety, when of course, they are also an important stage in defining the pharmacokinetics of the drug. In addition, efficacy can be, to some extent, predicted at this initial stage if we have an appropriate pharmacodynamic model.

One science journalist even stated that women were never studied in phase I, which is quite untrue, as regulators are anxious to obtain early phase data from females.

The sad effect of such misleading and often sensational reporting is to undermine the public's confidence in science, when in fact, the event's extreme rarity should have the opposite effect of strengthening it.

A process to be proud of

The truth is that the industry should be proud of the rigorous and meticulous process that has developed, and is still developing, over many years. I will make no apologies for reminding readers that by the time a drug reaches the market, a very consistent picture has been built up of what it is and what it does.

In my early days, quite often we did not have a very clear idea of a drug's mode of action, and I well remember being involved in the launch of a new antihypertensive, and being given a quite erroneous mechanistic explanation. This did not, of course, mean it didn't work, but it did have an impact on the drug's eventual place in the armamentarium.

Today, with many new drugs, especially biologics, being designed to interact with receptors whose structures we know, we are starting the process from a much stronger position. This knowledge will put our pre-clinical in vitro and in vivo studies into better context. With reproducibility a key test of hypotheses in science, as we progress through the clinical phases, we hope that each succeeding study will support the story building up. But, of course, most of the time this does not happen, which is why only about 5% of drugs going into phase I make it onto the market. This is a high risk business we are in.

However, I am not arguing that the drug development process does not need improvement. Several innovative ideas are being discussed and may be being tried out right now; for example, biomarkers offer the possibility of testing efficacy in phase I, while larger phase II studies could save money by giving us more reliable knowledge earlier on in the process.

Reducing uncertainty

If the general public has little understanding of industry-sponsored clinical trials, it may well know even less about non-commercial research. Yet this is something in which the NHS and the universities, as well as the government directly, invest significantly. The advent of the EU Clinical Trials Directive triggered an enormous protest from academics a couple of years ago, who argued for much less rigorous regulation, but I have never been able to see the logic of an uneven playing field. Science is the process of reducing uncertainty, so are we prepared to accept more uncertainty coming from science carried out without commercial sponsorship?

Nevertheless, there has been an increase in publications describing all sorts of studies that depart from the established randomised controlled trial design. Examples include pragmatic designs, which commonly have very loose selection criteria; registry studies, in which patient data are mined from medical records (prospectively, or more usually retrospectively); and cohort studies.

In hard science, such as physics, it is often easy to design a clean experiment, because the experimenter will be able to control most, if not all, of the factors which can affect the results. But biological material is extremely complex, and we do not know all of these factors – in truth, we know hardly any of them. Thus, we have to use such devices as random selection to try to even out these confounding effects.

That is also why we use carefully defined selection criteria. The proponents of pragmatic designs and their ilk argue that they more closely resemble real life, which may be true, but they require much larger numbers of patients to detect efficacy.

This also highlights an effect I have observed many times over the years. Small studies often look very encouraging, even if they were never designed with the statistical power to detect significant differences. So sponsors scale up the small study, only to be disappointed that their much larger study fails to show anything. This is just what happened with the DEFIANT and DEFIANT II studies with nisoldipine about 10 years ago. It happens between phases as well. A phase II dose-ranging study may look good, only for the larger phase III to miss the target.
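The trap described above is essentially the "winner's curse" of underpowered trials, and a short simulation makes it concrete. The numbers below are illustrative assumptions, not the DEFIANT data: a modest true effect of 0.2 standard deviations, 20 patients per arm in the small study, 400 per arm in the large one.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true benefit, in standard-deviation units
SIMS = 2000         # number of simulated trials of each size

def trial(n_per_arm):
    """Simulate one two-arm trial; return the observed effect and z statistic."""
    placebo = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    active = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n_per_arm)]
    diff = statistics.mean(active) - statistics.mean(placebo)
    se = (2.0 / n_per_arm) ** 0.5   # standard error, with known sd = 1 per arm
    return diff, diff / se

# Small studies: few reach nominal significance, and the ones that do
# necessarily overestimate the effect - they look "very encouraging".
small = [trial(20) for _ in range(SIMS)]
winners = [d for d, z in small if z > 1.96]
small_power = len(winners) / SIMS
winner_effect = statistics.mean(winners)

# Adequately sized studies detect the true, smaller effect reliably.
large = [trial(400) for _ in range(SIMS)]
large_power = sum(1 for _, z in large if z > 1.96) / SIMS

print(f"Small trials (n=20/arm) reaching p<0.05: {small_power:.0%}")
print(f"Mean observed effect among those 'winners': {winner_effect:.2f} "
      f"(true effect is {TRUE_EFFECT})")
print(f"Large trials (n=400/arm) reaching p<0.05: {large_power:.0%}")
```

A sponsor who takes the inflated estimate from an encouraging small study at face value, and sizes the follow-up accordingly, will recruit far too few patients to detect the real effect, which is exactly the phase II to phase III disappointment described above.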

Blurring the boundaries

The idea of pragmatic designs also raises the question of generalisability. Critics rightly argue that a tightly controlled trial may not be much help with clinical decision-making, as the clinician sees a wider range of patients. My view is that we cannot abandon or dilute the rigorous approach because before we try to generalise the results, we have to have them; i.e. we must know what the drug does.

To do this, we must apply Occam's Razor, making as few assumptions as possible, and exclude other factors. After that, we must look at ways to apply the results. This is a message that seems to be poorly understood in some academic circles. Some researchers into alternative therapies, such as acupuncture, are keen to blur the distinction between effects specific to the therapy and non-specific ones.

The latter relate to the context within which the therapy is given, and are commonly interpreted as a placebo effect. Such an approach would never be accepted by drug regulators, and should not be accepted by anyone if we want a level playing field.

Some academics even criticise the use of the randomised controlled trial, claiming that it is not appropriate for testing certain non-drug interventions. They should understand that such a design is not confined to medical research. It is the basis of how we accumulate knowledge about the universe as a whole.

I have drawn a distinction between hard science and biological experiments, but only to illustrate a point. The scientific method is fundamentally the same for both. If science is the reduction of uncertainty, then medical science is just a branch of it where uncertainty is harder to remove.

Unfortunately the lay media and the general public find it very difficult to deal with risk and probability, and prefer absolutes. If probability were generally understood, it would be impossible to run the National Lottery.

Thus, the communication of information about drugs, obtained via clinical trials, is a major challenge. Now that the industry has agreed to publish all hypothesis-testing studies, how are they to be written?

Presumably, they will be intended for general as well as specialist audiences, particularly as web publishing is a preferred medium. The companies should see this effort not simply as an obligation, but as a major opportunity in public education.

At last, we have an audience that is listening. It is possible to explain uncertainty by comparing what we see in clinical trials with chances which people take every day. While this is going on, we will ourselves begin to learn a lot more about communicating these messages to patients, for example on labelling and package inserts.

How safe is safe?

This reminds me of a favourite objection of mine. I really don't like the word "safe" in the context of drugs. Safe, to me, means without danger, and nothing is like that: there is always some risk, which we have to make acceptable.

I remember once lying on an operating table having day surgery under local anaesthetic. A nurse was given the task of chatting to me to keep my mind off what was going on elsewhere. When she learned what I did for a living, she declared that if a single patient ever suffered a serious side-effect, then the drug should be withdrawn from the market.

I replied that if that were done, the practice of medicine would pretty much come to a halt, because it would mean that almost no drugs could be used. If risk is so poorly understood in the paramedical professions, it shows just how much work we still have to do with patients.

This is an initiative that the present government should welcome, committed as it is to getting patients closely involved in their health. But the public sector delivers very mixed messages on health. A case in point is the Prescription Pricing Authority, which recently approved magnets for leg ulcer healing, in the complete absence of any evidence. A colleague of mine requested the justification for this decision from the PPA under the Freedom of Information Act. Was it surprising to learn that they no longer had the papers? They had all gone back to the Department of Health. I could have predicted that they would fall into some black hole between the two.

This kind of thing worries me a great deal. By allowing such decisions, the government is telling patients that rigorous scientific evidence is not important. Again, there is a double standard: a very tough one for our industry, and another based on New Age fads and fashions. Orthodox drug R&D is not perfect, nor in any way infallible, but it does deliver low levels of uncertainty. It seems that some decision-makers prefer high uncertainty or even mystery, and the damage extends well beyond health care, because it undermines respect for evidence and science itself.

Drug companies are not charities, and will always be obliged to deliver returns to their shareholders. But they do at least base their product research and development, and marketing, on real evidence, not guesswork.

The industry is not without mistakes, omissions and abuses, and neither is any other industry. I work with people who are dedicated to generating reliable evidence, and it takes huge effort, sharp minds, and massive resources. These are the messages we should be getting across to patients and government.

 

Les Rose is a freelance writer and clinical science consultant with Pharmavision Consulting. For more information, email: lesrose@ntlworld.com
