Survival of the fittest

pharmafile | August 1, 2005 | Feature | Sales and Marketing

The title for this piece was inspired by a sort of scientific hobby of mine, which revolves around critical thinking and the need for evidence to back up decisions. I won't define the terms just yet, because I want to consider first what is happening in clinical research and development, and because you might like to think about which of the two scenarios in the title fits more closely.

I will make no apology for going over some of the key features of the drug development arena today:

  • Clinical development is no faster than it was 10 years ago.
  • Far fewer new drugs are being brought to market than five years ago.
  • Drugs continue to fail in phase III, when most of the cost has been incurred, or worse, after launch when safety issues emerge.
  • Drug development costs, of which clinical trials form the greatest proportion, continue to rise faster than sales.
  • About 70% of new drugs launched fail to recoup their R&D costs.

Yes, many authors continue to make this point, but do the decision-makers actually change their behaviour as a result? Despite some celebrated success stories, the very reliable data collected and reported by CMR International (www.cmr.org) show that the overall trend is not encouraging. So let's remind ourselves of what the pharma industry needs to achieve in order to slow or reverse this trend:

1) The number of drug candidates coming out of discovery needs to increase with an emphasis on unmet medical need and truly innovative compounds.

2) Lead candidates going into clinical trials need a much better chance of coming to market. At present no more than 20% of drugs in phase I do so.

3) Compounds which are going to fail need to be identified earlier.

4) Clinical development needs to be considerably faster, while containing costs.

All these factors are interdependent – achieving only one will have limited impact. For example, if the number of new compounds going into clinical trials does increase, could the existing machinery cope? It seems unlikely, as it is unable to reduce development times even with a depressed input of compounds. Yet a big increase in candidates is just what we have been told to expect.

Flood of new compounds

Pharmacogenomics is tipped to provide not quite a flood, but certainly a lot more interesting compounds over the coming years. Indeed, some companies are making strenuous efforts to increase their compound libraries. For example, Pfizer is reported to be investing over $500 million over six years to expand its internal library to three million screenable compounds. This is expected to give the company higher-quality leads, better candidates and consequently lower development attrition rates. The big question is how they will sift out the winners from such a huge number of runners.

Moving out of the comfort zone

I won't attempt to cover the whole field of clinical trial technologies, but will address an interesting underlying question – how people select and use these technologies. In other words, do they use technology to change the way they do things, or do they want to stay within their comfort zone?

Pre-clinical warnings

But let's take a step back to the warnings we are supposed to get from pre-clinical development. It is not my primary area, but I have had recent experience of being rather misled by standard pre-clinical safety tests.

The two most frequent reasons for post-marketing drug withdrawal are liver toxicity and QTc prolongation. I will take the latter as one example. For those unfamiliar with the field, it is the lengthening of the cardiac action potential, as measured by the QT interval on the ECG. This can predispose to potentially life-threatening cardiac arrhythmias. The little 'c' means that the measurement has been corrected for heart rate.

Well, it has not actually been corrected at all, because the 'correction factor' is only an approximation. In pre-clinical studies, a standard approach is to measure the effect of the drug on cardiac conduction in vitro, commonly using Purkinje fibres from a sheep heart. Although a useful guide, this is not definitive, so ECG recordings are obligatory in phase I. Many drugs can cause QTc prolongation, so why do they get all the way through development?

The answer lies in the process of collecting and analysing ECG data. This is close to my heart(!), because I used to operate an ECG reporting service several years ago. I have more recently learned that there is not much value in doing this unless attention is paid to the source data, the recordings themselves. Because every patient has an individual set of cardiac receptors (genetically determined), they will have their own relationship between heart rate and QT interval, and thus will need their own correction factor. This can now be calculated using specialist software, as long as there are enough recordings of sufficient quality.
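To illustrate what such specialist software is doing (a minimal sketch with made-up numbers, not any particular commercial package), the individual correction exponent can be fitted from a subject's own drug-free QT/RR pairs and then used in place of the fixed Bazett or Fridericia exponent:

```python
import numpy as np

# Drug-free baseline recordings for one subject: QT interval (s) and the
# preceding RR interval (s).  These values are illustrative only.
qt = np.array([0.36, 0.38, 0.40, 0.42, 0.44, 0.35, 0.39])
rr = np.array([0.70, 0.80, 0.90, 1.00, 1.10, 0.65, 0.85])

# Population formulae assume a fixed exponent: Bazett uses QT/RR^0.5,
# Fridericia QT/RR^(1/3).  An individual correction instead fits the
# exponent from that subject's own data using the log-linear model
#   log(QT) = log(b) + a * log(RR)
a, log_b = np.polyfit(np.log(rr), np.log(qt), 1)

# Corrected QT for each beat: divide out the fitted heart-rate dependence.
qtc_individual = qt / rr ** a
qtc_bazett = qt / rr ** 0.5

print(f"fitted exponent a = {a:.2f}")
print("individual QTc (s):", np.round(qtc_individual, 3))
print("Bazett QTc (s):    ", np.round(qtc_bazett, 3))
```

With a well-fitted individual exponent, the corrected QT should no longer vary with heart rate, so a genuine drug effect is easier to separate from normal rate-related changes.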

Changing the clinical development process

From this, it can be seen how the clinical development process would need to change. With such a high risk of failure, we would need the earliest possible warning that a QTc signal might exist, suggesting that an intensive ECG study should be carried out in humans at the first opportunity.

The need for multiple baseline ECGs can easily be met by using ambulatory recordings, a technology which has been in use for decades. On top of that, the FDA now requires digitised ECG traces to form part of the product licence submission, something which three years ago only a few ECG reporting companies could provide. This requirement underlines just how critical a factor ECG data quality has become.

If pulling a drug from the market is a disaster, pulling it from late development is nearly as bad. Indeed it might be considered worse, as the drug has had no chance of making any sales. For this reason, there has been increased interest in identifying biomarkers, which could be used in earlier studies to collect efficacy data. This is not really brand-new technology: if you go to www.biomarkers.org, you will see a large number of biomarkers for a wide range of diseases, many of them used in conjunction with imaging technologies such as PET and SPECT.

Using biomarkers

A good example is in degenerative neurological conditions, such as Parkinson's and Alzheimer's diseases. In the former, five biomarkers are listed, of which two, dopamine transporter and dopaminergic neurotransmitter activity, are among the most widely used. But these technologies remain subject to heated debate in the literature.

Essentially, they are indirect measures of what we really want to look at, in this case the loss of dopaminergic neurones. They do not always show consistent results between studies, and this raises a regulatory spectre. Any biomarker will have to satisfy the regulators that it is a valid measure, which requires a substantial investment. On top of that, if it is an invasive procedure, it will need its own full set of safety data as well. Thus there is considerable attraction to the use of non-invasive biomarkers.

For instance, breath analysis can be used to assess the extent of airway inflammation in COPD and asthma. Among the various exhaled compounds of interest, nitric oxide has been investigated the most – this is increased in both conditions. Also, breath temperature can be used to distinguish between the two – it is elevated in asthma and depressed in COPD. Thus, even for one therapeutic area several non-invasive biomarkers are available.

Efficacy clues using theranostics

But these biomarkers are mostly confined to patients with the diseases under study, and so cannot normally be used earlier than phase II. What about efficacy clues in phase I? Companies are now using biomarkers in phase I and IIa, and are using the technology to combine diagnostics with therapeutics – so-called 'theranostics' (www.axisshield.com/pdfs/Theranostics_flyer.pdf).

They are finding it easier to identify the right patients for clinical trials, and to predict and monitor response. There is the prospect of running smaller and more targeted clinical trials, thus saving money. But these are very early days for these technologies. Few companies have the core capabilities to develop them into validated markers acceptable to regulatory authorities, so the obvious advantages have not yet fed into later pivotal studies.

Focusing on degenerative diseases

In the light of the demographic changes with which we are familiar, there must be good commercial sense in a focus on degenerative diseases.

Another prospect is that of disease progression modelling, essentially a mathematical exercise. The problem with many clinical trials is that it is difficult to distinguish between symptomatic and protective effects. Taking the example of Parkinson's disease, we may be able to show an improvement in motor performance, but is this a short-term filling of the dopamine gap, or a reduction in the loss of dopaminergic neurones?

The two possible effects of the drug, symptomatic and protective, can be represented by graphs of disease status. A symptomatic effect will cause the whole line to be offset upwards or downwards, whereas a protective effect will change the slope of the line.
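Written out as the simplest possible linear model (an illustration of the idea only, with invented parameters rather than any validated Parkinson's model), the two effects look like this:

```python
import numpy as np

# Illustrative linear disease-progression model: status worsens by r units
# per month from a baseline S0.  All numbers are made up for illustration.
S0, r = 20.0, 0.5          # baseline score, natural progression rate (units/month)
t = np.arange(0, 25, 6)    # assessment times in months

natural     = S0 + r * t               # untreated course
symptomatic = (S0 - 4.0) + r * t       # offset effect: line shifts, slope unchanged
protective  = S0 + (r - 0.2) * t       # protective effect: the slope itself is reduced

for label, curve in [("natural", natural),
                     ("symptomatic", symptomatic),
                     ("protective", protective)]:
    print(f"{label:12s}", np.round(curve, 1))

# After drug washout a purely symptomatic benefit disappears (the offset
# returns to zero), whereas a reduced slope leaves a lasting difference
# between treated and untreated groups.
```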

Without going into more detail than I have space for here, these effects can be used to design studies which can detect a protective effect with a shorter duration of exposure. The implications for drug development are obvious, in terms of time and cost, but again there is the regulatory dimension. Not long ago I was involved in setting up a project to develop a disease progression model, and a key stage was a meeting with the regulatory authority to explain the approach. In addition, such methods are technically demanding and require staff to be well trained, which opens up the whole issue of implementation.

Implementing new technologies

Implementing new technologies involves business change, and this needs a solid business case. This requires a cost-benefit analysis, but both components of this can be difficult to quantify.

In the example of the disease progression model set-up, my most difficult task was defining the business case. The company commissioning the work had not really thought about what it would get out of it in commercial terms, nor had it considered the cost of implementation. Certainly, most technologies will have clear direct costs – it's not too difficult to estimate the cost of each PET scan using dopamine transporter imaging, based on the amortised equipment cost, materials, and staff time (a rough sketch of that calculation follows the list below). But is that the end of it? Here are a few examples of other costs:

  • Blinded observer reporting and review of images
  • Data analysis
  • Image archiving, required because the images are source data
  • Validation of any new methods, if used
  • Training your CRAs in how to monitor studies using the new methods
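As a concrete illustration of the direct-cost estimate mentioned above, the arithmetic might run along the following lines; every figure is an invented placeholder rather than a real quotation:

```python
# Back-of-the-envelope cost per PET scan.  Every number below is an
# invented placeholder, purely to show the shape of the calculation.
equipment_cost   = 2_000_000   # scanner purchase price
useful_life_yrs  = 8           # period over which the cost is amortised
scans_per_year   = 1_000

amortised_equipment  = equipment_cost / (useful_life_yrs * scans_per_year)
tracer_and_materials = 400     # radiotracer dose and consumables per scan
staff_time           = 250     # radiographer, physicist and physician time per scan

direct_cost = amortised_equipment + tracer_and_materials + staff_time
print(f"direct cost per scan: {direct_cost:,.0f}")

# The items in the list above (blinded review, analysis, archiving,
# validation, CRA training) sit on top of this direct figure and are
# typically spread across the whole study rather than charged per scan.
```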

This is clearly going to be much more expensive than using the traditional motor activity rating scales for Parkinson's disease. Will it be worth it? Well, it may be that the regulatory goalposts have moved and rating scales are no longer adequate to get a licence; but if they have not, then what is the commercial benefit of spending the extra money?

Evolution or Intelligent Design?

Clearly, financial modelling needs to be done. What stands in the way of this is the Chinese wall too often erected between marketing and R&D. In a company of which I have fairly recent experience, none of the clinical project managers knew the business case for the studies they were running. In fact, such a discussion was deliberately embargoed by senior clinical managers.

To return to our theme, evolution is what Charles Darwin thought it was, and the basic concept has changed little since he shook the world with his ideas on natural selection. 'Intelligent Design' is among the concepts offered by the proponents of creationism, who believe that all living things were crafted by an external intelligence.

I am wondering into which camp the improvement of the drug development process falls. Is it essentially reactive, in which new practices and technologies are only implemented when driven by regulatory or market changes (ie, the environment)? Or is it proactive – companies have a long-term vision, and are speculatively designing strategies which include technologies required for the future?

The truth is probably somewhere between the two, but the industry is remarkably conservative and may need to be more creative. Only a few examples of the new technologies available for clinical trials have been considered here, but clearly there is a huge potential for a positive effect on the worrying trends identified.

These technologies can be used to ensure that far more of the compounds which survive do so because they really are the fittest. Evolution is blind, but intelligent design, if it exists, is not.
