Analysis paralysis – how to get the true measure of clinical trials
pharmafile | July 20, 2011 | Feature | Research and Development | Les Rose, clinical trials
The great physicist Lord Kelvin famously said: “When you can measure what you are speaking about, and express it in numbers, you know something about it.”
I am not necessarily going to argue that all knowledge is quantitative, and neither perhaps would Kelvin, but it seems to me reprehensible to make a decision about something quantifiable without measuring it. Yet in pharmaceutical R&D we often seem to make major management decisions without knowing the numbers.
A few years ago I was consulting for a large development company that had invested a huge amount of money in a very complex project management system. It was supposed to be the state of the art, and I was intrigued because it was based on Critical Chain theory, a more advanced model in the discipline. The user training was long and detailed. By the time I got to see it, it had been in use for about a year.
I asked how much impact on clinical trial completion time it had made. The answer was that the company had not measured anything before, so nobody could assess whether there had been an improvement.
Analysis paralysis
By Kelvin’s dictum, this company’s decision was rather poorly informed. But that is not to say that they really didn’t measure anything.
On the contrary, everyone had to complete multifarious forms tracking all sorts of things in minute detail, so my guess is that, given the right calculations, they probably could have derived the necessary metrics. I think they might have been falling into a state of ‘analysis paralysis’, a kind of ‘If it moves, track it’ obsession, without a really big picture concept of what it’s all for. So my opening theme is, what should we measure, and why?
At the Institute of Clinical Research annual spring conference in March, I heard that a 30% improvement in study completion time could be worth $150 million for a major drug. I don’t think there is much doubt that the top metric must be cycle time, applied at all levels of detail from the whole programme right down to individual tasks and deliverables. It has long been known that companies that blow their budgets and deliver on time do much better financially than those that stick to budgets and deliver late – even slightly late. In pharmaceuticals, this is a particularly acute lesson to learn, because if you are third into an established class or market, you have very little chance of recovering your R&D costs, no matter how good your drug is. Yes, there have been exceptions – lots of beta-blockers did fairly well back in the 1970s-80s – but I am talking about the broad sweep of all drugs.
Well that is a commercial argument, but there is also a human and ethical one. Hopefully we all believe that drug discovery and development is all about meeting unmet medical need, although I concede that genuine innovation is rather more elusive these days. If we do have a drug with important clinical benefits, does it not behove us and the regulators to deliver that to patients as soon as possible? Yet completing a clinical programme a year later than planned seems to be quite normal in my experience – at least that is what people tell me on training courses!
This rather sad performance is not for want of trying. For decades, there has been a developing movement to benchmark important cycle times and apply them across companies. CMR International, now owned by Thomson Reuters, is one of the leaders, and collects data on achieved cycle times from subscribing companies. CMR’s finding is that total clinical development time is no better than it was 10 years ago.
Digging deeper, there are regional variations, and particular stages do show improvement – for example, study set-up is getting better but patient recruitment is getting worse. Of course, such a process is totally dependent on the subscribing companies telling the truth about what they are actually achieving. There is no practical way of quality assuring the data, although as all data are anonymised there is no incentive to falsify them.
A more recent initiative is the Metrics Champion Consortium (MCC), which aims to standardise metrics for clinical trials. It has published survey data showing how poorly metrics are used by pharmaceutical companies, with fewer than 10% having any clearly defined metrics in place. Strangely, while CROs mostly generate lots of metrics for clients, sponsors typically do little or nothing with them.
MCC’s published list of metrics would form the basis for a project management template; they are grouped into stages that drive key milestones. At first glance, cycle times and quality predominate, although MCC admits that measuring quality is more difficult. It is certainly the trickiest member of the classic project management triumvirate of time, cost and quality. The tragedy is that it too often ends up in the ‘too difficult’ tray. That is not to say that quality per se is ignored, but that it is seen as another layer or system that is somehow separate from project management.
The benefits of improving quality early on in the study are very clear, as it greatly shortens cycle time later by avoiding repeat work. So one way of rendering quality into the numbers would be to track repeat work as a distinct item – a kind of negative metric. My data management colleagues, necessarily more numerate than I, do derive useful statistics on quality, such as error rates and re-query rates (where original data queries have not been answered properly).
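To show what I mean, here is a minimal sketch of such negative metrics in code. The record layout and field names are invented for illustration; any real data management system will have its own:

```python
# Minimal sketch of 'negative metrics' for data quality.
# The record structure (site, field, n_queries, reissued) is
# hypothetical, not taken from any specific EDC system.

def error_rate(records):
    """Proportion of data fields that raised at least one query."""
    queried = sum(1 for r in records if r["n_queries"] > 0)
    return queried / len(records)

def requery_rate(records):
    """Proportion of queried fields whose first answer did not
    resolve the query and had to be reissued."""
    queried = [r for r in records if r["n_queries"] > 0]
    reissued = sum(1 for r in queried if r["reissued"])
    return reissued / len(queried) if queried else 0.0

records = [  # illustrative data only
    {"site": "001", "field": "AE_ONSET", "n_queries": 2, "reissued": True},
    {"site": "001", "field": "DOB",      "n_queries": 0, "reissued": False},
    {"site": "002", "field": "DOSE",     "n_queries": 1, "reissued": False},
]
print(f"Error rate:    {error_rate(records):.0%}")    # 67%
print(f"Re-query rate: {requery_rate(records):.0%}")  # 50%
```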
I have recently been looking at the timeliness of Serious Adverse Event (SAE) reporting by sites, more from a regulatory standpoint. But this also feeds into data quality, via reconciliation with the clinical database, which matters when I am preparing safety reports.
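The same approach works for SAE timeliness. In this sketch I assume a 24-hour site-to-sponsor reporting window – the actual requirement is set by the protocol and local regulation – and the dates are invented:

```python
# Sketch: distribution of SAE reporting lag, from site awareness
# to sponsor notification. The 24-hour window is an assumption;
# protocols and regulations set the real requirement.

from datetime import datetime
from statistics import median

sae_reports = [  # (site aware, sponsor notified) -- illustrative only
    (datetime(2011, 3, 1, 9, 0),  datetime(2011, 3, 1, 16, 0)),
    (datetime(2011, 3, 4, 14, 0), datetime(2011, 3, 7, 10, 0)),
    (datetime(2011, 3, 9, 8, 0),  datetime(2011, 3, 9, 20, 0)),
]

lags_h = [(notified - aware).total_seconds() / 3600
          for aware, notified in sae_reports]
late = [lag for lag in lags_h if lag > 24]

print(f"Median lag: {median(lags_h):.1f} h")
print(f"Reports beyond 24 h: {len(late)} of {len(lags_h)}")
```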
Honesty and collecting real data
Whatever is being measured, it is important that we are working with real data. I once expressed admiration for a company whose project reports showed that their actual study completion dates were almost all identical to the planned dates.
“Ah yes” they said, “that’s because senior management keeps changing the planned dates”. So percentage lateness was always zero!
Now that’s something that should be quite easy to measure truthfully. It may not always be welcome news, but I have never subscribed to the view that no news is good news. No news usually means that someone does not want to tell you something.
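The fix is to measure against a frozen baseline. Here is a minimal sketch, with invented dates – the point is simply that percentage lateness is computed from the original planned date, not from whatever the plan says this week:

```python
# Percentage lateness measured against the original baseline,
# so that re-planning cannot hide slippage. Dates are illustrative.

from datetime import date

def pct_lateness(start, baseline_end, actual_end):
    """Slip as a percentage of the originally planned duration."""
    planned_days = (baseline_end - start).days
    slip_days = (actual_end - baseline_end).days
    return 100 * slip_days / planned_days

# A planned 12-month study that finished about three months late
late = pct_lateness(date(2010, 1, 1), date(2010, 12, 31), date(2011, 3, 28))
print(f"{late:.0f}% late")  # 24% late
```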
What about cost? It should also be very easy to measure, as every company has an accountant. But let’s not forget that counting beans is not like counting people. For anything calling itself an organisation, its most expensive resource is people, but I am staggered that most of the companies I deal with do not track the cost of staff effort. It really isn’t difficult, but neither is it very interesting for the individuals to do. It comes down to completing time sheets, allocating blocks of time to projects and tasks. But again, it has to be realistic information.
I have known a lot of companies that get people to do this monthly – but who remembers how many hours they spent on something a month ago? Collected monthly, the metric is useless; timesheets have to be completed at least weekly. Done properly, though, the data are fantastically useful. Consider this.
We are doing study set-up, and we have 45% of sites ready to go. But we have spent 60% of the working hours allocated for that task.
Are we going to finish on time? No, we won’t, unless we do something about it, such as recruiting more CRAs or reducing their workload – the quick projection sketched after this paragraph shows why. I have discussed the value of metrics during the study, but they are also vital after completion. How can you determine whether the project was a success? That will depend on how success was defined in the plan. Quality will be one aspect of technical success, but I am thinking more of commercial success. A clinical trial might even be a commercial success if it shows that the drug does not work – this would inform the decision to stop development, helping the company cut its losses and divert money elsewhere.
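Here is that back-of-the-envelope projection of the set-up scenario. It assumes effort scales roughly linearly with the proportion of sites readied, which is of course a simplification:

```python
# Projection for the set-up scenario above: 45% of sites ready,
# 60% of budgeted hours already spent. Assumes effort per site
# is roughly constant -- a simplification, but a useful early warning.

def projected_overrun(pct_complete, pct_hours_spent):
    """Estimated total effort as a multiple of budget, plus the overrun."""
    projected_total = pct_hours_spent / pct_complete  # e.g. 0.60 / 0.45
    return projected_total, projected_total - 1.0

total, overrun = projected_overrun(0.45, 0.60)
print(f"Projected effort: {total:.0%} of budget ({overrun:.0%} overrun)")
```

On those numbers we are heading for about a third more effort than budgeted – exactly the kind of early warning that weekly timesheets make possible.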
The key questions are, what did it cost us to get that decision, and how long did it take? This is part of the post-project review that is vital for planning new programmes, and which is too often overlooked.
In this way, the right metrics can help us to look into the future. Kelvin was not always right about that; he thought everything in science had been discovered, and could not see any use for radio at first. But he changed his mind when he received a wireless telegraphy message on a ship. We should similarly be prepared to change when the numbers tell us we are wrong.
Les Rose is a freelance clinical scientist and medical writer.