
Maintaining control: The rising challenges of clinical data management

Pharmafile | June 18, 2018 | Feature

From breach scandals to GDPR, data is becoming a more prominent part of our everyday lives. This holds great potential in the clinical space, but challenges are mounting rapidly, as Matt Fellows discovers.

Data has never been more valuable than it is today. Industry after industry stresses its importance in terms almost indistinguishable from the language used for other precious resources. Yet there seems to be a pervasive indifference towards this gold dust from those who actually own it: each of us in the developed world generates a constant stream of data every day, so innocuous and unremarkable that we severely underestimate its importance. It could be said that consumers are yet to reach the revelatory moment when the inherent value of every retail purchase made or YouTube video watched is fully realised.

Despite this, and perhaps because of it, the use and misuse of data has been very much a topic of discussion in recent months, crashing into public consciousness with the Cambridge Analytica/Facebook scandal, which also lends sobering context to another major development in the space: the recent introduction of the General Data Protection Regulation (GDPR) in the European Union.

While data is an invaluable resource that companies across industries pursue to grow their businesses, in the healthcare space its efficient and responsible use could be one of the cornerstones of better treatment outcomes and healthier populations. From real-world data, which extends beyond the sometimes restrictively formal confines of clinical trials, to the nascent, yet-to-be-realised potential of social media data, the pool of sources and the benefits they offer to patients and clinical professionals is ever expanding. However, the utility of these data is not lying out in the open; it requires hard investment to be properly harvested and to deliver real benefit.

It is a complex affair, and the ever-widening breadth of sources and shifting attitudes toward the regulation of data make the issue ever more challenging to navigate. That we do so is imperative.

Assessing current efficiency

The data management function of any clinical or healthcare institution, and indeed of any company in virtually any industry, serves a crucial role in this landscape. As the challenges mount, ensuring these functions operate efficiently and effectively is key, particularly when time is at a premium and, in extreme cases, could mean the difference between life and death for patients. That is exactly what the Tufts Center for the Study of Drug Development (CSDD) set out to assess with its 2017 eClinical Landscape Study, one of the largest and most in-depth investigations of its kind. Its findings could prove transformative for clinical projects, but before exploring them it is important to understand how a basic clinical data management function operates, as Kenneth Getz, Director of Sponsored Research and Research Associate Professor at Tufts CSDD, outlined for Pharmafocus (a simplified sketch of this workflow follows his comments below): “Data is gathered using the case report form and other procedures that are performed, and the data is then entered into a study database, and that database then contains all of the clinical data by patients. It’s aggregated and that data will be cleaned, it will be checked for inconsistencies and errors, and once the data is clean and its integrity has been assessed and it has been validated to its sources, then the database is locked, and we then move into the analysis phase and the results will be presented in a submission to a regulatory agency.

“That’s the most traditional form of data management and sharing, and the world has changed a lot since the days of the simple case report form and lab data that’s entered into the database, which is then captured by the primary electronic data system at a company,” he continued. “Now, from the complexity of our protocol designs, to the volume of data that’s collected, the different types of data which may include electronic health and medical information, data from wearable devices and smartphones, data from social media – the diversity and volume of data is taking the traditional data management practice and turning it on its head. So we conducted this study to get a read on how the data management function is doing today and to try and understand what impact complexity is having in terms of burden on the data management function and its performance.

“We go out to a broad swathe of companies from emerging biotech and pharma all the way up to the major pharma and biotech companies. In this study we got a tremendous response: 260 unique and verified companies participated. These were typically individuals in the data management function at a more senior level, and they had more than 10 years of experience in that function on average.”
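To make the data management workflow Getz outlines more concrete, the sketch below caricatures it in a few lines of Python: records are entered against simple edit checks, queries are flagged, and the database is locked only once those queries are resolved. The StudyDatabase class, field names and value ranges are invented for illustration and do not represent any real electronic data capture system.

from dataclasses import dataclass, field

@dataclass
class StudyDatabase:
    records: list = field(default_factory=list)
    locked: bool = False

    def enter_record(self, record: dict) -> None:
        # No further entry once the database has been locked
        if self.locked:
            raise RuntimeError("Database is locked; no further data entry allowed.")
        self.records.append(record)

    def find_inconsistencies(self) -> list:
        # Simple edit checks: missing or out-of-range values raise queries
        issues = []
        for r in self.records:
            if r.get("systolic_bp") is None:
                issues.append((r["subject_id"], "missing systolic_bp"))
            elif not 60 <= r["systolic_bp"] <= 250:
                issues.append((r["subject_id"], "systolic_bp out of range"))
        return issues

    def lock(self) -> None:
        # Mirrors the database-lock milestone: lock only once queries are resolved
        if self.find_inconsistencies():
            raise ValueError("Unresolved data queries; cannot lock database.")
        self.locked = True

db = StudyDatabase()
db.enter_record({"subject_id": "001", "visit": "baseline", "systolic_bp": 128})
db.enter_record({"subject_id": "002", "visit": "baseline", "systolic_bp": 305})  # fails the edit check
print(db.find_inconsistencies())  # queries to resolve before the database can be locked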

This broad and robust sample has allowed for an invaluable snapshot of prevailing data management practices and their efficacy in a rapidly changing climate, and it has revealed a number of crucial insights into the timeframes surrounding clinical data handling – some of them quite surprising.

“The eClinical Landscape Study was really trying to put hard data around some of the critical milestones in the data management process,” Getz continues, “and what we found is that on average it’s taking about 36 days to actually lock a database, it’s taking nearly 70 days to build a database, and it takes more than a week from when the patient completes their visit with the investigative site to when the data is inputted into the database. Those cycle times are much longer than people expected, in part because we’ve moved into a more digital environment. I think there was the expectation that we would accelerate cycle times, but they’re all longer than they used to be.”
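The cycle times quoted here are simply elapsed days between data management milestones. As a purely illustrative sketch, with invented milestone names and dates rather than figures from the study, such metrics could be derived like this:

from datetime import date

# Invented milestone dates for a single hypothetical study
milestones = {
    "build_started": date(2017, 1, 10),
    "database_built": date(2017, 3, 20),
    "last_patient_visit": date(2017, 9, 1),
    "last_data_entered": date(2017, 9, 9),
    "database_locked": date(2017, 10, 7),
}

def cycle_time(start: str, end: str) -> int:
    # Elapsed days between two named milestones
    return (milestones[end] - milestones[start]).days

print("Database build (days):", cycle_time("build_started", "database_built"))
print("Visit to data entry (days):", cycle_time("last_patient_visit", "last_data_entered"))
print("Last visit to database lock (days):", cycle_time("last_patient_visit", "database_locked"))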

Sizing up the coming challenges

These findings paint an immediately problematic picture, in which the industry’s capacity to process data effectively is falling behind projections due to unforeseen complications. But why exactly is this the case? Could it be due to the rise in the complexity and breadth of data sources, or something more? “It’s important to recognise this as part of a chain reaction to an underlying or root cause, and the root cause is the protocol design practices,” Getz explains. “We look at the impact of complexity on the study start-up process and the challenge of finding investigators; we look at its impact on recruitment and retention; and we’ve now looked at its impact on the data management function. What we’ve found is that complexity is driving up data volume, and not only is the volume rising but the diversity, as defined by the multiple sources of data applications, is skyrocketing. And that is creating a lot of challenges for organisations. There’s the challenge of accommodation, as you’re now trying to accommodate multiple data streams and a high volume of data, and it’s a challenge in terms of cleaning, curating and integrating the data.”

So, could it be that these challenges are in part due to a failure of infrastructure and processes to keep pace with modern demands in the handling of these higher volumes and broader sources of data?

“The answer is more nuanced and more complicated,” Getz notes. “In a lot of ways the technology is already there, but the processes, the operating procedures, the fragmentation across functions, the delays and inefficiencies between different vendors that are each charged with collecting and returning a piece of data like imaging or lab data – it’s more the disparate and fragmented elements that have to be coordinated and integrated that’s causing much of the challenge.

“Protocol complexity has moved beyond a closed data management environment and it keeps pushing itself into a more open and critical external coordination process that we’re having trouble managing […] It’s really a coordination and integration game now, which wasn’t the case before.”

Beyond these extended milestones and timeframes surrounding the management of data, Getz brought attention to another illuminating finding of the study: “What was even more interesting for us was the variation around the averages: the variation is much higher now than we observed ten years ago, which means there is less consistent practice within and between companies. Even though the average is longer, there’s so much variation around the average, and that is a really important measure of burden and inefficiency.

“The sub-group comparisons caught our attention – the difference in cycle times between contract research organisations (CROs) and sponsors. We generally saw that studies where the data was managed by CRO staff tended to have a faster cycle time, which could suggest that CROs are adhering to more of a standard practice than sponsors. CROs are also more keenly focused on their profitability, so perhaps they’re pushing their resources harder to hit certain timelines. It’s raised some questions about what sorts of practices could be implemented in the short term that would help some of the sponsor companies achieve faster cycle times.”
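Getz’s point about variation can be illustrated with a toy calculation: two sets of database-lock times can differ sharply in consistency even when their averages are not far apart, which is what a spread measure such as the coefficient of variation captures. The numbers below are invented for illustration, not taken from the study.

from statistics import mean, stdev

# Invented database-lock times (days) for two groups of studies
cro_lock_times     = [30, 33, 35, 36, 38, 41]   # relatively consistent
sponsor_lock_times = [18, 25, 33, 40, 46, 68]   # slower on average and far more variable

for label, times in [("CRO-managed", cro_lock_times), ("Sponsor-managed", sponsor_lock_times)]:
    cv = stdev(times) / mean(times)   # coefficient of variation: spread relative to the mean
    print(f"{label}: mean {mean(times):.1f} days, coefficient of variation {cv:.2f}")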

How do organisations tackle the data management challenges highlighted in the report? The answer may actually lie in these varying cycle times.

“Any time we detect that level of variance, it suggests to us that there are probably some best and worst practices that we can ferret out,” Getz notes. “In some cases it may solely be a function of the hyper-complexity of the design of the study, but in other cases it may be tied more to specific practices and the ways that the data coming in from multiple sources is being managed.”

To succeed and overcome these challenges, companies should look to emulate the best practices of the groups highlighted in the report as achieving the most efficient cycle times. These solutions may involve, as Getz outlines, “everything from a blockchain technology to some kind of data hub or unified data repository that can accommodate disparate and diverse data streams”. But the next step for Tufts CSDD and the wider industry may be to delve deeper into these best practices, and to identify in detail exactly what enables them to produce better, more efficient results.
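As a hedged sketch of what a unified data repository for disparate streams might look like in principle, the toy code below maps invented electronic data capture, central lab and wearable records into one common schema so they can be queried together; it is illustrative only, and the source formats and field names do not correspond to any real system or vendor.

from datetime import datetime

def from_edc(rec: dict) -> dict:
    # Hypothetical electronic data capture record
    return {"subject_id": rec["SUBJID"], "source": "edc",
            "measure": rec["TESTCD"], "value": rec["VALUE"],
            "timestamp": datetime.fromisoformat(rec["VISITDTC"])}

def from_lab(rec: dict) -> dict:
    # Hypothetical central lab feed
    return {"subject_id": rec["patient"], "source": "lab",
            "measure": rec["analyte"], "value": rec["result"],
            "timestamp": datetime.fromisoformat(rec["collected_at"])}

def from_wearable(rec: dict) -> dict:
    # Hypothetical wearable device stream
    return {"subject_id": rec["user"], "source": "wearable",
            "measure": "heart_rate", "value": rec["bpm"],
            "timestamp": datetime.fromtimestamp(rec["epoch_s"])}

repository = [
    from_edc({"SUBJID": "001", "TESTCD": "SYSBP", "VALUE": 128, "VISITDTC": "2018-03-01T09:30"}),
    from_lab({"patient": "001", "analyte": "ALT", "result": 31, "collected_at": "2018-03-01T10:15"}),
    from_wearable({"user": "001", "bpm": 72, "epoch_s": 1519900000}),
]
# A single, queryable stream per subject, regardless of where the data originated
print(sorted(repository, key=lambda r: r["timestamp"]))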

More regulation, more problems?

Within the EU, a major disruptive force in the data handling framework of the pharma, healthcare and life sciences industries and beyond is the advent of GDPR, which came into effect on 25 May. The need for such regulation is without question in light of recent highly publicised data breach scandals. Pharmafocus spoke to Richard Binns, Partner at Simmons & Simmons, to drill down into exactly what effect the new regulation will have on the management of data within the clinical space.

“Data generated from clinical trials offers a wealth of opportunities, but as this data is likely to constitute ‘sensitive personal data’, it is only right that the control and processing of this data should be subject to strict data protection laws such as the GDPR,” he explains, and draws an important connection to how the challenges discovered in Tufts’ eClinical Landscape Study have in part necessitated the introduction of such measures: “The rise in the use of very large and diverse data sets – so-called ‘big data’ – is now increasingly important in research and clinical trials and such use poses new challenges for data security and privacy. This was one of the drivers for change which resulted in the GDPR.”

This again reinforces the argument that the complexity and breadth of data is one of the foremost obstacles to be overcome by the industry. But what challenges does GDPR itself pose to those in the clinical space?

“For those involved in running clinical trials, the GDPR will require the strengthening of IT data security requirements to consider ‘privacy by design’ when building their IT platforms to collect, process and store data,” Binns outlines. “A data protection impact assessment is also likely to be required. There will need to be other changes, such as the need to appoint a data protection officer where, for example, the company’s core activities consist of the processing of sensitive data on a large scale and the maintenance of a data processing register. If a company cannot rely on the ‘medical diagnosis or treatment’, ‘public health’ or ‘scientific research’ grounds, it will have to obtain explicit consent from an individual taking part in a trial using clear and plain language, and individuals must be able to withdraw their consent easily. The individual will, in any event, also need to be made aware of a number of matters regarding the use and storage of their data. This will require review of the existing wording in informed consent forms. In addition, technical and organisational safeguards and measures such as pseudonymisation of data should be applied to ensure data minimisation.

“All envisaged processing must be covered in the initial request for consent from the data subject, including any potential intention to mine that data in future,” he continues. “Where a company running a trial is acting as data controller it will be required to plan for data protection from the outset. They will need to demonstrate that they have appropriate technical and organisational measures in place to implement data-protection principles, such as data minimisation and to ensure that personal data is only stored and processed to the extent absolutely necessary. Data controllers processing data concerning health will also be required to perform a privacy impact assessment prior to processing focussing on the proportionality and necessity of processing the data, understanding the potential risks for data subjects and putting in place measures to mitigate such risks. More than ever before, pharma and life sciences will now need to be able to demonstrate that they are adequately protecting the rights of data subjects.”
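As a minimal sketch of two of the safeguards Binns mentions, pseudonymisation and data minimisation, the code below replaces a subject identifier with a keyed hash and strips any field the analysis does not need. The field names and key handling are assumptions for illustration; a real system would manage keys, re-identification controls and retention far more rigorously.

import hashlib, hmac

SECRET_KEY = b"stored-separately-from-the-data"  # assumption: key held outside the research dataset

def pseudonymise(subject_id: str) -> str:
    # Replace the direct identifier with a keyed hash (pseudonymisation)
    return hmac.new(SECRET_KEY, subject_id.encode(), hashlib.sha256).hexdigest()[:16]

def minimise(record: dict, needed_fields: set) -> dict:
    # Keep only the fields the analysis requires (data minimisation)
    return {
        "pseudo_id": pseudonymise(record["subject_id"]),
        **{k: v for k, v in record.items() if k in needed_fields},
    }

raw = {"subject_id": "NHS-1234567", "name": "A. Patient", "date_of_birth": "1970-01-01",
       "systolic_bp": 128, "adverse_event": "headache"}
print(minimise(raw, needed_fields={"systolic_bp", "adverse_event"}))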

The reach of GDPR

Binns also confirmed that the tenets of GDPR do extend to companies operating outside of the EU when their customers are EU citizens. Likewise, the regulation will affect all ongoing trials after it comes into effect, regardless of whether they began before 25 May. Another query is how the UK’s exit from the EU in 2019 will affect its compliance with the regulation, but Binns put these concerns to rest too: “Brexit is not expected to affect the GDPR’s implementation and the intention is that when the UK leaves the EU, it will be incorporated into UK domestic law under the European Union (Withdrawal) Bill. Also currently before parliament is the Data Protection Bill 2017-19 – intended to replace the Data Protection Act 1998 – which will provide a comprehensive legal framework for data protection in the UK, supplemented by the GDPR.”

Despite the demand for industry to meet these challenges, there are a lot of very obvious upsides to the advent of GDPR, particularly in light of recent heightened sensitivities towards the issue of data privacy, but could it bring with it any other foreseeable drawbacks? Getz gave his thoughts: “I think the only downside is if it restricts or limits our ability to use some data that could really inform the development of a new treatment. We don’t know exactly what that would be yet, but we certainly know that with social media data, a lot of the patient-reported outcomes data, quality of life measures and mobile health data, there may be places where we’re not going to be able to use certain data under GDPR.”

Perhaps the truest effects of the regulation will only come to light in practice but, as Binns notes, there is more than simply a moral obligation for organisations to comply: “Those companies involved in clinical trials should be aware of the changes that the GDPR will introduce and review their existing policies, procedures, and practices to ensure compliance. Failing to do so may prove costly: data protection authorities have been given more robust powers to penalise non-compliance under the GDPR, with fines of up to €20 million or 4% of annual global turnover, whichever is the greater, for the most serious of breaches.”
