
Data-driven insights: Keeping pace with the technological boom

Pharmafile | June 8, 2017 | Feature

If you want to understand the importance of data, look at Google and Facebook – neither company is out of its teenage years, and yet, through the power of information, they are two of the most valuable companies in the world. Ben Hargreaves examines how the data revolution is now transforming the pharmaceutical landscape

The importance of data in the pharmaceutical industry has grown enormously in the last decade as the industry strives to adapt to the digital age. Many pharmaceutical companies are now engaging with patients on social media, and some are even setting up their own social media networks within particular disease areas.

The transition has not been an entirely smooth one – the pharma industry still has a reputation for conservatism when it comes to grasping the latest innovations, preferring to dip a toe in the water first or to watch how rivals fare after making the move. This is an understandable approach: the big pharma companies are multinational corporations that can be unwieldy in the face of rapid change, to say nothing of the cost of implementing any overhaul. If real-world data, for example, becomes more widely adopted, the initial start-up costs would be far larger for the bigger companies that run a large number of trials.

There is no doubt that pharma companies are realising the vast potential of data-driven solutions. The larger organisations are acquiring, and collaborating with, technology-based drug discovery firms at a rapid rate. Many have already teamed up with the bigger technology firms, such as IBM, which offers Watson Health for Drug Discovery, to gain the advantages that big data companies possess without incurring the huge expense of bringing such expertise in-house.

The pharma industry has, in the last few years, been driven primarily by an M&A approach to growth. Coupled with the larger companies' hesitation to adopt developing technology, this has allowed smaller, more agile biotechs to thrive – firms with a tendency to adapt or invent new means of discovering potential therapies and to be more tightly focused in their drug development. They are then often snapped up by larger organisations eager to get their hands on the technology or the pipeline.

This has led to the current climate that people refer to as a ‘biotech boom’ or, perhaps more ominously, a ‘biotech bubble’. The valuations of biotech companies are liable to spike or plummet on a single press release, but when they spike on positive news for a drug candidate, there can be a rush of offers for the company. It leads to situations where companies are built, much as in the tech sector, with the aim of achieving that one successful hit and the buyout that follows.

One individual who has already achieved a big buyout, but is back in the game of building a company, is BenevolentAI Founder and Director Ken Mulvany. He was CEO of Proximagen, a drug development company eventually sold to Upsher-Smith in a deal worth up to $555 million. He has since begun a new venture, using AI to identify potential drug candidates.

The use of AI within the healthcare industry is growing rapidly: a computer can assess vast quantities of data and analyse them algorithmically to produce insights that point towards new drug candidates, better inform precision medicine or aid the diagnosis of a patient’s condition. It can do so far more quickly than an individual, or even a group of scientists; software can crunch far more data than any person, and so deliver results much more efficiently.

Speaking to Pharmafocus, Mulvany explained how BenevolentAI utilises artificial intelligence for the practical purpose of drug discovery: “Our proprietary AI technology generates ‘usable knowledge’ from vast volumes of unstructured information such as textbooks, formulas, scientific literature, patents, clinical trial information, conversations and images together with a large number of structured data sets. It works to understand information by employing an array of proprietary deep learning linguistic models and algorithms to analyse and understand context; then reasons, learns, explores, creates and translates what it has learnt to produce unique hypotheses. We then take these hypotheses and, uniquely for an AI company, validate them with our own scientists and begin to develop them clinically.

“Simply put, the technology is looking for what ‘should’ be known from what ‘is’ known – enabling scientists to see deeply into vast research data sets and augment their scientific intuition across the entire body of knowledge assimilated by the system. This allows rapid formation and qualification of hypotheses, and subsequent innovation, in a way that was previously impossible.”
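BenevolentAI’s technology is proprietary, but the general idea of literature-based hypothesis generation – finding what ‘should’ be known from what ‘is’ known – has a long pedigree, going back to Swanson’s ‘ABC’ model: if the literature links a drug to a target, and the target to a disease, but never mentions the drug and the disease together, the missing link is a candidate hypothesis. A minimal Python sketch of that idea, with invented abstracts and entity lists:

# A deliberately simplified illustration of literature-based hypothesis
# generation (Swanson-style "ABC" co-occurrence inference), NOT
# BenevolentAI's proprietary system. Abstracts and entities are made up.
from collections import defaultdict
from itertools import product

abstracts = [
    "drug_x inhibits kinase_a in cell models",
    "kinase_a is overactive in disease_y patients",
    "drug_z binds kinase_b with high affinity",
]
drugs = {"drug_x", "drug_z"}
targets = {"kinase_a", "kinase_b"}
diseases = {"disease_y"}

# Record which known entities co-occur in the same abstract.
links = defaultdict(set)
for text in abstracts:
    ents = set(text.split()) & (drugs | targets | diseases)
    for a, b in product(ents, repeat=2):
        if a != b:
            links[a].add(b)

# ABC inference: a drug links to a target, the target links to a
# disease, but no abstract mentions drug and disease together –
# that missing edge becomes a hypothesis to hand to scientists.
for drug in drugs:
    for target in links[drug] & targets:
        for disease in links[target] & diseases:
            if disease not in links[drug]:
                print(f"hypothesis: {drug} may be relevant to {disease} via {target}")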

It was at the start of this year that BenevolentAI became the first artificial intelligence-based company to appoint a Chief Medical Officer, naming Dr Patrick Keohane to the role. This integration of the technology with scientific expertise has already paid dividends for the team.

BenevolentAI signed an $800 million deal in 2014 with an unnamed US-based pharma company to sell the rights to two Alzheimer’s disease drug targets. Beyond this, Mulvany outlined the company’s other successes: “We have validated a further 22 hypotheses, which are at various stages of clinical development in the areas of inflammation, neurodegeneration, orphan disease and rare cancers. We are also entering our first Phase IIb later this year and have a rich portfolio of granted patents covering a large series of compounds.”

It is a sign of how quickly technology is progressing, and of the possibilities it brings, that Mulvany notes it took 10 years to develop 15 drug candidates at his previous company, Proximagen. At BenevolentAI, by comparison, it has taken only four years to identify 24 potential targets, with the aid of the AI system.

Mulvany predicts that the tech boom is only going to continue to transform the healthcare landscape: “The next five years are going to see more transformation in healthcare than the previous 50. Much of that transformation will come from technology and much of that technology will be AI-driven – everything from dramatically faster drug development, democratised healthcare, monitoring, diagnosis, advice and personalised medicine, to more sophisticated mobile health, remote clinical consultation and better, closer and more effective patient and machine interaction.

“While AI has broad applications across the healthcare industry, drug discovery offers an area where the technology can truly disrupt the industry and generate significant time savings, cost reductions and efficacy improvements over the existing drug development process. Goldman Sachs recently suggested that AI can de-risk the drug discovery and development process, removing $26 billion per year in costs, while also driving efficiencies in healthcare information worth more than $28 billion per year globally. This is significant at a time when approved drug development costs have increased by 33% since 2010, according to Deloitte.”

The data-driven revolution does not stop at drug discovery – it extends to almost all aspects of healthcare. AI is already used in hospitals throughout the UK, even if the general public is largely unaware of it. A few months ago, Pharmafocus interviewed the AI healthcare firm Sophia Genetics, which works with hospitals to examine a patient’s genomic data and determine the best form of treatment.

The AI is used in 285 hospitals across the globe, and the more it is used, the more information it has to work with. This means it can refine its results by ‘learning’ the connections that exist in public health data, while the team behind it can pursue new avenues for this intelligence – for instance, in oncology. Uptake of the technology is increasing slowly, but the company has only existed since 2011, and the expectation is that its user base will grow over time – to the point, as Mulvany suggested, where the technology becomes ubiquitous within a few short years.
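Sophia Genetics’ pipeline is not public, but the ‘more users, better model’ effect described here is a general property of statistical learning, and can be sketched with scikit-learn’s incremental-learning API. Everything below – the feature counts, batch sizes and any accuracy figures printed – is synthetic, invented purely to illustrate the trend:

# A minimal sketch of how predictive accuracy tends to improve as
# more sites contribute data; a generic illustration on synthetic
# data, not Sophia Genetics' actual system.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def hospital_batch(n=200):
    """Simulate one hospital's worth of labelled genomic features."""
    X = rng.normal(size=(n, 20))
    y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)
    return X, y

X_test, y_test = hospital_batch(1000)
model = SGDClassifier(random_state=0)

# Each new participating hospital contributes a batch; accuracy on a
# held-out set tends to climb as the pooled training data grows.
for hospital in range(1, 11):
    X, y = hospital_batch()
    model.partial_fit(X, y, classes=[0, 1])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"after {hospital} hospitals: accuracy {acc:.2f}")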

One organisation that is a comparative veteran in the field of data-driven insights is Lhasa Limited. Founded in 1983, the not-for-profit company was originally established to assist chemists in the design of complex organic molecule syntheses, using a computer system known as Logic and Heuristics Applied to Synthetic Analysis – hence Lhasa.

Technology has progressed rapidly since the company’s inception, and the organisation now develops expert computer systems for toxicity and metabolism prediction. It provides a number of extensive, continually updated knowledge bases and the software needed to interrogate them. This means that pharma organisations can consult the software during the drug discovery process to ascertain the potential toxicological profile of a particular chemical; Lhasa draws on the scientific data accumulated over its decades of existence to make and analyse these predictions.
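Lhasa’s knowledge bases are proprietary and far richer than anything reproduced here, but the core ‘structural alert’ idea behind such expert systems – matching a query molecule against curated substructures associated with known toxicities – can be sketched in a few lines of Python with the open-source RDKit toolkit. The two alerts below are classic mutagenicity concerns from the literature; the toy knowledge base itself is invented for illustration:

# A minimal structural-alert screen: flag molecules containing
# substructures linked to known toxicities. A toy stand-in for an
# expert system, not Lhasa's software or rule set.
from rdkit import Chem

# SMARTS patterns for two substructures long flagged in the
# mutagenicity literature (aromatic nitro, aromatic primary amine).
ALERTS = {
    "aromatic nitro": "[c][N+](=O)[O-]",
    "aromatic amine": "[c][NX3;H2]",
}

def screen(smiles: str) -> list[str]:
    """Return the names of any alerts the molecule matches."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse {smiles!r}")
    return [name for name, smarts in ALERTS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

print(screen("c1ccccc1[N+](=O)[O-]"))  # nitrobenzene -> ['aromatic nitro']
print(screen("CCO"))                   # ethanol -> []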

Pharmafocus spoke to Lhasa’s current Director of Science, Dr Chris Barber, who will be appointed CEO on 13 June, for more detail on what the organisation offers companies within the pharmaceutical industry and how the industry’s attitude towards data is beginning to change.

He first explained the range of data Lhasa works with: “There are lots of different levels of information. There’s raw data – individual experimental determinations – data that can be made public, which Lhasa can work with. However, quite a lot of the data we see is proprietary: it cannot be published, cannot be put in the public domain and cannot be shared with any competitors. Nonetheless, the knowledge Lhasa’s scientists can derive by looking at that data across multiple organisations can be shared. That’s where the true value of this data comes from. People can spend many thousands of pounds testing compounds, and when you don’t capitalise on all of that information, it is wasted.”

Barber was keen to stress that companies are now much more willing to share the information they create, in an attempt to avoid wasting the valuable knowledge generated by trials. Lhasa has emerged as a kind of mediator between pharma companies, allowing them to share data in order to produce safer drugs. It works on a quid pro quo basis, and Barber explained how sharing is now a strength where once it could have been viewed as a weakness: “Working with pharmaceutical companies’ proprietary data is becoming easier than it has been in the past, and it is becoming more accepted and more recognised that there is power in data sharing at pre-competitive levels. Most companies still take a very cautious approach, but consider what it means when a company says: ‘We don’t want to share any of our data.’ What they’re actually saying is that their data is valuable to them and gives them a competitive advantage. However, you can frame it the other way: ‘If you don’t share your data, then you won’t see the data that other companies are sharing.’ Not being part of a consortium actually comes at a cost. For example, if a number of varied companies are sharing their data, then there are a number of areas you can learn from without making the same mistakes; the one company that won’t share its data is then placed behind the others in terms of knowledge. When it’s framed like this, it’s easy to understand that data sharing offers more value than any possible risk.”

This analysis effectively explains how the power of sharing data leads to real insights. Accumulating and hoarding data no longer works when there is such a range of information to work with. Making the most of the intelligence being produced requires specialised knowledge, whether in the form of specific algorithms or software, which is why so many partnerships are springing up between data-processing firms and pharmaceutical companies. Managing these tasks in-house is rarely feasible; the most pragmatic option is to reach out to other companies for help with a time-consuming, specialist process. And once the initial step has been taken to release data to another company, there is less reluctance to go beyond that one-way relationship and forge a network, as has happened with Lhasa.

Barber explained the range of material on offer to the company and how each level offers something unique: “We work with public or non-proprietary data that our scientists have extracted from literature and publications, which is then put into a structured, searchable database. This data is more accessible than having to trawl through the literature to find it. It’s important to recognise that there are always errors in data, so having experts collect and curate that data means the number of errors drops. Even with public datasets, we’ll have to go through to correct and curate the data to make it more accurate. This is the easiest way of sharing data.

“We also operate consortia in which companies work together and share the datasets themselves within that group. There are particular areas where companies want to share data but don’t want to make it public – and there would be little incentive to join if one company could publish only a small part of its data in exchange for access to everyone else’s. Requiring everyone to contribute equally is what brings companies together. We have one such area in mutagenic impurities: by sharing data there, companies don’t have to check everything for themselves, as they can consult datasets that have been curated and put into a database instead.

“Another way is using proprietary data to share knowledge around an area to learn more broadly about trends. For example, without being specific, it could be revealed that ‘compounds like this’ – with these features – have this risk associated with them.”

The theoretical side of data sharing clearly holds great potential and attraction for the industry. In some ways, the work Lhasa does could even be seen as a model, and the concept of sharing data, or even drug compounds, is gaining wider interest. The NCI drug formulary, launched as part of the National Cancer Institute’s Cancer Moonshot, also involves sharing drug compounds, and the data gleaned from them, to explore the possibilities of using certain drugs in combination.

In practical terms, Lhasa helps companies make their drugs safer – and, through the sharing process, makes the industry’s collective compounds safer too. Barber discussed how Lhasa uses the data to make a practical difference and what its knowledge offers companies: “One of the large areas for us is mutagenic impurities, where certain chemicals have the potential to cause cancer. Whenever a new drug is created, it is very likely to contain impurities – trace amounts of other chemicals that cannot be removed during synthesis. You want to know whether these chemicals can increase the risk of cancer in patients taking the drug. There are very strict regulatory guidelines that allow you to determine what thresholds can be permitted in a drug without significantly increasing the risk of cancer. If you know such impurities exist, and somebody else has already taken the time to extract or purify one, having access to that data can save you months of time.”
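Barber’s point about thresholds can be made concrete. Under the ICH M7 guideline, a mutagenic impurity lacking compound-specific data is typically controlled to a generic threshold of toxicological concern (TTC) of 1.5 micrograms per day for lifetime exposure; what that means as a concentration limit depends on the drug’s maximum daily dose. A back-of-envelope sketch in Python, with illustrative figures:

# Threshold arithmetic for mutagenic impurities: convert a daily
# intake limit into a concentration limit. The TTC value follows the
# generic ICH M7 lifetime-exposure default; the doses are invented.
TTC_UG_PER_DAY = 1.5  # acceptable daily intake of the impurity

def allowed_ppm(max_daily_dose_g: float) -> float:
    """Concentration limit (ppm) for a given maximum daily dose.
    1 ppm = 1 microgram of impurity per gram of drug substance."""
    return TTC_UG_PER_DAY / max_daily_dose_g

# A drug taken at 0.5 g/day may carry up to 3 ppm of the impurity;
# at 2 g/day the limit tightens to 0.75 ppm.
for dose in (0.5, 2.0):
    print(f"{dose} g/day -> {allowed_ppm(dose):.2f} ppm")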

Time is the key when companies talk about using data and the means of filtering it. Computing power has risen to the point where unimaginable volumes of data can be processed in short spaces of time. In an industry built on firsts – to discover a compound, to have an application approved, to get to market – time is crucial, and so, therefore, is using the emerging technologies that can speed up traditional ways of doing business. Ken Mulvany’s prediction that we will not recognise the healthcare industry in five years’ time may seem unrealistic, but stranger things have happened. As we entered the new millennium, genomic sequencing cost $100 million per person; 15 years later, the cost was just $1,000. This speed of improvement will be seen across the board in the coming years, and it is imperative, if the growing healthcare challenge is to be met, that everyone gets on board with the change.

Ben Hargreaves
