
What’s driving change in drug discovery and development?

pharmafile | November 30, 2017 | Feature

Dr Chris Barber, CEO of not-for-profit organisation and educational charity Lhasa Limited, discusses new innovations and ideas that are set to affect the drug discovery space.

Drug discovery is a fast-moving field. The past decade alone has seen huge advances in both scientific research and technological innovation, with the potential to improve efficiency, raise success rates and deliver novel medicines across a growing range of indications.

In our rapidly changing sector, it’s crucial that we harness the latest technologies and approaches in order to drive improvements. At Lhasa Limited, we recognise three significant ideas with the potential to positively transform the industry over the coming years:

Artificial intelligence (AI) and machine learning from big data

According to estimates from the Association of the British Pharmaceutical Industry, it takes around 12 years and an average of £1.15 billion to bring a drug to market. As costs continue to increase, AI has the potential to improve efficiency – for example, by learning from the existing body of knowledge and data more effectively than any human could hope to do unaided.

The distinction between AI and machine learning is often poorly drawn. In our view, machine learning means encoding a predefined algorithm into a computer, which is then ‘fed’ data in order to build a model. In contrast, AI attempts to emulate human thinking by enabling the computer to identify the appropriate approach given the question and the available data.

At present, machine learning requires data of a minimum size and consistency in order to produce reliable results. Unfortunately, in many cases, there are simply too few well-structured and consistent data sets to allow machine learning to work outside narrow areas of chemical space. However, AI methods are becoming increasingly effective at learning from the much larger sets of unstructured data to build statistically-based models which an expert can then verify as being scientifically relevant. Experts can use these as predictive models in their own right, or as a spur to further knowledge discovery.
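To make the machine-learning half of that distinction concrete, the sketch below trains a simple statistical model on a small, structured set of chemical descriptors. It is a minimal illustration under assumed data: the file name, descriptor columns and endpoint are hypothetical, and a real model would need far more data plus the expert verification described above.

```python
# Minimal sketch: a statistical model learned from structured descriptor data.
# The file name, column names and endpoint below are hypothetical examples.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Structured, consistent data: one row per compound, numeric descriptors
# plus a binary toxicity outcome (1 = positive in the assay, 0 = negative).
data = pd.read_csv("compound_descriptors.csv")                # hypothetical file
X = data[["logP", "mol_weight", "tpsa", "h_bond_donors"]]     # hypothetical descriptors
y = data["ames_positive"]                                     # hypothetical endpoint

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Internal validation only; external validation and expert review would still be needed.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```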

With the emergence of techniques to absorb big data – which in our field can include text data from published articles and reports, or any of the ‘omics datasets – companies can apply evolving AI techniques to gain new insights that support drug development.

A mechanistic view of toxicity

A mechanistic view of toxicity has always been desirable, but only recently have approaches to make this tractable gained the traction needed for more widespread adoption.

This framework, termed Adverse Outcome Pathways (AOPs), shares much with the concept of Modes of Action (MOAs); both help us to relate the causes of toxicity to the effects that may be observed. This, in turn, has fuelled the development of Integrated Approaches to Testing and Assessment (IATAs) and other testing strategies that can absorb information from cheaper and simpler assays rather than being wholly dependent upon observations from complex in vivo assays.

AOPs capture the cascade of key events from a Molecular Initiating Event (MIE) to an Adverse Outcome (AO), expressing the toxicity in a biological context. MOAs are similar, though they are chemical-specific. The cascading nature of these events means that although pathways from an MIE to an AO can be devised, in reality they form part of a more complex network of events, which can also be viewed quantitatively from the perspective of systems pharmacology.
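One simple way to picture such a pathway is as a small directed graph of key events. The sketch below is purely illustrative: the event names are loosely based on the well-known skin sensitisation pathway, and the plain-dictionary representation is an assumption rather than an established AOP encoding.

```python
# Illustrative sketch only: an AOP represented as a directed graph of key events.
# Event names are examples; this is not an established AOP encoding.
aop = {
    "MIE: covalent protein binding": ["KE1: keratinocyte activation"],
    "KE1: keratinocyte activation": ["KE2: dendritic cell activation"],
    "KE2: dendritic cell activation": ["KE3: T-cell proliferation"],
    "KE3: T-cell proliferation": ["AO: skin sensitisation"],
    "AO: skin sensitisation": [],
}

def paths(graph, node, target, trail=()):
    """Enumerate key-event paths from an MIE to an adverse outcome."""
    trail = trail + (node,)
    if node == target:
        return [trail]
    return [p for nxt in graph[node] for p in paths(graph, nxt, target, trail)]

for p in paths(aop, "MIE: covalent protein binding", "AO: skin sensitisation"):
    print(" -> ".join(p))
```

In a real network there may be several routes from an MIE to an AO, which is exactly why the pathway view shades into the network and systems pharmacology view described above.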

As an industry, we can use the knowledge captured by pathways to express an understanding of toxicity. However, the pathways’ key events are typically not directly measurable, and both in vitro and in vivo assays act as proxies for these key events, with greater or lesser reliability.

We can use statistical relationships between assays and their adverse outcomes to quantify the assays’ reliability (including sensitivity and specificity). Reliability may be innate to the assay, but it may also vary by chemical. Using both the knowledge of an assay’s biological context and its statistical relationship to other assays allows the selection of the most appropriate and informative assay for a particular situation.
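As a concrete illustration of how assay reliability can be summarised, the sketch below computes sensitivity and specificity from simple counts against a reference outcome. The counts are assumed example values, not data from any real assay.

```python
# Generic sketch: sensitivity and specificity of an assay against a reference
# adverse outcome. The counts below are assumed example values only.
true_positives = 42    # assay positive, adverse outcome observed
false_negatives = 8    # assay negative, adverse outcome observed
true_negatives = 180   # assay negative, no adverse outcome
false_positives = 20   # assay positive, no adverse outcome

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # chance the assay flags a true toxicant
print(f"Specificity: {specificity:.2f}")  # chance it clears a true non-toxicant
```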

Ultimately, events in a toxicity pathway can be thought of as hurdles that must be cleared, or defences that must be defeated, before the toxicity is expressed. In these cases, dose and dose–response information might be used to provide quantitative relationships. However, this is a longer-term goal that will require an understanding of the pathways leading to toxicity that is both deeper and wider than we currently have.

Acceptance of in silico predictions

In silico predictions can offer many benefits over other testing methods, in addition to a desirable reduction in animal testing. They are often cheaper to run (in both time and money), and they can also be more reproducible and more relevant than other methods. The lack of reproducibility of wet assays is a significant cost, since it drives the need for replicate studies, and any in vitro or animal in vivo model is only a surrogate for human toxicity that may not accurately predict the effects that would be seen in man. This latter challenge of course also applies to in silico models and is one of the most difficult questions to answer. A large step towards defining what is needed has been taken with the OECD’s five principles for the validation of (Q)SAR models, but our experience within Lhasa suggests that these are still not sufficient, and that a shift in thinking is required from ‘when can the model be used?’ to ‘can I trust this specific prediction?’

In our experience, users require:

– A biological (mechanistic) explanation of the model

– An explanation of how the predictions have been derived, covering the algorithm used or the rules invoked, including any training sets and assumptions about how predicted values can be modelled

– The model’s historical performance on both internal and external validation sets, to show how well the model captures the endpoint

– A measure of confidence in any specific prediction – how likely is the model to give the right answer, given appropriate measures of similarity to analogues known to the model (builder) and the consistency with which they show a similar outcome? (a minimal sketch of one such measure follows this list)

– A sufficient quantity of transparent supporting data and knowledge to allow an expert to review the model’s prediction and decide when to accept or overturn it.
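To illustrate the confidence and supporting-data points above, the sketch below shows one simple way such a measure might be derived: comparing a query compound’s fingerprint to analogues known to the model and checking how consistently those analogues share an outcome. The fingerprints, similarity threshold and scoring are hypothetical illustrations, not Lhasa’s method or any regulatory standard.

```python
# Illustrative sketch: confidence in a specific prediction, based on similarity
# to analogues known to the model and the consistency of their outcomes.
# Fingerprints, threshold and scoring are hypothetical, not an established method.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints held as sets of 'on' bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Known analogues: (fingerprint bits, observed outcome) - assumed example data.
training_set = [
    ({1, 4, 9, 23, 57}, "positive"),
    ({1, 4, 9, 22, 57}, "positive"),
    ({2, 8, 31, 44, 90}, "negative"),
]

query_fp = {1, 4, 9, 23, 60}
SIMILARITY_THRESHOLD = 0.5

neighbours = [(tanimoto(query_fp, fp), outcome)
              for fp, outcome in training_set
              if tanimoto(query_fp, fp) >= SIMILARITY_THRESHOLD]

if neighbours:
    outcomes = {o for _, o in neighbours}
    consistency = max(sum(o == c for _, o in neighbours) for c in outcomes) / len(neighbours)
    print(f"Close analogues: {len(neighbours)}, outcome consistency: {consistency:.0%}")
else:
    print("No sufficiently similar analogues: low confidence, expert review needed")
```

The supporting analogues and their outcomes are exactly the kind of transparent evidence an expert would want to see when deciding whether to accept or overturn a prediction.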

The level of transparency and accuracy required will depend upon the specific decision being made – prioritisation decisions will have much lower thresholds than ‘regulatory decisions’, those made by regulators who are the final arbiters before permitting exposure to humans. Sufficient support is then required both to make and to defend a decision in order to minimise human risk. If in silico predictions are to replace either in vitro or in vivo experiments, then the accuracy of a negative prediction is crucial: if a negative prediction is to be accepted, the risk of missing a potential new route to toxicity must be understood. So far, in silico predictions have been accepted by regulators for the prediction of genotoxic impurities under the ICH M7 guideline, and it is expected that other endpoints will follow – skin sensitisation is currently close, with in silico models able to suggest which assay or combination of assays can be used in lieu of animal testing.

As the application and acceptance of in silico models increase, earlier decisions about whether to progress a compound can be made more efficiently.

The combination of these three key developments – AI and machine learning from existing data and knowledge, the description of toxicity pathways to help us understand when assays are appropriately predictive of human toxicity, and the acceptance that in silico models can sometimes safely be used in place of wet assays – offers great opportunities to predict human toxicity risks accurately, increasing both efficiency and safety. Of course, this benefits not just the pharmaceutical industry but any industry in which the risks of human exposure to chemicals need to be predicted.
