
Key legal considerations in accelerating pharma using artificial intelligence

pharmafile | September 4, 2019 | Feature

Charlotte Walker-Osborn, Partner and International Technology Sector Head and Nabil Asaad, Senior Associate, Intellectual Property Law Group – both from global law firm Eversheds Sutherland – discuss the intricacies of pharma/AI partnerships, including data privacy, cyber security, and intellectual property concerns.

Recent advances in the development of artificial intelligence (AI), coupled with the availability of sufficient computing power to enable their effective use, have now made the application of AI methods to pharmaceutical R&D a realistic prospect. Machine learning tools can identify connections within data sets and AI-based reasoning can be deployed to extrapolate previously unknown connections. There has been a surge in activity in ‘big pharma’ using AI capabilities to identify potential drug targets and to help to identify and design new medicines. A number of high-value collaborations between leading pharmaceutical companies and AI partners have recently been announced. By way of example:

  • AstraZeneca and BenevolentAI are collaborating towards identifying new targets for idiopathic pulmonary fibrosis and chronic kidney disease
  • Gilead and Insitro will use AI to develop new disease models for non-alcoholic steatohepatitis with the aim of developing new therapies
  • Celgene and Exscientia are deploying AI in order to accelerate small molecule discovery in oncology and autoimmunity
  • Exscientia also has collaborations with Roche and Sanofi, as well as recently announcing the identification of a lead compound from its collaboration with GSK

Legal and contractual considerations

Pharma/AI collaborations present a particular set of legal considerations.

Leading pharmaceutical companies are data-rich and have the requisite know-how to develop new medicines starting from identification of an implicated gene, protein, process or pathogen. Whilst many pharma companies already have sophisticated computational (including AI) tools at their disposal, the leaders in the AI field are specialists whose focus has yielded AI capable of analysing vast data sets, including natural language documents. Against that backdrop, it was inevitable that the two would come together in order to deploy leading-edge AI to pharmaceutical R&D.

The approach and outlook of the parties (often led by the AI partner’s particular business focus) will have a significant impact on the structure of the collaboration. On the one hand, the AI partner could simply offer consultancy and a technology platform for use with the pharma partner’s data. On the other hand, the AI partner may wish to conduct its own pharmaceutical R&D and may wish to take learnings from and reserve usage rights in the collaboration outputs.

Intellectual property

AI technologies evolve through use. Iteratively training an AI system using a new data set gives rise to a new model whose properties and behaviour are modified by that training data – the trained model embodies the training data. This creates a tension under the conventional analysis of foreground and background intellectual property (IP) which is usually applied to R&D collaborations. The trained model is clearly foreground IP and an improvement over the AI partner’s background IP, but the trained model inevitably relies, at least in part, on the pharma partner’s data, even if that data cannot be extracted from the trained model. The question therefore arises as to whether or not the AI partner should own the arising intellectual property and any derived data or, if not, be able to use the trained model/derived data outside the collaboration, potentially free of any obligation to the pharma partner.

There is no one rule here. It is ultimately a commercial discussion between the parties. We have seen AI deals where the parties agree to use a private ‘instance’ of the AI technology, so that the input data, the training, the derived data and the arising IP/learnings all stay within the collaboration. However, doing this likely means your business will not benefit from any learnings derived from other partners’ data, if indeed there are other companies providing data and potential training to the AI technology for it to improve.

The use of AI technologies can potentially facilitate any phase of the drug discovery process. Where the AI system is being deployed in order to analyse data, for example to prioritise follow-up research efforts, then there will be additional creative and technical input which might generate IP. In such cases, it would be incorrect to conflate the output of the wider collaboration with the output of the AI system. The ownership and exploitation rights to the wider project output can be approached in the same manner as other R&D collaborations and we would recommend these are clearly dealt with separately contractually.

If, on the other hand, the AI is deployed to design small molecule drugs directly, then the question arises as to whether any drugs designed in this way would constitute a machine-made invention and who would be entitled to such an invention. In the UK, patent law does not currently allow for a machine to be an inventor. So, currently, it is open to the collaborating parties to apportion IP as between themselves, but the possible impact of developments in the law, and the potential for different approaches being taken in different jurisdictions, should be kept in mind when establishing collaboration agreements.

As an overarching point, intellectual property law in many countries struggles to deal with AI technologies, and new laws are likely to emerge in a number of jurisdictions over the coming years as a result. You should therefore ensure your legal team looks at this carefully when working on any collaboration deal.

Data and privacy

As with any R&D activities involving data, usage rights for confidential/proprietary information and data privacy issues need to be considered, and the parties’ roles and responsibilities clearly defined. In many ways, the analysis is no different from more traditional pharma R&D. The particular applicability of AI to the analysis of genetic data may necessitate an appropriate data privacy regime being put in place. The status of the AI partner as a joint data controller, a data controller in its own right, or the pharma partner’s data processor will also need to be considered, depending on the collaboration struck. Indeed, it may be possible to ensure anonymised or pseudonymised data sets are utilised. Even then, there are data privacy issues to address (including the need to ensure that a data subject cannot be re-identified).
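To make the pseudonymisation point concrete: a common approach is to replace direct identifiers with salted cryptographic digests before a data set is shared with the AI partner, with the salt retained solely by the data controller. The sketch below is purely illustrative (the field names and records are hypothetical, and real projects would use a managed key, not a hard-coded string); note that under the GDPR, pseudonymised data of this kind remains personal data, because the controller holding the salt can re-link records.

```python
import hashlib

def pseudonymise(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest.

    The salt is held only by the data controller; without it, the AI
    partner cannot practicably recover the original identifier, but the
    controller can still re-link records where lawfully required.
    """
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

# Hypothetical patient records held by the pharma partner.
records = [
    {"patient_id": "NHS-123-456", "genotype": "BRCA1 c.68_69delAG"},
    {"patient_id": "NHS-789-012", "genotype": "wild type"},
]

SALT = "keep-this-secret"  # in practice: a managed, rotatable secret

# Only the pseudonymised view is shared with the AI partner.
shared = [
    {"subject_ref": pseudonymise(r["patient_id"], SALT),
     "genotype": r["genotype"]}
    for r in records
]
```

Because the same identifier always maps to the same digest under a given salt, records for one subject can still be linked across data sets, which is usually what the collaboration needs, while the direct identifier never leaves the controller.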

Use of personal data by AI is a hot topic. For example, in the UK, the Information Commissioner’s Office (ICO) issued a blog in April: “Automated decision-making: The role of meaningful human reviews.” The ICO has also set out an auditing framework for artificial intelligence, with transparency a key focus. There will almost certainly be more guidance emanating from the ICO, and diverging laws across the globe, in this area. In summary, ensure you carefully analyse the privacy implications of any AI project, and that your contract with the AI provider allows you to discharge your regulatory and transparency obligations. Also, keep a watching brief on the ethics guidance coming in thick and fast around business use of AI.

Technology and cyber-security

Ultimately, AI technologies reside on servers, whether your own, the AI provider’s or in the cloud.

It is critical that a detailed analysis takes place, early on, as to the technology set-up, the data flows, and the security. The analysis is, largely, the same as for other technology projects. It is, however, a critical step, given the large amounts of data, especially if the personal data being used is voluminous and not anonymised.

Fees and royalty arrangements

Financial considerations are outside the scope of this article but, suffice to say, AI providers have vastly different models and it is important to discuss this element early on. That discussion should cover what success looks like, whether payments are linked to set-up costs versus success criteria, and whether royalties might be pegged to traditional pharma milestones, such as targets taken through to patent grant or clinical trials.

Given the large amounts of data which can be involved, do factor in compute costs. Ultimately, these will likely be passed back to the customer, one way or the other.

This article is, frankly, a mere snapshot of some of the key legal areas to focus on in collaborations of this type. There is a great deal for both parties to gain in this area. Having supported both pharma companies and technology companies, our view is that early engagement on the legal issues is critical to success.
