Here’s how global regulators are starting to think about the use of AI in drug development

Life Sciences | By Laura DiAngelo, MPH

Aug. 16, 2024

Artificial intelligence and machine learning (AI/ML) have the potential to harness new data sets, build out novel research methods, and transform the regulatory processes for life sciences products. With rapid innovation in this space, regulators on both sides of the Atlantic are trying to keep up. AgencyIQ takes a closer look at some of the policies currently under development and how they might affect the clinical development of drugs and biologics.

What role could AI/ML play in drug development?

  • With the rapid innovation in the field of AI, there’s been significant interest in its use across a variety of industries – including the life sciences. While there are many different definitions of AI and ML, the U.S. Food and Drug Administration (FDA) defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” It further defines ML as “a set of techniques that can be used to train AI algorithms to improve performance of a task based on data.” While the field is still emerging, there are varied use cases for AI/ML methods in drug development, research and oversight.
  • In 2023, FDA put out a discussion paper seeking comment on the role that AI/ML could play in the field of drug development. That paper sought to identify potential use cases for AI/ML across drug development stages, including drug discovery and clinical and non-clinical research. As the agency described it, drug discovery involves the “initial identification of a suitable biological target for drug candidates.” This process includes the analysis and synthesis of huge amounts of data, with new data becoming available all the time from ongoing scientific research and “the growth of available genomic, transcriptomic, proteomic, and other data sources from healthy persons and those with a specific disease of interest.” AI/ML tools can help researchers handle these data sets in new ways, with AI/ML “applied to mine and analyze these large multi-omics and other datasets to provide information on the potential structure and function of biological targets to predict their role in a disease pathway” – in short, AI/ML can help researchers identify and prioritize promising compounds by leveraging more complex datasets.
  • In the context of clinical research, FDA believes that AI-enabled tools could inform study designs and help researchers manage and analyze their data. AI-enabled systems can help researchers design trials that mitigate the current challenges many trial sponsors face and enable them to use new sources of data to design and implement smarter trials. For example, FDA explained, “AI/ML is being used to mine vast amounts of data, such as data from clinical trial databases, trial announcements, social media, medical literature, registries, and structured and unstructured data in [electronic health records] EHRs,” which can help sponsors better navigate the landscape of designing and running trials.
  • Another use case is for AI/ML to bolster researchers’ ability to receive and analyze data collected from studies that leverage new, digitally enabled methods of data capture. This includes data from digital health technologies (DHTs), such as wearable sensors (e.g., smartwatches), and real-world data (RWD) from electronic health records (EHRs) and claims. A significant point of interest in the use of AI/ML for data analysis lies in the idea of digital twins, or the use of AI/ML to “build in silico representations or replicas of an individual that can dynamically reflect molecular and physiological status over time,” FDA’s discussion paper explains. That digital twin could “potentially provide a comprehensive, longitudinal, and computationally generated clinical record that describes what may have happened to that specific participant if they had received a placebo.” (A toy illustration of this concept appears after this list.)
  • In effect, AI/ML-enabled methods may help researchers better understand and identify their research population using information that wasn’t previously available to them, build trials that focus more directly on that specific population, manage the data they receive from new capture methods, and enable innovative new analytical methodologies.
  • However, as with any new method or technological advancement, regulators with oversight of the research and development processes need to help industry understand how existing regulatory frameworks and guardrails apply to these new advancements – and pave the way for them to be implemented into existing research structures in a way that gives researchers confidence they won’t run afoul of regulatory expectations.
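To make the digital twin concept above a bit more concrete, here is a minimal, purely illustrative Python sketch: a model trained on synthetic historical placebo-arm data is used to predict what a treated participant’s outcome might have been under placebo. Every variable, value, and modeling choice here (a gradient-boosted regressor from scikit-learn) is a hypothetical stand-in chosen for brevity; FDA’s discussion paper does not prescribe any particular method, and real digital twin approaches are considerably more sophisticated.

```python
# Minimal, illustrative sketch of the "digital twin" idea: a model trained on
# historical placebo-arm data predicts what a treated participant's outcome
# might have been under placebo. All data and variable names are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=0)

# --- Synthetic historical placebo-arm data (baseline covariates -> outcome) ---
n_historical = 2000
age = rng.normal(55, 10, n_historical)
baseline_score = rng.normal(40, 8, n_historical)   # e.g., a symptom scale
biomarker = rng.normal(1.0, 0.3, n_historical)
# Under placebo, the outcome drifts modestly from baseline (made-up relationship).
outcome_placebo = (
    0.9 * baseline_score + 0.05 * age + 2.0 * biomarker
    + rng.normal(0, 3, n_historical)
)
X_hist = np.column_stack([age, baseline_score, biomarker])

# --- Train the "digital twin" model on the historical placebo data ---
twin_model = GradientBoostingRegressor(random_state=0)
twin_model.fit(X_hist, outcome_placebo)

# --- Apply it to one treated trial participant ---
participant = np.array([[62.0, 45.0, 1.2]])        # age, baseline score, biomarker
predicted_placebo_outcome = twin_model.predict(participant)[0]
observed_treated_outcome = 38.5                    # hypothetical observed value on drug

print(f"Predicted outcome under placebo (digital twin): {predicted_placebo_outcome:.1f}")
print(f"Observed outcome on treatment:                  {observed_treated_outcome:.1f}")
print(f"Estimated individual treatment effect:          "
      f"{observed_treated_outcome - predicted_placebo_outcome:+.1f}")
```

The point of the sketch is simply the shape of the exercise: the model supplies the counterfactual “what would have happened on placebo” record that the FDA paper describes, which is then compared against what was actually observed on treatment.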

In the U.S., the FDA has said it is working on a new guidance document on AI/ML in drug development

  • Work on this guidance began in 2023, when President JOSEPH BIDEN signed Executive Order (EO) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This comprehensive EO outlined a “whole of government” approach to building out frameworks for the development and use of AI across federal departments, as well as the role that AI could play within federal operations. Contained in this EO was a directive for the Department of Health and Human Services (HHS) to “develop a strategy for regulating the use of AI or AI-enabled tools in the drug development process.” Because HHS oversees FDA, and FDA oversees drug development regulation, this directive was understood to apply to the FDA. The directive also stated that the called-for strategy should, “at a minimum… define the objectives, goals, and high-level principles required for appropriate regulation throughout each phase of drug development.”
  • Notably, by the time the President signed the EO, FDA had already put out its initial discussion paper (which we discussed above). However, that paper stopped short of providing any regulatory guidance for industry developers, instead asking more generally whether the use cases that the agency had identified were the same as those industry and/or researchers had in mind.
  • The FDA’s paper went on to highlight the role of standards in this space. The agency touted “efforts for the development of cross-sector and sector-specific standards to facilitate the technological advancement of AI” – including those from the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). The EO also directed the U.S. National Institute of Standards and Technology (NIST), an entity within the Commerce Department, to take the lead on several foundational frameworks that could be leveraged across the government on AI. FDA’s discussion paper cites NIST’s work, as well as its mandate under the EO. Notably, NIST has been issuing new AI-related resources at a breakneck pace in recent months, releasing four draft documents in May 2024 (three of which were finalized in July 2024) as well as two additional draft resources (more on that below).
  • The FDA’s Center for Drug Evaluation and Research (CDER), which as the name implies oversees the regulation of drugs at the agency, indicated in its 2024 guidance agenda that it intends to issue guidance on the subject this year. While the guidance agenda is not binding (i.e., CDER doesn’t have to issue everything on the list), it does provide good insight into the topics that are top-of-mind for the agency and for which regulators think industry needs more regulatory clarity. The 2024 agenda from CDER previews that the agency is working on a document titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drugs and Biological Products.”
  • At a recent workshop on AI/ML in drug development co-hosted by FDA and the Clinical Trials Transformation Initiative (CTTI), CDER Director PATRIZIA CAVAZZONI previewed the approach that industry can expect in the guidance. Cavazzoni described it like this: “That risk-based approach will be first and foremost centered on the context of use and then, in conjunction with that, will be really anchored by our assessment of the model risk, which will be fundamentally predicated on the model influence and model consequence. So, we will plan to take a sort of multimodal risk-based approach as we think about how we will be reviewing AI and machine learning elements in submissions and anything that comes our way that is part of a program.” [See AgencyIQ’s full analysis of that meeting here.]
  • In effect, the agency is considering a multimodal, risk-based approach. The guidance will likely be broken down by the stage at which a sponsor is using AI-based or -enabled methods, and then by the relative risks of the model itself, such as what impact a model not performing as intended could have (e.g., missing or misleading data versus patient safety concerns).
  • Notably, international regulators are considering a similar approach. The EU’s European Medicines Agency (EMA) also put out a reflection paper on AI/ML in drug development in 2023. That paper followed a 2021 coordinated plan on AI, which was issued alongside the proposal for a regulation that became the AI Act (more on that below).
  • The EMA’s reflection paper generally aligns with the multimodal risk-based model that Cavazzoni previewed at the August 2024 FDA/CTTI workshop. The EMA’s paper walked through the stages of the drug lifecycle and described different considerations for the risks of using AI at each stage. For example, it noted that application of AI in drug discovery is likely low risk, as the risk of “non-optimal performance often mainly affects the sponsor.” Conversely, using AI-enabled tools in settings where patients could be impacted, such as treatment assignment or dosing, would be higher risk, and AI/ML in precision medicine “would be regarded as a high-risk use from a medicines regulation perspective, related to both patient risk and level of regulatory impact.” In effect, the risk of using AI at all in that stage would be high, as it would affect the way the regulator reviews applications, and the risk of the model itself would be high, as it directly impacts patients. While it doesn’t provide significant detail on the subject, the EMA’s paper also touches on the second domain of risk that Cavazzoni described in August 2024, the risks of the models themselves, including concerns such as bias in the underlying data, ethical issues, data protection, and the practicalities of model development. (A simplified sketch of this kind of two-dimensional triage appears after this list.)
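For illustration only, the sketch below shows one way a sponsor might internally triage AI use cases along the two dimensions described above: context of use, and model risk as a product of model influence and model consequence. The categories, scores, and thresholds are invented for this example and are not drawn from FDA or EMA documents; the forthcoming guidance may carve up risk quite differently.

```python
# Hypothetical triage of AI use cases by context of use and model risk
# (influence of the model on decisions x consequence if the model is wrong).
# Categories and thresholds are illustrative only, not from FDA or EMA.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    context_of_use: str    # "discovery", "trial_design", "evidence_generation", "patient_facing"
    model_influence: int   # 1 (low) .. 3 (high): weight of the model's output in decisions
    model_consequence: int # 1 (low) .. 3 (high): impact if the model performs poorly

def risk_tier(use_case: AIUseCase) -> str:
    """Combine context of use with model influence/consequence into a rough tier."""
    if use_case.context_of_use == "patient_facing":
        return "high"      # e.g., dosing or treatment assignment directly affects patients
    score = use_case.model_influence * use_case.model_consequence
    if use_case.context_of_use == "discovery" and score <= 4:
        return "low"       # non-optimal performance mainly affects the sponsor
    return "high" if score >= 6 else "medium"

examples = [
    AIUseCase("Target prioritization from multi-omics data", "discovery", 2, 1),
    AIUseCase("Machine-learning derived trial endpoint", "evidence_generation", 3, 3),
    AIUseCase("Study site selection model", "trial_design", 2, 2),
    AIUseCase("AI-guided dose adjustment", "patient_facing", 3, 3),
]
for uc in examples:
    print(f"{uc.name}: {risk_tier(uc)} risk")
```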

What’s next? New guidance is coming – and from multiple directions

  • Both the FDA and the EMA are working on guidance in this area, citing a significant need for regulatory clarity for researchers, developers and drug sponsors that are already investing heavily in the space – as well as those seeking new opportunities. As noted above, the FDA’s CDER included a new document on considerations for the use of AI in drug development on its guidance agenda. Cavazzoni’s remarks at the August 2024 workshop indicated that drafting of the document is underway. EMA’s Methodology Working Party (MWP) also has AI-related documents listed on its work plan (which is similar to a guidance agenda). According to that document, EMA has identified two key areas where “Further specific guidance is required” on the use of AI in medicines regulation. First, “Guidance on the use of AI in clinical development. Topics being considered include the use of AI/ML applications for selecting study sites and study participants, machine-learning derived endpoints and covariates, and digital twin technology (intersecting with guidance on the use of Real-World Data).” Second, guidance on the use of AI in pharmacovigilance, or the process of monitoring a drug’s side effects.
  • It does seem that both regulators intend to take similar approaches to regulation – that is, they will define risk by stage (discovery, development) and then by the risks of the models within particular use cases. Uses with non-patient-facing risks, such as discovery decisions, would be considered lower risk. AI-enabled tasks that are likely to have significant influence on what regulators see, or the information presented to them, would fall into a higher risk class. Understandably, AI systems that would directly impact patients, including those that would inform their dosing of medicines or assign them to a specific arm of a study, would be considered high-risk by regulators.
  • However, the exact details of each policy will matter greatly. Even though regulators on both sides of the Atlantic have indicated interest in a multimodal framework, the way they interpret or assign risk at different research stages for different AI-enabled methods – and actually implement such a system – is still to be defined. These frameworks are still nascent, even as AI-enabled tools race ahead of regulators’ ability to collect best practices on their application. The way these frameworks are developed and implemented will have a significant impact on how AI shapes the future of clinical trials – their design, technological capabilities, and even which products and compounds get moved into the development stage. It’s a fast-moving technological space, but regulators are never quite as speedy as computer science.
  • The way AI will be worked into clinical trials also implicates other emerging policy areas. As described above, AI can be viewed as a tool that enables other advancing methods – including the use of DHTs and real-world evidence (RWE). Some of these interactions are already being implemented; for example, FDA’s Office of Surveillance and Epidemiology (OSE) recently touted its use of ML-based natural language processing (NLP) to turn RWD from EHRs into something useful for regulatory purposes. As that office wrote in its annual report, using NLP to manage RWD from EHRs has the potential to “significantly reduce the labor-intensive demands of manual review and to streamline our operations” – in effect, AI/ML methods can help make RWD analyses feasible (a toy illustration of this kind of extraction appears after the end of this list). Still, FDA’s frameworks for DHTs in clinical research and the use of RWD for regulatory purposes are also currently under construction, and the interplay of new risk-based approaches to AI adds a layer of complexity here as well.
  • The life sciences regulators aren’t the only governmental entities that are working on an approach to AI. In the U.S., President Biden’s EO directed the federal government to take a whole-of-government approach. Going forward, documents and frameworks from federal partners such as NIST are likely to have an increasing influence on the FDA’s own policymaking, adding another layer of complexity for life sciences researchers working to track developments in AI. Notably, HHS recently elevated its Office of the National Coordinator for Health Information Technology (ONC) into an Assistant Secretary position, the Assistant Secretary for Technology Policy (which will be known as ASTP/ONC going forward). ASTP/ONC will head HHS’ work on AI at the Department level, but how it engages directly with FDA, and what role it will have in FDA’s own policymaking, remains to be seen.
  • In the EU, the AI Act was recently formally published in the Official Journal of the EU (in other words, enacted). The AI Act is “horizontal legislation,” meaning that it applies across sectors rather than vertically to a specific sector. As such, the EU considers the AI Act “the world’s first comprehensive AI law.” The AI Act sets new standards and oversight expectations for tools that leverage AI based on the level of risk that they present. The exact implications for the life sciences industry are mixed, ranging from direct interactions with the medical device regulation (MDR) and in vitro diagnostic regulation (IVDR) to a specific carve-out in applicability (set out in Recital 25) for AI-enabled systems and tools that are “specifically developed and put into service for the sole purpose of scientific research and development” (in effect, those used in clinical trials). The AI Act is still quite new, and regulators at EMA are holding space in their work plans to work on its implementation within their own vertical subject areas. [Read AgencyIQ’s analyses of the European AI Act here and here]
  • What’s next? There’s a lot of policy in the works on this subject from multiple sources, entities and governmental bodies that may not be the most familiar to life sciences researchers. Going forward, the way these frameworks are developed, with whom, and their specific approaches and implementation are likely to shape the future of what research looks like.
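Finally, as a purely illustrative aside on the NLP use case OSE describes above, the toy sketch below shows the basic input/output shape of turning unstructured EHR-style notes into structured records. OSE’s actual systems are ML-based; here, simple regular-expression matching stands in for a trained model, and all notes, drug names, and adverse-event terms are synthetic.

```python
# Toy sketch: extracting structured fields from unstructured EHR-style notes.
# Simple pattern matching stands in for the ML-based NLP that OSE describes;
# every note, drug name, and term below is synthetic.
import re

notes = [
    "Patient started drug_x 10 mg daily; reports mild nausea after two weeks.",
    "On drug_x 20 mg. Denies nausea. New-onset headache noted at follow-up.",
    "Drug_x discontinued due to persistent dizziness.",
]

ADVERSE_EVENT_TERMS = ["nausea", "headache", "dizziness", "rash"]
DOSE_PATTERN = re.compile(r"(\d+)\s*mg", re.IGNORECASE)

structured = []
for note_id, note in enumerate(notes):
    dose_match = DOSE_PATTERN.search(note)
    events = []
    for term in ADVERSE_EVENT_TERMS:
        mentioned = re.search(rf"\b{term}\b", note, re.IGNORECASE)
        # Naive negation check: skip a term preceded by "denies"/"no"/"without"
        negated = re.search(rf"\b(denies|no|without)\b[^.]*\b{term}\b", note, re.IGNORECASE)
        if mentioned and not negated:
            events.append(term)
    structured.append({
        "note_id": note_id,
        "dose_mg": int(dose_match.group(1)) if dose_match else None,
        "adverse_events": events,
    })

for record in structured:
    print(record)
```

In practice, this kind of extraction is what makes large volumes of RWD tractable for review, which is the efficiency gain OSE’s annual report points to.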

 

To contact the author of this item, please email Laura DiAngelo (ldiangelo@agencyiq.com).
To contact the editor of this item, please email Alexander Gaffney (agaffney@agencyiq.com) and Jason Wermers (jwermers@agencyiq.com).

