Considerations for the Use of AI in Regulatory Decision-Making for Drugs and Biological Products

Introduction

Artificial intelligence (AI) is increasingly being integrated into the drug development and regulatory landscape, offering potential advancements in efficiency, accuracy, and innovation. However, its use in regulatory decision-making presents unique challenges and necessitates a structured approach to ensure credibility, safety, and compliance. The U.S. Food and Drug Administration (FDA) has issued draft guidance to provide a framework for sponsors and other stakeholders using AI-generated data or insights in regulatory submissions. This post summarizes the key takeaways from the FDA’s guidance and outlines best practices for establishing AI credibility.

Scope of the Guidance

The FDA’s guidance focuses on the application of AI in generating data or insights that support regulatory decision-making related to drug safety, efficacy, and quality. Importantly, it does not cover AI applications in drug discovery or operational efficiencies such as workflow automation. The guidance provides a risk-based credibility assessment framework designed to help sponsors validate and document AI model outputs in regulatory submissions.

Background

AI’s role in drug development has expanded significantly, with applications ranging from reducing animal testing in preclinical studies to integrating real-world data for regulatory submissions. However, these advancements come with challenges, including:

  • Variability in data quality and representativeness, leading to potential bias
  • Complexity in model development and decision-making transparency
  • Uncertainty in AI model accuracy and performance over time
  • The necessity for lifecycle maintenance to ensure ongoing model reliability

To address these challenges, the FDA recommends a structured credibility assessment framework.

A Risk-Based Credibility Assessment Framework

The FDA outlines a seven-step process for assessing and validating AI models used in regulatory decision-making. The level of oversight and documentation required depends on the model's risk level.

Step 1: Define the Question of Interest

The first step involves specifying the exact question that the AI model aims to answer. For example:

  • Clinical Development Use Case: An AI model identifies which patients at risk of a life-threatening adverse drug reaction can be safely monitored at home rather than as inpatients.
  • Manufacturing Use Case: An AI model assesses whether a drug’s vial fill volume meets required specifications.
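
To make the manufacturing example concrete, the sketch below shows one hypothetical way that question of interest could be operationalized: given model-predicted fill volumes, does each vial fall within its specification limits? The specification limits, function name, and predictions are illustrative assumptions, not part of the guidance.

```python
# Hypothetical illustration of the manufacturing question of interest:
# "Does a vial's fill volume meet the required specification?"
# The specification limits below are invented for illustration only.

SPEC_LO_ML = 9.8   # assumed lower specification limit (mL)
SPEC_HI_ML = 10.2  # assumed upper specification limit (mL)

def fill_volume_in_spec(predicted_volume_ml: float) -> bool:
    """Return True if the AI-predicted fill volume meets the specification."""
    return SPEC_LO_ML <= predicted_volume_ml <= SPEC_HI_ML

# Predictions from a hypothetical model inspecting filled vials
predictions = [10.05, 9.71, 10.18]
print([fill_volume_in_spec(v) for v in predictions])  # [True, False, True]
```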

Step 2: Define the Context of Use (COU)

The COU clarifies the AI model’s role and how its outputs will be used. It should include:

  • The specific decision the model informs
  • Whether the model’s output is used alone or combined with other evidence
  • The regulatory setting in which the model is applied

For example, an AI model predicting patient risk for adverse reactions should specify whether it is the sole determinant or used alongside traditional clinical assessments.
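
As a rough illustration, a COU could be captured as a structured record whose fields mirror the bullets above. The sketch below is a minimal, hypothetical encoding; the field names and values are assumptions, not terminology from the guidance.

```python
from dataclasses import dataclass

@dataclass
class ContextOfUse:
    """Minimal sketch of a COU record; all values below are hypothetical."""
    decision_informed: str    # the specific decision the model informs
    sole_determinant: bool    # output used alone, or combined with other evidence?
    regulatory_setting: str   # the regulatory setting where the model is applied

cou = ContextOfUse(
    decision_informed="Select patients eligible for at-home monitoring",
    sole_determinant=False,   # combined with traditional clinical assessments
    regulatory_setting="Clinical development (safety monitoring)",
)
```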

Step 3: Assess the AI Model Risk

Model risk is assessed based on two factors:

  • Model Influence: The model’s contribution relative to other decision-making inputs
  • Decision Consequence: The impact of an incorrect model output

A high-risk model, such as one determining patient safety measures, requires more stringent credibility assessment than a low-risk model used as a secondary verification tool in manufacturing.
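
These two factors combine into an overall model risk. One hypothetical way to encode that combination is a simple lookup matrix, sketched below; the three-level scales and the particular cell values are assumptions for illustration, not prescribed by the guidance.

```python
# Hypothetical risk matrix: (model influence, decision consequence) -> model risk.
# The three-level scales and cell values are illustrative assumptions.
RISK_MATRIX = {
    ("low", "low"): "low",      ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",   ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",  ("high", "medium"): "high",     ("high", "high"): "high",
}

def model_risk(influence: str, consequence: str) -> str:
    return RISK_MATRIX[(influence, consequence)]

# Sole basis for a patient-safety decision: high influence, high consequence.
print(model_risk("high", "high"))   # -> "high": stringent credibility assessment
# Secondary verification tool in manufacturing: low influence.
print(model_risk("low", "medium"))  # -> "low"
```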

Step 4: Develop a Credibility Assessment Plan

Sponsors should develop a credibility assessment plan that includes:

  • Model Description: Inputs, architecture, features, and selection rationale
  • Data Description: How training and tuning datasets were obtained and their relevance
  • Model Training: Methodology, validation process, performance metrics, and bias mitigation strategies
  • Model Evaluation: Testing methods, confidence interval assessments, and repeatability studies

The plan should be tailored to the model’s risk level, with high-risk applications requiring more detailed validation and transparency.
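
For the model evaluation element, which calls for confidence interval assessments, below is a minimal sketch of one common way to produce such an interval: a bootstrap confidence interval for a model's sensitivity on a held-out test set. The toy labels and the choice of bootstrap are assumptions for illustration, not methods specified in the guidance.

```python
import random

# Minimal sketch: bootstrap 95% confidence interval for model sensitivity
# on a held-out test set. The labels below are invented toy data.
random.seed(0)
y_true = [1] * 40 + [0] * 60                      # 40 positives, 60 negatives
y_pred = [1] * 34 + [0] * 6 + [0] * 55 + [1] * 5  # 34 TP, 6 FN, 55 TN, 5 FP

def sensitivity(truth, pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    return tp / (tp + fn)

n = len(y_true)
boots = sorted(
    sensitivity([y_true[i] for i in idx], [y_pred[i] for i in idx])
    for idx in (random.choices(range(n), k=n) for _ in range(2000))
)
lo, hi = boots[int(0.025 * 2000)], boots[int(0.975 * 2000)]
print(f"sensitivity={sensitivity(y_true, y_pred):.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```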

Step 5: Execute the Plan

The plan should then be executed, with AI model performance validated and documented as the work proceeds. Engaging with the FDA early can help refine execution strategies and ensure regulatory expectations are met.

Step 6: Document Results and Deviations

Sponsors must document:

  • How the credibility assessment plan was executed
  • Any deviations from the initial plan and justifications
  • Key findings and supporting evidence

This documentation may be included in regulatory submissions or retained for inspections.

Step 7: Determine Model Adequacy

If the AI model does not meet the required credibility standards, sponsors may:

  • Introduce additional supporting evidence
  • Enhance credibility assessment rigor
  • Modify the AI model
  • Adjust its role in decision-making

Special Considerations: Lifecycle Maintenance

AI model performance may change over time as new data become available and the deployment environment shifts. Sponsors should implement lifecycle maintenance strategies, including:

  • Continuous performance monitoring
  • Re-validation of AI model outputs as needed
  • Documentation of AI modifications and their regulatory impact

In pharmaceutical manufacturing, for example, lifecycle maintenance ensures that AI-based quality control systems remain accurate despite process changes.
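
A minimal sketch of what continuous performance monitoring might look like in code follows, assuming a rolling window of recent outcomes and an invented accuracy threshold; a real lifecycle maintenance plan would define both, along with the re-validation procedure triggered on drift.

```python
from collections import deque

class PerformanceMonitor:
    """Hypothetical drift monitor: tracks rolling accuracy over recent
    predictions and flags when it falls below an assumed threshold."""

    def __init__(self, window: int = 200, threshold: float = 0.90):
        self.results = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, actual) -> None:
        self.results.append(1 if prediction == actual else 0)

    def needs_revalidation(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough recent data to judge drift
        return sum(self.results) / len(self.results) < self.threshold

# Usage: record each (prediction, actual) pair as outcomes are confirmed;
# if needs_revalidation() returns True, re-run the credibility assessment.
monitor = PerformanceMonitor()
```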

Early FDA Engagement

Sponsors are encouraged to engage with the FDA early in AI model development. Various engagement pathways exist, depending on the AI model’s application, including:

  • Clinical Trial Innovation Programs
  • Model-Informed Drug Development Meetings
  • Digital Health and Emerging Technology Programs
  • Real-World Evidence Consultation

Early engagement helps clarify regulatory expectations and refine credibility assessment plans before formal submission.

Conclusion

AI has the potential to revolutionize drug development and regulatory processes, but its credibility must be established through rigorous validation, risk assessment, and lifecycle monitoring. The FDA’s draft guidance provides a structured framework to ensure AI models meet regulatory standards while fostering innovation in the field.

By adhering to this risk-based framework, sponsors can confidently integrate AI into regulatory decision-making, ultimately enhancing drug safety, effectiveness, and quality.