Guideline for the Use of Artificial Intelligence (AI) to Support Regulatory Decision-Making for Drug and Biological Products

Artificial Intelligence (AI) is rapidly transforming various industries, including the pharmaceutical sector. Regulatory authorities are increasingly exploring AI-driven tools to enhance decision-making processes related to drug and biological product approvals. However, the integration of AI in regulatory science requires careful consideration to ensure transparency, reliability, and compliance with established guidelines.


The guidance provides recommendations on the use of artificial intelligence (AI) to produce information or data intended to support regulatory decision-making regarding the safety, effectiveness, or quality of drugs.

AI refers to a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

A subset of AI that is commonly used in the drug product life cycle is machine learning (ML). ML refers to a set of techniques that can be used to train AI algorithms to improve performance at a task based on data. Although ML is currently the most utilized AI modeling technique in the drug product life cycle, this guidance focuses on AI models more broadly.

Continuous advancements in AI hold the potential to accelerate the development of safe and effective drugs and enhance patient care.

The risk-based credibility assessment framework outlined here follows a seven-step process designed to evaluate and establish the credibility of an AI model’s output for a specific Context of Use (COU) based on its associated risk.

The Focus of the Guidance

This draft guidance focuses on the application of AI models that produce data or insights supporting regulatory decisions on drug safety, effectiveness, and quality. It does not cover AI applications in drug discovery or operational processes that do not directly impact patient safety, product quality, or study reliability. The recommendations apply throughout the entire drug lifecycle, including preclinical research, clinical trials, postmarket surveillance, and manufacturing.

A Seven-Step Risk-Based Framework for AI Credibility

To establish the credibility of AI models in regulatory contexts, the FDA proposes a seven-step risk-based framework:

  1. Identify the Regulatory Question
Clearly articulate the regulatory question the AI model aims to address, integrating evidence from various sources, including clinical studies and laboratory data.
  2. Specify the Context of Use (COU)
Outline the model’s purpose and its impact on decision-making. Specify whether it will be used independently or in conjunction with other evidence.
  3. Evaluate AI Model Risk
Assess the model’s potential risk based on two factors (see the sketch after this list):
  • Model Influence: The degree to which the AI model’s output contributes to regulatory decisions.
  • Decision Consequence: The potential impact of incorrect decisions based on the model’s results.
  4. Develop a Credibility Assessment Plan
Create a comprehensive strategy to validate the model’s credibility, outlining its architecture, data sources, and performance evaluation criteria.
  5. Implement the Plan
Carry out the assessment plan; proactive engagement with the FDA is recommended to align expectations and address any challenges early.
  6. Record and Report Results
Document the findings of the credibility assessment, noting any deviations from the original plan and providing comprehensive evidence of the model’s reliability.
  7. Assess Model Suitability
Determine whether the AI model is appropriate for its intended use. If deficiencies are identified, sponsors may:
  • Reduce the model’s influence by integrating additional data.
  • Strengthen validation processes.
  • Implement risk mitigation strategies.
  • Refine the model or adjust methodologies.
  • Continuously reassess the model until it meets the required standards.
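
To make step 3 concrete, here is a minimal illustrative sketch in Python of how model influence and decision consequence might be combined into a single qualitative risk tier. The three-level scale and the max-based combination rule are assumptions chosen for illustration; the guidance names the two factors but does not prescribe this particular formula.

```python
from enum import IntEnum

class Level(IntEnum):
    """Qualitative scale; the three-level scale is an illustrative assumption."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(model_influence: Level, decision_consequence: Level) -> Level:
    """Combine the two risk factors into a single tier.

    Taking the maximum is one conservative convention (an assumption, not
    the FDA's prescribed method): the overall risk is never rated lower
    than either contributing factor.
    """
    return max(model_influence, decision_consequence)

# Example: an AI model whose output is the primary evidence (high influence)
# supporting a decision with moderate consequences.
risk = model_risk(Level.HIGH, Level.MEDIUM)
print(f"Model risk tier: {risk.name}")  # -> Model risk tier: HIGH
```

Under a convention like this, a higher risk tier would call for proportionally more credibility evidence in the assessment plan developed in step 4.
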
Ongoing Monitoring and Maintenance of AI Models

Because AI models can evolve, the guidance stresses the need for continuous monitoring, regular evaluations, and timely updates. A risk-based approach should guide ongoing lifecycle management to maintain consistent and accurate performance.
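
As one hypothetical illustration of such lifecycle monitoring (the metric, threshold, and response below are assumptions, not requirements from the guidance), a sponsor might track a performance metric over time and flag drift that warrants reassessment:

```python
def check_for_drift(baseline_score: float, current_score: float,
                    tolerance: float = 0.05) -> bool:
    """Flag the model for reassessment if performance drops beyond tolerance.

    baseline_score: metric value established during the credibility assessment.
    current_score: the same metric computed on recent production data.
    tolerance: allowed absolute drop; 0.05 is an arbitrary illustrative value.
    """
    return (baseline_score - current_score) > tolerance

# Example: accuracy fell from 0.92 at validation to 0.84 in production.
if check_for_drift(baseline_score=0.92, current_score=0.84):
    print("Performance drift detected: trigger model reassessment.")
```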

Encouragement for Early FDA Collaboration

The FDA strongly recommends that sponsors engage in early discussions when developing AI models for regulatory use. Early collaboration helps define clear validation expectations and proactively address potential challenges. The agency provides multiple engagement opportunities, including formal meetings and consultative programs.

Potential Industry Impact

This guidance is set to have a profound effect on pharmaceutical and biotech companies by promoting responsible innovation and establishing clear pathways for AI integration. Early adopters of these standards can gain a competitive edge by optimizing regulatory processes while maintaining compliance.

Challenges and Considerations for Implementation

Adopting this framework may pose challenges, such as validating AI models, safeguarding data privacy, and seamlessly integrating AI into existing regulatory and operational workflows. To meet regulatory expectations, organizations must invest in rigorous validation measures and establish continuous monitoring processes.

Global Regulatory Context

The FDA’s approach to AI regulation aligns with growing global efforts to manage AI use in healthcare. Comparing this guidance to international standards, such as those from the European Medicines Agency (EMA) and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), can offer a broader understanding of global regulatory trends in AI applications for drug development.

Future Prospects

As AI technology advances, its role in regulatory decision-making will continue to expand. Agencies must proactively adapt to technological developments by fostering research collaborations, updating regulatory guidelines, and investing in AI literacy among regulatory professionals.

The successful integration of AI in regulatory science depends on a balanced approach that prioritizes transparency, accuracy, and ethical responsibility. By addressing these key considerations, AI can significantly improve the efficiency and effectiveness of drug and biological product regulation, ultimately benefiting public health and innovation in the pharmaceutical industry.

Conclusion

The FDA’s draft guidance offers a comprehensive framework for integrating AI into regulatory decision-making in drug and biologic development. By emphasizing a risk-based approach, transparency, and lifecycle management, the guidance ensures that AI is applied responsibly, balancing innovation with the highest standards of safety and efficacy throughout the drug development process.

Reference: FDA Draft Guidance, "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products," https://www.fda.gov/media/184830/download