The Artificial Intelligence Executive Order: What does it mean for FDA and its regulated industries?
Life Sciences | By LAURA DIANGELO, MPH

Last week, President Biden signed a new executive order on a federal approach to artificial intelligence (AI) regulation. The document lays out a sweeping whole-of-government approach to AI policy, including several tasks that directly relate to the life sciences.
President Biden’s Executive Order (EO) on “Safe, Secure, and Trustworthy” development and use of AI
- The Executive Order was signed on October 30 and followed by an implementation guide from the White House Office of Management and Budget (OMB). At a high level, the order lays out a variety of tasks and initiatives for a whole-of-government approach to AI regulation.
- Quick recap: What an EO is, and what it’s not. An EO is a directive from the president to the executive branch of government to issue or consider certain policies, publish certain documents, and/or invest in specific policy or research areas. In effect, an EO can function as a roadmap for federal Departments and agencies on a specific subject, with the White House tasking entities within the federal government to undertake certain activities within a set timeline. An EO does not update any statutory authorities – rather, it directs agencies to work within their existing powers and authorities as part of a coordinated regulatory effort, or to identify areas in which a statutory update could be needed. An EO can also direct certain agencies or Departments to work together on an issue in a particular way, establishing a chain of command or a clear delineation of regulatory priorities between agencies that might have overlapping mandates.
- This new EO focuses on AI regulation and research across a wide swath of industries, including how AI would be deployed in housing, defense, education, and finance. The EO directs Secretaries of multiple Departments – including Defense, Energy, Commerce, Homeland Security, and Health and Human Services (HHS) – to coordinate on a federal AI approach. This includes the development of guidelines and standards for AI “safety and security” and requires some AI tool developers to submit information about their models to the Secretary of Commerce under the Defense Production Act (see below).
- A quick point of order: The National Institute of Standards and Technology (NIST) is at the center of the EO. The EO relies on standards, guidelines, and definitions from NIST, which is engaged in nearly all of the activities directed under the order. This includes a directive that NIST develop the “guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems.” These activities would then support other agencies’ and Departments’ work under the EO. This is already well within NIST’s wheelhouse: the agency launched its Trustworthy and Responsible Artificial Intelligence Resource Center (AIRC) in March 2023. The AIRC coordinates work on the NIST AI Risk Management Framework – the first version of which was issued in January 2023, accompanied by a “Playbook,” a companion resource that offers an implementation guide.
- A top line directive of the EO: Federal review of “foundation” models, or “dual-use foundation models.” Per the Order, such a tool would be “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.” Under the Defense Production Act (DPA) authority, the EO would establish a system by which developers must notify the federal government when training such a model and share the results of safety tests.
There are some health care- and life sciences-specific tasks in the Executive Order
- While the AI EO asks for a litany of work across the federal government, there are some life sciences-specific tasks. The AI EO includes a variety of activities related to health care generally, including directions for HHS and other Departments with health care programs (the Department of Defense (DoD) and the Veterans Health Administration (VHA)) to develop frameworks for identifying health care bias arising from AI deployment, as well as ways to identify and track “clinical errors” resulting from AI, “including through bias or discrimination.”
- Up first, an HHS AI Task Force. The EO directs HHS to establish a task force “that shall, within 365 days of its creation, develop a strategic plan that includes policies and frameworks—possibly including regulatory action, as appropriate—on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health), and identify appropriate guidance and resources to promote that deployment.”
- This is an extremely broad mandate for a task force, which would have one year (from the time it is stood up, which should be “within 90 days” of the EO) to issue such a “strategic plan.” The strategic plan would lay out the kinds of guidance documents or rules that the agencies in HHS would need to develop to address the priorities identified in the plan. While some of these activities are outside of the scope of the FDA’s work – for example, health care delivery and financing and public health, which would include things like quality measures and benefits administration – the far-reaching strategy would also need to outline next steps for regulating AI in “research and discovery” and “drug and device safety.” There is limited detail on what, exactly, would be included here, with the EO outlining components such as “work to be done” with local governments and “long-term safety and real-world performance monitoring of AI-enabled technologies.”
- The strategic plan will incorporate performance monitoring for AI-enabled technologies. This includes setting metrics for bias and discrimination that could result from (or be exacerbated by) deploying algorithms, as well as how to monitor long-term safety and real-world performance – including “a means to communicate product updates to regulators, developers, and users.” (A minimal sketch of one such bias metric appears after this list.)
- There’s also an assessment of how AI-enabled tools impact quality: This workstream, due within 180 days of the EO, would have HHS develop “a strategy” on how AI-enabled tools impact health care quality to build an “AI assurance policy.” This includes the identification of “infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare-technology algorithmic system performance against real-world data.” Further, HHS, DoD and VHA are directed to “establish an AI safety program” that would define, identify, and monitor AI-related safety issues. This includes a “common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings as well as specifications for a central tracking repository for associated incidents that cause harm.”
- Strategy on regulating the use of AI or AI-enabled tools in drug development. HHS is directed to “develop a strategy for regulating” AI in drug development, which would include “objectives, goals, and high-level principles required for appropriate regulation” and identifying areas where additional guidance, rulemaking or authority would be needed. Notably, this is a strategy for regulating, not an actual regulatory strategy for AI in drug development. In effect, the EO directs HHS to determine what types of regulatory approaches would fit here, and whether the Department (and FDA) has sufficient authority in this area.
- Another priority: “Reducing risks at the intersection of AI and [chemical, biological, radiological and nuclear] CBRN threats.” The EO also directs HHS to work with other federal Departments on “reduc[ing] the risk of misuse of synthetic nucleic acids” by establishing a framework “to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms, including standards and recommended incentives.” This would include standardizing methodologies for due diligence in procurement, identifying biological sequences that could be of concern, and implementing new best practices for managing sequences-of-concern (see HHS’ current guidance for synthetic nucleic acids on sequences of concern here). A simplified illustration of sequence screening also appears after this list.
- A report on the intersection of patents and AI. While not directly life-sciences focused, the AI EO directs the Commerce Department and U.S. Patent and Trademark Office (USPTO) to develop guidance on patent implications for AI technologies, “including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship ought to be analyzed.” Further, the USPTO is tasked with developing more extensive guidance on “the intersection of AI and [intellectual property] IP,” and working with the Copyright Office to “issue recommendations to the President on potential executive actions relating to copyright and AI.”
- Implementation guidance: The White House Office of Management and Budget (OMB) published guidance on implementing the AI EO at the federal level. The draft memo outlines how agencies should think about “risks specifically arising from the use of AI.” Of note, in the case of the FDA, this work would be done at the HHS level. This involves the establishment of an HHS Chief AI Officer (CAIO) within the next 60 days, who will “work in close coordination” across Departments. The HHS CAIO is tasked with managing the work streams in the EO, tracking their progress, and engaging internally with their Department. They also have some specific to-dos, including the issuance of a report on “identifying and removing barriers to the responsible use of AI” and a landscape analysis of current “and planned top use cases” of AI, as well as a plan for building new capacity on AI-related issues. Under this guidance, certain uses would be considered “safety-impacting” or “rights-impacting” and would therefore require additional scrutiny. For HHS, this would include, among others, physical movements, delivery of drugs or biologics, decisions regarding medical devices, and clinical diagnosis and diagnostic tools. This additional scrutiny is referred to as “minimum practices” and includes adherence to NIST standards, conducting ongoing monitoring of deployed AI, and promoting end-user transparency. Notably, many of the concerns flagged in the OMB guidance are already part of routine FDA review of regulated products and activities.
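To make the bias-metric concept above concrete, here is a minimal, hypothetical sketch of the kind of subgroup check such a monitoring framework might standardize. The group labels, data, and choice of metric (a demographic parity gap in a model’s positive predictions) are illustrative assumptions on our part – neither the EO nor HHS specifies any particular metric.

```python
# Illustrative only: per-group selection rates for a model that flags
# patients for an intervention, plus the largest gap between groups.
# All names and data here are hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction (selection) rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical model outputs
grps = ["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"]
print(selection_rates(preds, grps))       # {'A': 0.6, 'B': 0.4}
print(demographic_parity_gap(preds, grps))  # 0.2, a value a monitor might threshold
```

A framework like the one the EO contemplates would presumably standardize which metrics to compute, what thresholds trigger review, and how results are communicated – the sketch above only shows the arithmetic at the bottom of that stack.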
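The sequence-screening framework described earlier can likewise be imagined, in highly simplified form, as a watchlist check at order time. The watchlist entry and exact-substring matching below are illustrative assumptions; real screening regimes rely on curated databases and sequence-similarity search rather than literal string matching.

```python
# Illustrative only: flag a synthetic nucleic acid order that contains a
# watchlisted subsequence. The "sequence of concern" here is made up.
SEQUENCES_OF_CONCERN = {
    "example_toxin_fragment": "ATGGCCATTGTAATGGGCCGC",  # hypothetical entry
}

def screen_order(ordered_sequence: str) -> list[str]:
    """Return names of any watchlist entries found in the ordered sequence."""
    seq = "".join(ordered_sequence.split()).upper()  # normalize whitespace/case
    return [name for name, concern in SEQUENCES_OF_CONCERN.items()
            if concern in seq]

hits = screen_order("ccgt ATGGCCATTGTAATGGGCCGC tacg")
print(f"Flag for manual review: {hits}" if hits else "No matches found")
```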
Analysis
- This work is fairly high level, starting with the establishment of a task force that would be able to set baseline expectations for what policymaking would look like in this area – including both guidance and rulemaking.
- One key theme: Build the infrastructure to monitor AI tools post-deployment – or figure out what infrastructure might be needed. Several of the tasks for HHS appear intended to assist in the establishment of frameworks that could track real-world deployment of AI in health care – including defining what an AI-related adverse event could look like, knowing where it would be recorded or how to flag it, and identifying infrastructure that could monitor real-world performance of a deployed model. Notably, for AI-enabled medical devices, this is currently an expectation of the developer, who is tasked with establishing a predetermined change control plan (PCCP) as part of their pre-market submission. The PCCP spells out how the developer will monitor the tool going forward and sets guardrails around potential updates to the system. For HHS’ part, under the EO the Department would seek to bolster its own frameworks for monitoring deployed algorithms – or, at least, identify what those infrastructure needs would be.
- The FDA is already a bit ahead of the curve on some of these issues. In May 2023, the agency issued a discussion paper on the use of AI in drug development, alongside a separate discussion paper on AI in drug manufacturing. In addition, FDA staff have published a variety of works outlining points of consideration for using AI in regulated contexts, including opportunities (and challenges) for using AI methods with real-world data (RWD) for safety surveillance, how to consider equity and bias in the data sets that inform AI, and how to think about “data drift” (the gap between a model’s training data and the data it encounters in real-world deployment; a minimal illustration follows below). The EO itself focuses on what avenues the FDA (via HHS) would have to implement policy in this area – including guidance, rulemaking, or legislative changes. While it’s not likely that HHS’ report will fundamentally alter the way the FDA approaches its own policymaking, it will inform the practicalities of how such policy will be made, and via what policymaking method.
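As a rough illustration of the monitoring themes above – long-term performance tracking and “data drift” – the sketch below compares a deployed feature distribution against the training distribution and applies a simple performance threshold. The thresholds and the use of a two-sample Kolmogorov-Smirnov test are our own illustrative choices, not an FDA- or HHS-specified method.

```python
# Illustrative only: two checks a post-deployment monitoring plan might run.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training distribution
live_feature = rng.normal(loc=0.4, scale=1.0, size=500)    # shifted deployment data

# 1) Data drift: has the input distribution moved away from the training data?
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic = {stat:.3f}); trigger review")

# 2) Real-world performance: rolling accuracy against adjudicated outcomes.
recent_correct = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]  # hypothetical adjudications
if np.mean(recent_correct) < 0.8:
    print("Performance below threshold; file a monitoring report")
```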
Featuring previous research by Rachel Coe.
To contact the author of this item, please email Laura DiAngelo (ldiangelo@agencyiq.com).
To contact the editor of this item, please email Alexander Gaffney or Chelsey McIntyre (cmcintyre@agencyiq.com).