This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “AI EO”), issued by President Biden on October 30, 2023.  The first blog summarized the AI EO’s key provisions and related OMB guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through July 2024.  This blog describes key actions taken to implement the AI EO during August 2024.  It also describes key actions taken by NIST and the California legislature related to the goals and concepts set out by the AI EO.  We will discuss developments during August 2024 to implement President Biden’s 2021 Executive Order on Cybersecurity in a separate post. 

OMB Releases Finalized Guidance for Federal Agency AI Use Case Inventories

On August 14, the White House Office of Management and Budget (“OMB”) released the final version of its Guidance for 2024 Agency Artificial Intelligence Reporting Per EO 14110, following the release of a draft version in March 2024.  The Guidance implements Section 10.1(e) of the AI EO and various sections of the OMB’s March 28 Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”  The Guidance also supersedes the agency AI use case inventory requirements set out in Section 5 of 2020’s EO 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.”   

The Guidance requires federal agencies (excluding the Department of Defense and Intelligence Community) to submit AI use case inventories for 2024 by December 16, 2024, and to post “publicly releasable” AI use cases on their agency websites.  Appendix A of the Guidance lists information agencies must provide for each AI use case, including information on the AI’s intended purpose, expected benefits, outputs, development stage, data and code, and enablement and infrastructure.  Agencies must also address a subset of questions for AI use cases that are determined to be rights- or safety-impacting, as defined in OMB Memo M-24-10, such as whether the agency has complied with OMB Memo M-24-10’s minimum risk management practices for such systems.  For AI use cases that are not subject to individual reporting (including DoD AI use cases and AI use cases whose sharing would be inconsistent with law and governmentwide policy), agencies must report certain “aggregate metrics.”

In addition to AI use case inventories, the Guidance provides mechanisms for agencies to report the following:

  • Agency Chief AI Officer (“CAIO”) determinations of whether agencies’ current and planned AI use cases are safety- or rights-impacting, as defined in Section 5(b) and Appendix I of OMB Memo M-24-10, by December 1, 2024.
  • Agency CAIO waivers of one or more of OMB Memo M-24-10’s minimum risk management practices for particular AI use cases, including justifications of how the practice(s) would increase risks to rights or safety or unacceptably impede critical agency operations, by December 1, 2024.
  • Agency requests and justifications for one-year extensions to comply with the minimum risk management practices for particular AI use cases, by October 15, 2024.

NIST Releases New Public Draft of Digital Identity Guidelines

As described in our parallel blog on cybersecurity developments, on August 21, the National Institute of Standards and Technology (“NIST”) released the second public draft of its updated Digital Identity Guidelines (Special Publication 800-63) for public comment, following an initial draft released in December 2022.  The requirements, which focus on Enrollment and Identity Proofing, Authentication and Lifecycle Management, and Federation and Assertions, also address “distinct risks and potential issues” arising from the use of AI and machine learning (“ML”) in identity systems, including disparate outcomes and biased outputs.  Section 3.8, “AI and ML in Identity Systems,” would impose the following requirements on government contractors that provide identity proofing services (“Credential Service Providers” or “CSPs”) to the federal government:

  • CSPs must document all uses of AI and ML and communicate those uses to organizations that rely on these systems.
  • CSPs that use AI/ML must provide, to any entities that use their technology, information regarding (1) their AI/ML model training methods and techniques, (2) their training datasets, (3) the frequency of model updates, and (4) results of all testing of their algorithms.
  • CSPs that use AI/ML systems or rely on services that use AI/ML must implement the NIST AI Risk Management Framework to evaluate risks that may arise from the use of AI/ML, and must consult NIST Special Publication 1270, “Towards a Standard for Managing Bias in Artificial Intelligence.”

Public comments on the second public draft of the Guidelines are due by October 7, 2024.

U.S. AI Safety Institute Signs Collaboration Agreements with Developers for Pre-Release Access to AI Models

On August 29, the U.S. AI Safety Institute (“AISI”) announced “first-of-their-kind” Memoranda of Understanding with two U.S. AI companies regarding formal collaboration on AI safety research, testing, and evaluation.  According to the announcement, the agreements will allow AISI to “receive access to major new models from each company prior to and following their public release,” with the goal of enabling “collaborative research on how to evaluate capabilities and safety risks” and “methods to mitigate those risks.”  The U.S. AISI also intends to collaborate with the U.K. AI Safety Institute to provide feedback on model safety improvements.

These agreements build on the Voluntary AI Commitments that the White House has received from 16 U.S. AI companies since 2023.

California Legislature Passes First-in-Nation AI Safety Legislation Modeled on AI EO

On August 29, the California legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).  If signed into law, SB 1047 would impose an expansive set of requirements on developers of “covered [AI] models,” including cybersecurity protections prior to training and deployment, annual third-party audits, reporting of AI “safety incidents” to the California Attorney General, and internal safety and security protocols and testing procedures to prevent unauthorized access or misuse resulting in “critical harms.”  Echoing the AI EO’s definition of “dual-use foundation models,” SB 1047 defines “critical harms” as (1) the creation or use of CBRN weapons by covered models, (2) mass casualties or damages resulting from cyberattacks on critical infrastructure or other unsupervised conduct by an AI model, or (3) other grave and comparable harms to public safety and security caused by covered models.

Similar to the AI EO’s computational threshold for AI models subject to Section 4.2(a)’s reporting and AI red-team testing requirements, SB 1047 defines “covered models” in two phases.  First, prior to January 1, 2027, “covered models” are defined as AI models trained using more than 10²⁶ floating-point operations (“FLOPs”) of computing power (the cost of which exceeds $100 million), or AI models created by fine-tuning covered models using at least 3 x 10²⁵ FLOPs (the cost of which exceeds $10 million).  Second, after January 1, 2027, SB 1047 authorizes California’s Government Operations Agency to determine the threshold computing power for covered models.  For reference, Section 4.2 of the AI EO requires reporting and red-team testing for dual-use foundation models trained using more than 10²⁶ FLOPs and authorizes the Secretary of Commerce to define and regularly update the technical conditions for models subject to those requirements.
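For readers tracking the numbers, the pre-2027 test reduces to a compute-and-cost threshold for originally trained models and a lower threshold for fine-tuned derivatives.  The short Python sketch below is offered only as an illustration of that two-part test as summarized above, not as an authoritative reading of the bill; the function names, variable names, and the exact comparison operators are our own simplifying assumptions.

```python
# Illustrative sketch only: a simplified rendering of SB 1047's pre-2027
# "covered model" thresholds as summarized in this post. Names and the
# precise comparison operators are hypothetical assumptions, not bill text.

COVERED_TRAINING_FLOPS = 1e26            # more than 10^26 operations of training compute
COVERED_TRAINING_COST_USD = 100_000_000  # training cost exceeding $100 million
FINE_TUNE_FLOPS = 3e25                   # at least 3 x 10^25 operations of fine-tuning compute
FINE_TUNE_COST_USD = 10_000_000          # fine-tuning cost exceeding $10 million


def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Pre-2027 test for an originally trained model."""
    return (training_flops > COVERED_TRAINING_FLOPS
            and training_cost_usd > COVERED_TRAINING_COST_USD)


def is_covered_fine_tune(base_model_is_covered: bool,
                         fine_tune_flops: float,
                         fine_tune_cost_usd: float) -> bool:
    """Pre-2027 test for a model created by fine-tuning a covered model."""
    return (base_model_is_covered
            and fine_tune_flops >= FINE_TUNE_FLOPS
            and fine_tune_cost_usd > FINE_TUNE_COST_USD)


if __name__ == "__main__":
    # A model trained with 2 x 10^26 operations at a $150M compute cost would be covered.
    print(is_covered_model(2e26, 150_000_000))          # True
    # Fine-tuning that model with 1 x 10^25 operations at a $2M cost would not be.
    print(is_covered_fine_tune(True, 1e25, 2_000_000))  # False
```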

Robert Huffman

Bob Huffman counsels government contractors on emerging technology issues, including artificial intelligence (AI), cybersecurity, and software supply chain security, that are currently affecting federal and state procurement. His areas of expertise include the Department of Defense (DOD) and other agency acquisition regulations governing information security and the reporting of cyber incidents, the proposed Cybersecurity Maturity Model Certification (CMMC) program, the requirements for secure software development self-attestations and software bills of materials (SBOMs) emanating from the May 2021 Executive Order on Cybersecurity, and the various requirements for responsible AI procurement, safety, and testing currently being implemented under the October 2023 AI Executive Order.

Bob also represents contractors in False Claims Act (FCA) litigation and investigations involving cybersecurity and other technology compliance issues, as well as more traditional government contracting cost, quality, and regulatory compliance issues. These investigations include significant parallel civil/criminal proceedings growing out of the Department of Justice’s Cyber Fraud Initiative. They also include investigations resulting from False Claims Act qui tam lawsuits and other enforcement proceedings. Bob has represented clients in over a dozen FCA qui tam suits.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including those arising under the Buy American Act/Trade Agreements Act and Section 889 of the FY2019 National Defense Authorization Act. In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial products, services, and software. He focuses this aspect of his practice on the overlap of these traditional government contracts IP rules with the IP issues associated with the acquisition of AI services and the data needed to train the large language models on which those services are based.

Bob writes extensively in the areas of procurement-related AI, cybersecurity, software security, and supply chain regulation. He also teaches a course at Georgetown Law School that focuses on the technology, supply chain, and national security issues associated with energy and climate change.

Susan B. Cassidy

Susan is co-chair of the firm’s Aerospace and Defense Industry Group and is a partner in the firm’s Government Contracts and Cybersecurity Practice Groups. She previously served as in-house counsel for two major defense contractors and advises a broad range of government contractors on compliance with FAR and DFARS requirements, with special expertise in supply chain, cybersecurity, and FedRAMP requirements. She has an active investigations practice and advises contractors when faced with cyber incidents involving government information. Susan relies on her expertise and experience with the Defense Department and the Intelligence Community to help her clients navigate the complex regulatory intersection of cybersecurity, national security, and government contracts. She is Chambers rated in both Government Contracts and Government Contracts Cybersecurity. In 2023, Chambers USA quoted sources stating that “Susan’s in-house experience coupled with her deep understanding of the regulatory requirements is the perfect balance to navigate legal and commercial matters.”

Her clients range from new entrants into the federal procurement market to well-established defense contractors, and she provides compliance advice across a broad spectrum of procurement issues. Susan consistently remains at the forefront of legislative and regulatory changes in the procurement area, and in 2018, the National Law Review selected her as a “Go-to Thought Leader” on the topic of Cybersecurity for Government Contractors.

In her work with global, national, and start-up contractors, Susan advises companies on all aspects of government supply chain issues including:

  • Government cybersecurity requirements, including the Cybersecurity Maturity Model Certification (CMMC), DFARS 7012, and NIST SP 800-171 requirements,
  • Evolving sourcing issues such as Section 889, counterfeit part requirements, Section 5949, and limitations on sourcing from China,
  • Federal Acquisition Security Council (FASC) regulations and product exclusions,
  • Controlled unclassified information (CUI) obligations, and
  • M&A government cybersecurity due diligence.

Susan has an active internal investigations practice that assists clients when allegations of non-compliance arise with procurement requirements, such as in the following areas:

  • Procurement fraud and FAR mandatory disclosure requirements,
  • Cyber incidents and data spills involving sensitive government information,
  • Allegations of violations of national security requirements, and
  • Compliance with MIL-SPEC requirements, the Qualified Products List, and other sourcing obligations.

In addition to her counseling and investigatory practice, Susan has considerable litigation experience and has represented clients in bid protests, prime-subcontractor disputes, Administrative Procedure Act cases, and product liability litigation before federal courts, state courts, and administrative agencies.

Susan is a former Co-Chair of the Public Contract Law Procurement Division, and a former Co-Chair and current Vice-Chair of the ABA PCL Cybersecurity, Privacy and Emerging Technology Committee.

Prior to joining Covington, Susan served as in-house senior counsel at Northrop Grumman Corporation and Motorola Incorporated.

Ashden Fein

Ashden Fein is a vice chair of the firm’s global Cybersecurity practice. He advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance.

For cybersecurity matters, Ashden counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Ashden frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, extortion and ransomware, and destructive attacks.

Additionally, Ashden assists clients from across industries with leading internal investigations and responding to government inquiries related to U.S. national security and insider risks. He also advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations, including, among others, the National Industrial Security Program Operating Manual (NISPOM), U.S. government cybersecurity regulations, FedRAMP, and requirements related to supply chain security.

Before joining Covington, Ashden served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions, including serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to Wikileaks.

Ashden currently serves as a Judge Advocate in the U.S. Army Reserve.

Ryan Burnette

Ryan Burnette is a government contracts and technology-focused lawyer who advises on federal contracting compliance requirements and on government and internal investigations that stem from these obligations. Ryan has particular experience with defense and intelligence contracting, as well as with cybersecurity, supply chain, artificial intelligence, and software development requirements.

Ryan also advises on Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) compliance, public policy matters, agency disputes, and government cost accounting, drawing on his prior experience in providing overall direction for the federal contracting system to offer insight on the practical implications of regulations. He has assisted industry clients with the resolution of complex civil and criminal investigations by the Department of Justice, and he regularly speaks and writes on government contracts, cybersecurity, national security, and emerging technology topics.

Ryan is especially experienced with:

  • Government cybersecurity standards, including the Federal Risk and Authorization Management Program (FedRAMP); DFARS 252.204-7012, DFARS 252.204-7020, and other agency cybersecurity requirements; National Institute of Standards and Technology (NIST) publications, such as NIST SP 800-171; and the Cybersecurity Maturity Model Certification (CMMC) program.
  • Software and artificial intelligence (AI) requirements, including federal secure software development frameworks and software security attestations; software bill of materials requirements; and current and forthcoming AI data disclosure, validation, and configuration requirements, including unique requirements that are applicable to the use of large language models (LLMs) and dual-use foundation models.
  • Supply chain requirements, including Section 889 of the FY19 National Defense Authorization Act; restrictions on covered semiconductors and printed circuit boards; Information and Communications Technology and Services (ICTS) restrictions; and federal exclusionary authorities, such as matters relating to the Federal Acquisition Security Council (FASC).
  • Information handling, marking, and dissemination requirements, including those relating to Covered Defense Information (CDI) and Controlled Unclassified Information (CUI).
  • Federal Cost Accounting Standards and FAR Part 31 allocation and reimbursement requirements.

Prior to joining Covington, Ryan served in the Office of Federal Procurement Policy in the Executive Office of the President, where he focused on the development and implementation of government-wide contracting regulations and administrative actions affecting more than $400 billion worth of goods and services each year.  While in government, Ryan helped develop several contracting-related Executive Orders, and worked with White House and agency officials on regulatory and policy matters affecting contractor disclosure and agency responsibility determinations, labor and employment issues, IT contracting, commercial item acquisitions, performance contracting, schedule contracting and interagency acquisitions, competition requirements, and suspension and debarment, among others.  Additionally, Ryan was selected to serve on a core team that led reform of security processes affecting federal background investigations for cleared federal employees and contractors in the wake of significant issues affecting the program.  These efforts resulted in the establishment of a semi-autonomous U.S. Government agency to conduct and manage background investigations.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.