This is part of an ongoing series of Covington blogs on the implementation of Executive Order No. 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “AI EO”), issued by President Biden on October 30, 2023.  The first blog summarized the AI EO’s key provisions and related OMB guidance, and subsequent blogs described the actions taken by various government agencies to implement the AI EO from November 2023 through October 2024.  This blog describes key actions taken to implement the AI EO during November 2024 and potential implications of the 2024 U.S. election.  We will discuss November 2024 developments implementing President Biden’s 2021 Executive Order on Cybersecurity in a separate post.

NIST Issues Final Report on Synthetic Content Risks

On November 20, the National Institute of Standards and Technology (“NIST”) published the final version of NIST AI 100-4, Reducing Risks Posed by Synthetic Content, following a request for information in December 2023 and a draft for public comment in April 2024.  The final report fulfills § 4.5(a) of the AI EO, which requires the Secretary of Commerce to submit a report identifying existing and potential “standards, tools, methods, and practices” for authenticating, labeling, detecting, and auditing synthetic content and preventing the production of AI-generated child sexual abuse material (“CSAM”) and non-consensual intimate imagery (“NCII”).

While noting that there is “no silver bullet to solve the issue of public trust in and safety concerns posed by digital content,” the report identifies “provenance data tracking” (e.g., watermarks and digital signatures) and “synthetic content detection” as two state-of-the-art approaches for ensuring “digital content transparency” and reducing synthetic content risks.  The report describes technical methods for ensuring robust and secure synthetic content watermarking and digital signatures, and outlines types of algorithms that may be used to distinguish synthetic images, video, audio, and text.  Additionally, the report discusses hashing, filtering, testing, and other safeguards to prevent the creation of CSAM and NCII using generative AI tools.
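
As a rough illustration of the hash-and-match safeguard referenced above (and not material drawn from the NIST report), the short Python sketch below screens a generated file against a hypothetical blocklist of known-harmful content hashes; the blocklist value and file name are placeholders used only for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests of known-harmful content.
# Real deployments typically rely on curated hash-sharing databases and
# often use perceptual rather than purely cryptographic hashes.
KNOWN_HARMFUL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_blocked(path: Path) -> bool:
    """Return True if the file's hash matches the blocklist."""
    return sha256_of_file(path) in KNOWN_HARMFUL_HASHES

if __name__ == "__main__":
    sample = Path("generated_image.png")  # hypothetical output of a generative model
    if sample.exists():
        print("blocked" if is_blocked(sample) else "allowed")
```

Exact-match hashing of this kind catches only known, unmodified content; the detection algorithms and provenance techniques surveyed in the report are directed at the harder case of altered or newly generated material.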

Department of Education Issues Guidance on Discriminatory Uses of AI

On November 19, the Department of Education’s Office for Civil Rights (“OCR”) released guidance on “Avoiding the Discriminatory Use of Artificial Intelligence.”  The new guidance implements § 8(d) of the AI EO, which requires the Secretary of Education to develop “resources [that] address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities.”  Noting that federal civil rights laws “apply to discrimination resulting from the use of AI,” the guidance provides 21 examples of uses of AI in educational settings that could result in an OCR investigation under Title VI of the Civil Rights Act, Title IX of the Education Amendments of 1972, or § 504 of the Rehabilitation Act.  Examples of potential violations of these laws include the use of racially biased facial recognition technology, failure to respond to AI-generated deepfakes of students, and the use of AI systems for admissions or disciplinary purposes that fail to account for students’ disabilities.

Departments of Commerce and State Host Inaugural Meeting of the International Network of AI Safety Institutes

On November 20, the Departments of Commerce and State convened the inaugural meeting of the International Network of AI Safety Institutes (the “Network”) in San Francisco, California.  The two-day meeting, which included AI developers, academics, scientists, and business and civil society leaders, was held with the goal of “address[ing] some of the most pressing challenges in AI safety and avoid[ing] a patchwork of global governance that could hamper innovation.”  At the meeting, the Network members—Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the UK, and the U.S.—focused specifically on managing synthetic content risks, testing foundation models, and conducting risk assessments for advanced AI systems.

Ahead of the meeting, the Network members issued a joint mission statement that identified four initial priority areas for collaboration:  AI safety research, best practices for AI testing, common approaches for interpreting AI tests, and global information sharing.  Network members also committed to over $11 million in funding for a “joint research agenda” on mitigating synthetic content risks through content labeling techniques and model safeguards, and issued a joint statement on risk assessments for advanced AI systems.

U.S. AI Safety Institute Establishes Inter-Agency Taskforce on AI National Security and Public Safety Risks

On November 20, the U.S. AI Safety Institute (“U.S. AISI”) announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which will be chaired by U.S. AISI and include representatives from the National Institutes of Health and the Departments of Defense, Energy, and Homeland Security, with more federal agencies expected to join in the future.  The goal of the TRAINS Taskforce will be to coordinate research and testing of advanced AI models across national security and public safety domains and prevent adversaries from misusing AI to undermine U.S. national security.  According to U.S. AISI, the taskforce implements the “whole-of-government approach to AI safety” directed by the White House’s AI National Security Memorandum, issued in October and previously covered here.

Potential Shifts in U.S. AI Policy Under the Incoming Trump Administration

Following the election of President-Elect Trump and Republican majorities in both houses of Congress, AI industry stakeholders anticipate significant changes to U.S. AI policy in 2025, including the revocation of the AI EO.  It is unclear, however, whether the incoming administration will maintain or discontinue the more than 100 other federal agency actions that have been completed pursuant to the AI EO.  While the incoming administration is likely to halt ongoing Commerce Department rulemaking to implement the AI EO’s dual-use foundation model reporting and red-team testing requirements—previously covered here—efforts to promote private-sector innovation, AI R&D, and competition with China are expected to continue.  On November 26, the Council on Foreign Relations issued three recommendations for the incoming Trump Administration: the creation of an AI Commission to ensure AI safety, investments in AI research by universities and federal labs, and energy policies that meet the growing energy demands of AI data centers while reducing costs.  We will address the second Trump Administration’s likely approach to AI in greater detail in future blogs and alerts, including its possible consistency with the first Trump Administration’s AI Executive Orders No. 13859 and No. 13960 and with recent statements by the president-elect.

Robert Huffman

Bob Huffman counsels government contractors on emerging technology issues, including artificial intelligence (AI), cybersecurity, and software supply chain security, that are currently affecting federal and state procurement. His areas of expertise include the Department of Defense (DOD) and other agency acquisition regulations governing information security and the reporting of cyber incidents, the Cybersecurity Maturity Model Certification (CMMC) program, the requirements for secure software development self-attestations and bills of materials (SBOMs) emanating from the May 2021 Executive Order on Cybersecurity, and the various requirements for responsible AI procurement, safety, and testing currently being implemented under the October 2023 AI Executive Order. 

Bob also represents contractors in False Claims Act (FCA) litigation and investigations involving cybersecurity and other technology compliance issues, as well as more traditional government contracting cost, quality, and regulatory compliance issues. These investigations include significant parallel civil/criminal proceedings growing out of the Department of Justice’s Cyber Fraud Initiative. They also include investigations resulting from False Claims Act qui tam lawsuits and other enforcement proceedings. Bob has represented clients in over a dozen FCA qui tam suits.

Bob also regularly counsels clients on government contracting supply chain compliance issues, including those arising under the Buy American Act/Trade Agreements Act and Section 889 of the FY2019 National Defense Authorization Act. In addition, Bob advises government contractors on rules relating to IP, including government patent rights, technical data rights, rights in computer software, and the rules applicable to IP in the acquisition of commercial products, services, and software. He focuses this aspect of his practice on the overlap of these traditional government contracts IP rules with the IP issues associated with the acquisition of AI services and the data needed to train the large language models on which those services are based.

Bob is ranked by Chambers USA for his work in government contracts and he writes extensively in the areas of procurement-related AI, cybersecurity, software security, and supply chain regulation. He also teaches a course at Georgetown Law School that focuses on the technology, supply chain, and national security issues associated with energy and climate change.

Susan B. Cassidy

Susan is co-chair of the firm’s Aerospace and Defense Industry Group and is a partner in the firm’s Government Contracts and Cybersecurity Practice Groups. She previously served as in-house counsel for two major defense contractors and advises a broad range of government contractors on compliance with FAR and DFARS requirements, with special expertise in supply chain, cybersecurity, and FedRAMP requirements. She has an active investigations practice and advises contractors when faced with cyber incidents involving government information, as well as representing contractors facing allegations of cyber fraud under the False Claims Act. Susan relies on her expertise and experience with the Defense Department and the Intelligence Community to help her clients navigate the complex regulatory intersection of cybersecurity, national security, and government contracts. She is Chambers rated in both Government Contracts and Government Contracts Cybersecurity. In 2023, Chambers USA quoted sources stating that “Susan’s in-house experience coupled with her deep understanding of the regulatory requirements is the perfect balance to navigate legal and commercial matters.”

Her clients range from new entrants into the federal procurement market to well-established defense contractors, and she provides compliance advice across a broad spectrum of procurement issues. Susan consistently remains at the forefront of legislative and regulatory changes in the procurement area, and in 2018, the National Law Review selected her as a “Go-to Thought Leader” on the topic of Cybersecurity for Government Contractors.

In her work with global, national, and start-up contractors, Susan advises companies on all aspects of government supply chain issues including:

  • Government cybersecurity requirements, including the Cybersecurity Maturity Model Certification (CMMC), DFARS 7012, and NIST SP 800-171 requirements;
  • Evolving sourcing issues, such as Section 889, counterfeit part requirements, Section 5949, and limitations on sourcing from China;
  • Federal Acquisition Security Council (FASC) regulations and product exclusions;
  • Controlled unclassified information (CUI) obligations; and
  • M&A government cybersecurity due diligence.

Susan has an active internal investigations practice that assists clients when allegations of non-compliance arise with procurement requirements, such as in the following areas:

  • Procurement fraud and FAR mandatory disclosure requirements,
  • Cyber incidents and data spills involving sensitive government information,
  • Allegations of violations of national security requirements, and
  • Compliance with MIL-SPEC requirements, the Qualified Products List, and other sourcing obligations.

In addition to her counseling and investigatory practice, Susan has considerable litigation experience and has represented clients in bid protests, prime-subcontractor disputes, Administrative Procedure Act cases, and product liability litigation before federal courts, state courts, and administrative agencies.

Susan is a former Co-Chair of the ABA Public Contract Law (PCL) Section’s Procurement Division, as well as a former Co-Chair and current Vice-Chair of the ABA PCL Cybersecurity, Privacy and Emerging Technology Committee.

Prior to joining Covington, Susan served as in-house senior counsel at Northrop Grumman Corporation and Motorola Incorporated.

Ashden Fein

Ashden Fein is a vice chair of the firm’s global Cybersecurity practice. He advises clients on cybersecurity and national security matters, including crisis management and incident response, risk management and governance, government and internal investigations, and regulatory compliance.

For cybersecurity matters, Ashden counsels clients on preparing for and responding to cyber-based attacks, assessing security controls and practices for the protection of data and systems, developing and implementing cybersecurity risk management and governance programs, and complying with federal and state regulatory requirements. Ashden frequently supports clients as the lead investigator and crisis manager for global cyber and data security incidents, including data breaches involving personal data, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, extortion and ransomware, and destructive attacks.

Additionally, Ashden assists clients from across industries with leading internal investigations and responding to government inquiries related to U.S. national security and insider risks. He also advises aerospace, defense, and intelligence contractors on security compliance under U.S. national security laws and regulations, including, among others, the National Industrial Security Program (NISPOM), U.S. government cybersecurity regulations, FedRAMP, and requirements related to supply chain security.

Before joining Covington, Ashden served on active duty in the U.S. Army as a Military Intelligence officer and prosecutor specializing in cybercrime and national security investigations and prosecutions, including serving as the lead trial lawyer in the prosecution of Private Chelsea (Bradley) Manning for the unlawful disclosure of classified information to WikiLeaks.

Ashden currently serves as a Judge Advocate in the U.S. Army Reserve.

Nooree Lee

Nooree advises government contractors and financial investors regarding the regulatory aspects of corporate transactions and restructurings. His experience includes preparing businesses for sale, negotiating deal documents, coordinating large-scale diligence processes, and navigating pre- and post-closing regulatory approvals and integration. He has advised on 35+ M&A deals involving government contractors totaling over $30 billion in combined value. This includes Veritas Capital’s acquisition of Cubic Corp. for $2.8 billion; the acquisition of Perspecta Inc. by Veritas Capital portfolio company Peraton for $7.1 billion; and Cameco Corporation’s strategic partnership with Brookfield Renewable Partners to acquire Westinghouse Electric Company for $7.8+ billion.

Nooree also counsels clients navigating the Foreign Military Sales (FMS) program and Foreign Military Financing (FMF) arrangements. Nooree has advised both U.S. and ex-U.S. companies in connection with defense sales to numerous foreign defense ministries, including those of Australia, Israel, Singapore, South Korea, and Taiwan.

Over the past several years, Nooree’s practice has expanded to include advising on the intersection of government procurement and artificial intelligence. Nooree counsels clients on the negotiation of AI-focused procurement and non-procurement agreements with the U.S. government and the rollout of procurement regulations and policy stemming from the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

Nooree maintains an active pro bono practice focusing on appeals of denied industrial security clearance applications and public housing and housing discrimination matters. In addition to his work within the firm, Nooree is an active member of the American Bar Association’s Section of Public Contract Law and has served on the Section Council and the Section’s Diversity Committee. He also served as the firm’s Fellow for the Leadership Council on Legal Diversity program in 2023.

Ryan Burnette

Ryan Burnette is a government contracts and technology-focused lawyer who advises on federal contracting compliance requirements and on government and internal investigations that stem from these obligations. Ryan has particular experience with defense and intelligence contracting, as well as with cybersecurity, supply chain, artificial intelligence, and software development requirements.

Ryan also advises on Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) compliance, public policy matters, agency disputes, and government cost accounting, drawing on his prior experience in providing overall direction for the federal contracting system to offer insight on the practical implications of regulations. He has assisted industry clients with the resolution of complex civil and criminal investigations by the Department of Justice, and he regularly speaks and writes on government contracts, cybersecurity, national security, and emerging technology topics.

Ryan is especially experienced with:

  • Government cybersecurity standards, including the Federal Risk and Authorization Management Program (FedRAMP); DFARS 252.204-7012, DFARS 252.204-7020, and other agency cybersecurity requirements; National Institute of Standards and Technology (NIST) publications, such as NIST SP 800-171; and the Cybersecurity Maturity Model Certification (CMMC) program.
  • Software and artificial intelligence (AI) requirements, including federal secure software development frameworks and software security attestations; software bill of materials requirements; and current and forthcoming AI data disclosure, validation, and configuration requirements, including unique requirements that are applicable to the use of large language models (LLMs) and dual use foundation models.
  • Supply chain requirements, including Section 889 of the FY19 National Defense Authorization Act; restrictions on covered semiconductors and printed circuit boards; Information and Communications Technology and Services (ICTS) restrictions; and federal exclusionary authorities, such as matters relating to the Federal Acquisition Security Council (FASC).
  • Information handling, marking, and dissemination requirements, including those relating to Covered Defense Information (CDI) and Controlled Unclassified Information (CUI).
  • Federal Cost Accounting Standards and FAR Part 31 allocation and reimbursement requirements.

Prior to joining Covington, Ryan served in the Office of Federal Procurement Policy in the Executive Office of the President, where he focused on the development and implementation of government-wide contracting regulations and administrative actions affecting more than $400 billion worth of goods and services each year.  While in government, Ryan helped develop several contracting-related Executive Orders, and worked with White House and agency officials on regulatory and policy matters affecting contractor disclosure and agency responsibility determinations, labor and employment issues, IT contracting, commercial item acquisitions, performance contracting, schedule contracting and interagency acquisitions, competition requirements, and suspension and debarment, among others.  Additionally, Ryan was selected to serve on a core team that led reform of security processes affecting federal background investigations for cleared federal employees and contractors in the wake of significant issues affecting the program.  These efforts resulted in the establishment of a semi-autonomous U.S. Government agency to conduct and manage background investigations.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients on privacy and competition frameworks and AI regulations, with an increasing focus on U.S. state AI legislative developments and trends related to synthetic content, automated decision-making, and generative AI. He also assists clients in assessing federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.