Timothy M. Persons, GAO Chief Scientist, Applied Research and Methods, recently provided testimony on artificial intelligence (“AI”) before the House of Representatives’ Subcommittees on Research and Technology and Energy, Committee on Science, Space, and Technology.  Specifically, his testimony summarized a prior GAO technological assessment on AI from March 2018.  Persons’ statement addressed three areas:  (1) how AI has evolved over time; (2) the opportunities and future promise of AI, as well as its principal challenges and risks; and (3) the policy implications and research priorities resulting from advances in AI.  This statement by a GAO official is instructive for how the government is thinking about the future of AI, and how government contractors can think about it, too.

The Evolution and Characteristics of AI

Persons stated that AI can be defined as either “narrow,” meaning “applications that provide domain-specific expertise or task completion,” or “general,” meaning an “application that exhibits intelligence comparable to a human, or beyond.”  Although AI has evolved since the 1950s, Persons cited today’s “increased data availability, storage, and processing power” as explanations for why AI now occupies such a central role in public discourse.  And while we see many instances of narrow AI, general AI is still in its formative stages.

Persons described “three waves” of AI.  The first wave is characterized by “expert knowledge or criteria developed in law or other authoritative sources and encoded into a computer algorithm,” such as tax preparation services.  The second wave is characterized by machine learning and perception, and includes many technologies recognizable today such as voice-activated digital assistants and self-driving cars.  The third wave is characterized by “the strengths of first- and second-wave AI . . . capable of contextual sophistication, abstraction, and explanation”; an example cited in his testimony was a ship navigating the seas without human intervention.  This third wave is just in its beginning stages.

Benefits of Artificial Intelligence and Challenges to Its Development

In his testimony, Persons summarized a number of benefits from the increased prevalence of AI, including “improved economic outcomes and increased levels of productivity” for workers and companies, “improved or augmented human decision making” through AI’s faster processing of greater quantities of data, and even providing “insights into complex and pressing problems.”  However, he also identified a number of challenges to further developing AI technology, such as the “barriers to collecting and sharing data” that researchers and manufacturers face, the “lack of access to adequate computing resources and requisite human capital” for AI researchers, the inadequacy of current laws and regulations to address AI, and the need for an “ethical framework for and explainability and acceptance of AI.”

In its report, GAO identified “four high-consequence sectors” for the further development of AI:  cybersecurity, automated vehicles, criminal justice, and financial services.  In each of these sectors, AI may be used as a valuable tool that could enhance that specific industry’s capabilities, but it also raises concerns in each industry regarding safety, fairness, and civil rights, among other areas.

Policy Considerations for AI and Areas Requiring More Research

Relying on the GAO report and the views of subject-matter experts, Persons’ testimony highlighted a number of policy considerations and areas that require more research to improve AI.  One area is how to “incentiviz[e] data sharing.”  Persons highlighted that private actors need to better share data while still finding ways to safeguard intellectual property and proprietary information.  Similarly, federal agencies could share data that would otherwise not be accessible to researchers.  Another area was “improving safety and security,” as the costs from cybersecurity breaches are not necessarily borne equally between manufacturers and users.

One of the more significant policy considerations that will accompany increased usage of AI is “updating the regulatory approach.”  As an example, “the manufacturer of the automated vehicle bears all responsibility for crashes” under the regulatory structure as currently formulated.  Persons noted that regulators may need “to be proactive” in areas like this to “improve overall public safety.”  Relatedly, laws may have to adapt or evolve to allocate liability more appropriately, as “humans may not always be behind decisions that are made by automated systems.”  Without appropriate regulatory guidance, who bears responsibility for problems caused by AI remains unclear.  Persons also raised the possibility of “establishing regulatory sandboxes,” which would enable regulators “to begin experimenting on a small scale and empirically test[] new ideas.”

Finally, Persons highlighted the importance of understanding “AI’s effects on employment and reimagining training and education.”  The data on this subject is currently incomplete, but Persons stated that it is believed job losses and gains will be sector specific.  With the increased prevalence of AI will also come the need to “reevaluate and reimagine training and education” to offset any possible job losses.

Susan B. Cassidy

Susan is co-chair of the firm’s Aerospace and Defense Industry Group and is a partner in the firm’s Government Contracts and Cybersecurity Practice Groups. She previously served as in-house counsel for two major defense contractors and advises a broad range of government contractors on compliance with FAR and DFARS requirements, with a special expertise in supply chain, cybersecurity and FedRAMP requirements. She has an active investigations practice and advises contractors when faced with cyber incidents involving government information, as well as representing contractors facing allegations of cyber fraud under the False Claims Act. Susan relies on her expertise and experience with the Defense Department and the Intelligence Community to help her clients navigate the complex regulatory intersection of cybersecurity, national security, and government contracts. She is Chambers rated in both Government Contracts and Government Contracts Cybersecurity. In 2023, Chambers USA quoted sources stating that “Susan’s in-house experience coupled with her deep understanding of the regulatory requirements is the perfect balance to navigate legal and commercial matters.”

Her clients range from new entrants into the federal procurement market to well-established defense contractors, and she provides compliance advice across a broad spectrum of procurement issues. Susan consistently remains at the forefront of legislative and regulatory changes in the procurement area, and in 2018, the National Law Review selected her as a “Go-to Thought Leader” on the topic of Cybersecurity for Government Contractors.

In her work with global, national, and start-up contractors, Susan advises companies on all aspects of government supply chain issues including:

  • Government cybersecurity requirements, including the Cybersecurity Maturity Model Certification (CMMC), DFARS 7012, and NIST SP 800-171 requirements,
  • Evolving sourcing issues such as Section 889, counterfeit part requirements, Section 5949, and limitations on sourcing from China,
  • Federal Acquisition Security Council (FASC) regulations and product exclusions,
  • Controlled unclassified information (CUI) obligations, and
  • M&A government cybersecurity due diligence.

Susan has an active internal investigations practice that assists clients when allegations of non-compliance arise with procurement requirements, such as in the following areas:

  • Procurement fraud and FAR mandatory disclosure requirements,
  • Cyber incidents and data spills involving sensitive government information,
  • Allegations of violations of national security requirements, and
  • Compliance with MIL-SPEC requirements, the Qualified Products List, and other sourcing obligations.

In addition to her counseling and investigatory practice, Susan has considerable litigation experience and has represented clients in bid protests, prime-subcontractor disputes, Administrative Procedure Act cases, and product liability litigation before federal courts, state courts, and administrative agencies.

Susan is a former Co-Chair of the Public Contract Law Procurement Division, and a former Co-Chair and current Vice-Chair of the ABA PCL Cybersecurity, Privacy and Emerging Technology Committee.

Prior to joining Covington, Susan served as in-house senior counsel at Northrop Grumman Corporation and Motorola Incorporated.