Overview
Since the release of OpenAI’s ChatGPT in November 2022, concerns about the future of artificial intelligence (AI) have taken the country by storm. Early in 2023, the Executive Branch focused its oversight of AI on the creation of voluntary principles adopted by a select group of leading U.S.-based technology companies. On October 30, 2023, this changed when President Joe Biden signed an executive order on the Safe, Secure, and Trustworthy Development and Use of AI (E.O. 14110, the executive order). The executive order is designed to establish new standards for AI safety and security for leading AI companies while also presenting federal agencies with a path forward to develop more robust AI usage guidelines.
At the same time, members of Congress have begun to introduce legislation creating varying degrees of regulatory frameworks covering the burgeoning AI industry. Last year, the Senate held numerous AI Insight Forums designed to improve senators’ understanding of AI technology and its impacts on the country at large. Congress kicked off this year with the formation of a bipartisan Working Group on Artificial Intelligence within the House Financial Services Committee—an indication that the AI issue continues to gain traction.
This report provides a high-level overview of federal action surrounding AI regulation, a window into the actions that should be expected going forward, and an analysis of how those actions will impact various industries of interest.
White House Actions
Blueprint for an AI Bill of Rights
In October 2022, the Biden Administration took its first major AI-related action when the White House released its “Blueprint for an AI Bill of Rights.” This set of nonbinding guidelines represents the Biden Administration’s first attempt at providing a framework for regulatory clarity in the AI technology sector – a sector that some fear could violate privacy rights and displace vulnerable workers if left unregulated.
The guidelines are based on five guiding principles: protecting people from unsafe or ineffective automated systems; preventing discrimination by algorithms; safeguarding people from abusive data practices; informing people when an automated system is being used; and letting users opt out of automated systems. While the guidelines describe avenues by which the recommendations can be integrated into policy, practice, and design, they lack enforcement mechanisms to ensure they actually take effect. Despite this, some industry leaders have expressed concern that the guidelines could lead to overregulation that stifles innovation and puts U.S. businesses at a disadvantage in the global economy.
Building on the Blueprint for an AI Bill of Rights, in May 2023, the Administration obtained independent commitments from AI developers including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI to participate in a public evaluation of their AI systems. This commitment will allow these models to be evaluated by thousands of community partners and AI experts to assess whether they align with the previously released Blueprint for an AI Bill of Rights and AI Risk Management Framework (described below). In July 2023, the Administration obtained further commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI that their AI systems will follow core principles of safety, security, and trust. To this end, the companies agreed to subject their systems to rigorous testing regimes that check for the potential that the systems could be used to launch biological attacks and cyberattacks. The companies also agreed to provide users with a simple way to tell whether audio and visual content is original or has been altered or generated by AI technology.
National Artificial Intelligence R&D Strategic Plan
In May 2023, the White House Office of Science and Technology Policy (OSTP) announced two new actions concerning the regulation of AI systems. First, the office released the National AI R&D Strategic Plan, designed to define the major research challenges surrounding AI and to coordinate federal R&D investments. It is also designed to rein in risks the government believes the private sector will be unable to handle on its own. The plan builds upon strategic plans issued by the office in 2016 and 2019 while outlining nine strategies the federal government should pursue:
Make long-term investments in fundamental and responsible AI research.
Develop effective methods for human-AI collaboration.
Understand and address the ethical, legal, and societal implications of AI.
Ensure the safety and security of AI systems.
Develop shared public datasets and environments for AI training and testing.
Measure and evaluate AI systems through standards and benchmarks.
Better understand the national AI R&D workforce needs.
Expand public-private partnerships to accelerate advances in AI.
Establish a principled and coordinated approach to international collaboration in AI research.
The White House hopes these strategies will provide government and industry with clear guidelines to accelerate AI advancements and encourage the technology’s growth and implementation.
OSTP also published a request for information (RFI) in an effort to develop a National AI Strategy that accounts for recent and projected advances in AI, ensuring the U.S. remains a world leader in AI systems that are transparent and responsible and that uphold democratic ideals. Specifically, the RFI sought comments on (1) protecting rights, safety, and national security; (2) advancing equity and strengthening civil rights; (3) bolstering democracy and civic participation; (4) promoting economic growth and good jobs; and (5) innovating in public services.
Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI
On October 30, 2023, President Biden signed an executive order on the Safe, Secure, and Trustworthy Development and Use of AI (E.O. 14110). As noted above, the executive order is designed to establish new standards for AI safety and security for leading AI companies while also presenting federal agencies with a path forward to develop more robust AI usage guidelines. The order reflects an understanding within the White House that more must be done to rein in AI technology beyond promulgating voluntary commitments. The executive order represents the first step in developing concrete guidelines and regulations around the technology, further building upon the 2022 Blueprint for an AI Bill of Rights.
The executive order is structured around eight distinct priorities guiding the Administration’s future actions on AI:
Standards for AI Safety and Security. The executive order recommends the creation of robust testing and evaluation protocols, the results of which could be reported to the federal government. The order also directs the Department of Commerce to develop guidance for content authentication and watermarking to label AI-generated content, cracking down on the proliferation of deceptive materials.
Privacy Protection. The executive order asks Congress to pass legislation protecting privacy rights, while also ensuring that the use and retention of data by federal agencies is lawful and accountable.
Advancing Equity and Civil Rights. The executive order directs federal agencies to provide clear guidance to landlords and federal contractors to ensure AI algorithms are not used to exacerbate housing discrimination. Furthermore, the order directs agencies to develop best practices for the use of AI in the criminal justice system to ensure fairness and root out bias.
Protecting Consumers, Patients, and Students. The executive order directs the Department of Health and Human Services to establish a safety program to receive reports of and to remedy unsafe uses of AI in the health care system while also creating resources to support educators in deploying AI-enabled educational tools.
Supporting Workers. The executive order acknowledges the risk AI poses to the workforce. The order seeks to establish best practices to mitigate these harms and also directs the Department of Labor to produce a report on AI’s potential labor-market impacts and ways federal support for workers can be strengthened.
Promoting Innovation and Competition. The executive order launches the National AI Research Resource – a tool designed to provide researchers and students with access to AI data to promote innovation. The order also calls for providing small AI developers with access to technical assistance and resources, leveling the playing field with large AI companies.
American Leadership on AI. The executive order emphasizes the need for global U.S. leadership on AI to ensure other nations support the safe, secure, and trustworthy deployment and use of AI. The order directs the State Department, in collaboration with the Department of Commerce, to establish multistakeholder engagements and international frameworks for harnessing AI’s benefits and managing its risks.
Responsible Government Use of AI. The executive order recognizes the importance of ensuring the government deploys AI in a responsible fashion. To support this goal, the order calls for the creation of specific guidance governing agency use of AI; accelerates the procurement process for AI-driven technologies; and promotes the hiring of AI professionals across the government.
In subsequent sections, we will highlight the impacts this executive order will have on sectors of importance, including financial services, health care, energy, and transportation. Looking forward, the implementation of the executive order will be a lengthy process. Deadlines prescribed in the order stretch from November 2023 to early 2025, meaning relevant agencies should be continuously monitored for further AI-related actions and regulations. Additionally, following the release of the executive order, the White House Office of Management and Budget released a draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of AI. This draft represents the White House’s first attempt at implementing the executive order and will continue to guide agencies as they promulgate policies moving forward.
Congressional Action
Like the White House, Congress has taken a heightened interest in the development of AI technology, and in the regulatory regime that may be created around it. In both the House and Senate, lawmakers have begun holding hearings, introducing legislation, and participating in educational forums designed to improve member understanding of the technology.
In June 2023, Senate Majority Leader Chuck Schumer (D-NY) announced his SAFE Innovation Framework, a set of guiding principles intended to ground any AI legislative solutions. The framework seeks to safeguard national security with AI and determine how adversaries use it; support the deployment of responsible AI systems to address concerns around misinformation and bias; require that AI systems align with U.S. democratic values; determine what information the federal government needs from AI developers and deployers; and support U.S.-led innovation in AI technologies – including innovation in security, transparency, and accountability.
The 118th Congress has seen a flurry of activity around AI, with lawmakers introducing a variety of bills to tackle this emerging technology. So far, over 30 AI-related bills have been introduced in the House and Senate. Most of these bills follow two distinct regulatory approaches – a “light touch” approach and a “heavy-handed” approach. Descriptions of the most prominent bills are included below.
Light Touch Legislation
Most of the AI legislation introduced this term seems to follow a “light touch” regulatory approach. Some members have expressed concerns that following a “heavy-handed” approach would stifle innovation and put the U.S. at a disadvantage against adversaries abroad.
On November 15, Senator John Thune (R-SD) and Senator Amy Klobuchar (D-MN) introduced the AI Research, Innovation, and Accountability Act of 2023 (S. 3312). The bill establishes a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI. Agency applications of “high-impact” AI systems would be governed by NIST, with enforcement conducted by OMB. Furthermore, companies deploying “critical-impact” AI systems would have to perform risk assessments in line with those established in the NIST AI Risk Management Framework.
On November 2, Senator Jerry Moran (R-KS) and Senator Mark Warner (D-VA) introduced the Federal AI Risk Management Act (S. 3205). The bill would require OMB to issue guidance requiring agencies to incorporate the NIST AI Risk Management Framework into their AI risk management efforts; require OMB to establish a workforce initiative that enables federal agencies to access diverse candidates; provide oversight of the federal procurement of AI systems; and require NIST to develop test and evaluation capabilities for AI acquisitions.
Heavy-Handed Legislation
Other legislative proposals seek to take a more robust approach to regulating the use and deployment of AI. Some members, such as Senator Ted Cruz (R-TX) and Senator John Thune (R-SD), believe proposals spearheaded by Senate Majority Leader Chuck Schumer (D-NY) will ultimately follow this approach. Proponents of the “heavy-handed” regulatory approach believe the risks of AI technologies are too great not to set clear standards and guidance around their development, use, and procurement.
On October 26, Senator Brian Schatz (D-HI) and Senator John Kennedy (R-LA) introduced the AI Labeling Act of 2023 (S. 2691). The bill would increase transparency by requiring clear labels and disclosures on AI-generated content and chatbots. The bill also calls on developers and third-party licensees to “implement reasonable procedures to prevent downstream use” of those systems without proper disclosures in place. The bill’s sponsors say it “puts the onus” on companies rather than consumers to identify content as being generated by AI. This bill is just one example of the many legislative solutions introduced to combat AI-driven scams, deepfakes, and other misleading content.
On September 8, Senator Richard Blumenthal (D-CT) and Senator Josh Hawley (R-MO) announced a bipartisan legislative framework to establish guardrails for AI. The framework proposes specific solutions such as establishing an independent oversight body, ensuring accountability for AI-driven harms, defending national security, promoting transparency, and protecting consumers and children. Notably, the framework would establish a licensing regime for companies developing sophisticated general-purpose AI models or models used in high-risk situations. The framework has not yet been incorporated into a specific piece of legislation.
AI Insight Forums
In announcing his SAFE Innovation Framework, Senate Majority Leader Chuck Schumer (D-NY) stated that the “traditional approach of committee hearings” is not sufficient to address the risks and opportunities presented by AI technology. Instead, he planned a series of AI “Insight Forums,” covering a variety of policy topics. These panels would bring together members of Congress and leaders in the tech industry, trade associations, civil society, academia, labor unions, the art industry, think tanks, and government to present diverse visions of how AI technology should be addressed (if at all) on the federal level.
Between September 13 and early December 2023, nine Insight Forums were held covering the following topics:
Forum #1 – Introduction: Focused on elections and the use of deepfakes; high-risk applications; impact of AI on the workforce; national security; and privacy.
Forum #2 – Innovation: Focused on transformational innovation in medicine, energy, and science; sustainable innovation in security, accountability, and transparency; equitable government R&D funding; balancing open source AI models’ national security concerns while recognizing the benefit they could bring to American competitiveness and innovation; availability of government datasets; and minimizing harms.
Forum #3 – Workforce: Focused on how AI will alter the way Americans work; risks and opportunities presented by AI in medicine, manufacturing, energy, and other industries.
Forum #4 – High Impact AI: Focused on civil rights and AI discrimination laws; facial recognition technology; accuracy of AI tools; identifying risks and risk frameworks; auditing AI systems; critical infrastructure; AI used in hiring and employment; environmental risks; housing and financing; and AI in health care and medicine.
Forum #5 – Democracy and Elections: Focused on the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; watermarking obligations for AI-generated content; positive use cases for AI’s role in elections and civic engagement; and mandating AI risk assessments.
Forum #6 – Privacy and Liability: Focused on open source and proprietary AI models; data privacy legislation (AI vs. general privacy); establishing AI risk tiers; Section 230 liability; and standards and apportioning of liability.
Forum #7 – Transparency & Explainability and Intellectual Property & Copyright: Focused on the definition of AI transparency and what AI could mean for creators and inventors, particularly with regard to intellectual property and copyright.
Forum #8 – Risk, Alignment, & Guarding Against Doomsday Scenarios: Focused on AI risks and AI risk mitigation, notably including a discussion of the probability that AI will lead to a “doomsday” scenario.
Forum #9 – National Security: Focused on the risk posed by China and whether the government should increase funding and procurement for its AI military capabilities.
Industry Impacts
Elected officials have acknowledged all industries are sure to be impacted by AI technology. In line with this acknowledgment, the White House and various executive agencies have already begun to take action, releasing guidance on industry-specific uses of AI.
Health Care
The executive order on the Safe, Secure, and Trustworthy Development and Use of AI includes multiple provisions tailored to the health care industry. Among them, the executive order directs the Secretary of Health and Human Services (HHS) to release a strategy within one year on how the Department plans to regulate the use of AI or AI-enabled tools in the drug development process. At minimum, the strategy must define the objectives, goals, and high-level principles for appropriate regulation in each phase of drug development; identify areas where future rulemaking may be necessary; consider the potential for new public-private partnerships needed for a regulatory system; and assess the possible risks involved in utilizing AI technology. The directive comes as many stakeholders have raised concerns that HHS and the Food and Drug Administration (FDA) will not be prepared for the increased pace of drug development and discovery resulting from the use of AI technology.
Relatedly, the executive order gives HHS one year to establish an “AI Task Force” and a strategic plan on the use and deployment of predictive and generative AI-enabled technology in health care delivery. The task force must ensure that AI-powered health care delivery tools are not biased and maintain a minimum level of quality across areas such as research and discovery; drug and device safety; and health care financing.
Additionally, the executive order directs HHS to identify and prioritize grantmaking and other awards to support responsible AI development and use, including creating novel personalized patient immune-response profiles; exploring ways to improve health care data quality; and advancing the development of AI systems that improve the quality of veterans’ health care, among other goals. Furthermore, the executive order seeks to accelerate the award of grants through the National Institutes of Health AI/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD), showcasing AIM-AHEAD activities in underserved communities.
The FDA is aware of the potential for AI and machine learning (ML) to be used in the drug development process. In May 2023, the agency released an initial discussion paper to open a dialogue with stakeholders and solicit their views on the use of these technologies for drug development purposes. The FDA has said it plans “to develop and adopt a flexible risk-based regulatory framework that promotes innovation and protects patient safety.”
Energy
The executive order includes multiple provisions designed to help the Department of Energy (DOE) combat climate change and promote grid resiliency. The executive order directs DOE, in coordination with OSTP, the Federal Energy Regulatory Commission, the Council on Environmental Quality, and the National Climate Advisor, to issue a report detailing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power. DOE is further directed to develop climate change mitigation strategies in consultation with industry, academia, and international allies through the use of AI testbeds and supercomputing capabilities. The order also creates a new office to coordinate the development of AI and other critical and emerging technologies across DOE programs and the 17 National Laboratories.
Transportation
The executive order calls for multiple Department of Transportation (DOT) reports on the future of AI in transportation. Specifically, the executive order directs DOT, in partnership with the Advanced Research Projects Agency – Infrastructure (ARPA-I), to explore transportation-related applications of AI such as autonomous mobility ecosystems. It also encourages ARPA-I to prioritize the allocation of grants to support these opportunities. The executive order also requires the Nontraditional and Emerging Transportation Technology Council to assess the need for guidance and technical assistance regarding the use of AI in transportation and to establish a new DOT Cross-Modal Executive Working Group.
The executive order also directs certain Federal Advisory Committees, including the Advanced Aviation Advisory Committee, the Transforming Transportation Advisory Committee, and the Intelligent Transportation Systems Program Advisory Committee to provide advice on the safe and responsible use of AI in transportation. These advisory committees are traditionally tasked with developing guidance regarding both air and surface transportation systems throughout the U.S.
Financial Services
The executive order contains few provisions directed at the Department of the Treasury; however, those that are included focus on improving cybersecurity throughout the Department and the financial services sector as a whole. More specifically, the executive order requires the Secretary of the Treasury to issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks. Relatedly, the order requires any agency with authority over critical infrastructure – including Treasury – to provide an assessment of risks related to using AI in the banking sector and how deploying AI may make critical infrastructure systems more vulnerable to failures, physical attacks, and cyberattacks. It also requires Treasury to consider ways to mitigate these vulnerabilities.
Most government focus on AI in the financial services sector has taken place at the Securities and Exchange Commission (SEC). In August 2023, the SEC released a proposed rule known as the Predictive Data Analytics Rule (RegPDA). The rule would require broker-dealers and investment advisers to evaluate any “use or reasonably foreseeable potential use” of a covered technology in an investor interaction to identify any conflict of interest that may occur. Further, the rule would require firms to adopt policies and procedures that “neutralize” those conflicts.
The rule has faced pushback from the broker-dealer and investment adviser communities due to the rule’s broad definition of a “covered technology,” as well as uncertainty around how one could “neutralize” a conflict. Under the rule, covered technologies include not just AI, but any “analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or processes that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes.” This broad definition has led some to express concern that the rule could cover the use of simple technologies such as Excel spreadsheets. Furthermore, unlike previous SEC rules involving conflicts of interest, this rule contains a novel “neutralization” provision; the SEC has typically required firms to disclose or mitigate conflicts – requirements that naturally lead to robust recordkeeping and investor protection.
SEC Chair Gary Gensler has said the rule is necessary, as predictive data analytics models “provide an increasing ability to make predictions about each of us as individuals… [raising] possibilities that conflicts may arise to the extent that advisors or brokers are optimizing to place their interests ahead of their investors’ interests.” Chair Gensler also expressed concern about the possibility that these technologies could be rapidly scaled up, resulting in conflicts that “could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible.”
On January 11, House Financial Services Committee Chairman Patrick McHenry (NC-10) and Ranking Member Maxine Waters (CA-43) announced the creation of the Committee’s bipartisan Working Group on Artificial Intelligence (AI).
The bipartisan AI Working Group, led by Digital Asset, Financial Technology and Inclusion Subcommittee Chairman French Hill (AR-02) and Subcommittee Ranking Member Stephen F. Lynch (MA-08), “will explore how artificial intelligence (AI) is impacting the financial services and housing industries, including firms’ use of AI in decision-making, the development of new products and services, fraud prevention, compliance efficiency, and the enhancement of supervisory and regulatory tools, as well as how AI may impact the financial services workforce.” Notably, the Working Group will examine how current regulation addresses the use of AI and ensure new regulations consider the potential benefits and risks associated with AI.
Next Steps
Moving forward, Congress will continue to examine the contours of AI safety and regulatory legislation into the new year. Additionally, congressional committees in the House and Senate will likely continue to expand on their AI research and education activities by holding hearings and further AI Insight Forums.
Furthermore, the White House and executive agencies will continue to implement parts of the executive order signed in late October 2023. Some deadlines assigned in the executive order, due in November 2023, have already passed; others will not come due until early 2025. Agencies have flexibility in meeting these deadlines because, unlike legislation passed by Congress, the executive order does not include penalties for falling behind.
Internationally, the European Union (EU) continues to make progress on its long-awaited AI Act. In December 2023, the European Parliament and the Council reached a provisional agreement on the legislation, representing the most comprehensive AI regulatory regime in the world. The law, which does not take effect until 2025, sets rules around the development of high-impact general-purpose AI models; establishes a revised system of governance with enforcement powers at the EU level; extends the list of prohibited uses of AI – such as certain uses of facial recognition technology by law enforcement; and requires AI deployers to conduct a fundamental rights impact assessment prior to launching an AI system.
Progress made by the Europeans will certainly put pressure on Congress to act in 2024, setting up a crucial year for congressional leaders who believe the U.S. should be a global leader in AI policy and regulatory standards.