
US — Federal & State

United States of America

The US has no single federal AI law. Governance is fragmented across executive orders, agency rules, and a patchwork of state legislation.


Federal Regulations & Executive Actions

Executive Order · Official

Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

President Biden's landmark executive order directed federal agencies to develop standards, tools, and tests to ensure AI safety and security, requiring developers of the most powerful AI systems to share safety test results with the government. It established new standards for AI safety and security, privacy protections, equity and civil rights safeguards, and actions to support workers, consumers, and patients. The order also directed NIST to develop standards for AI red-teaming and tasked agencies across government with addressing AI risks in their sectors.

30 October 2023 · National Strategy · Cybersecurity · Data Privacy & Protection (+2)

Executive Order · Official

Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence

President Trump's executive order revoked Executive Order 14110 and directed federal agencies to remove regulations perceived as barriers to American AI innovation and dominance. It instructed the Office of Science and Technology Policy (OSTP) and the National Security Advisor to develop an action plan for sustaining U.S. AI leadership within 180 days. The order emphasized deregulation, private-sector-led AI development, and maintaining American competitiveness against foreign adversaries.

23 January 2025 · National Strategy · Defense & National Security

Law / Act · Official

National Artificial Intelligence Initiative Act of 2020

Enacted as part of the National Defense Authorization Act for Fiscal Year 2021, this law established the National Artificial Intelligence Initiative to ensure continued U.S. leadership in AI research and development, coordinate AI R&D across federal agencies, and accelerate the development and use of trustworthy AI. It created the National AI Initiative Office within OSTP and established an interagency committee to coordinate federal AI activities. The law also directed the creation of the National AI Advisory Committee to advise the President and the Initiative.

1 January 2021 · National Strategy · Education

Law / Act · Official

AI in Government Act of 2020

Enacted as part of the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, this law directed the General Services Administration to establish an AI Center of Excellence to assist agencies in acquiring and using AI technologies. It also required the Office of Management and Budget to issue guidance on the use of AI in federal government and directed agencies to submit annual reports on AI use. The law aimed to improve the federal government's adoption of AI while ensuring ethical and responsible use.

1 January 2021 · National Strategy

Standard / Framework · Official

Artificial Intelligence Risk Management Framework (AI RMF 1.0)

Published by the National Institute of Standards and Technology (NIST), the AI RMF 1.0 is a voluntary framework to help organizations manage risks associated with AI systems throughout their lifecycle. The framework is organized around four core functions—Govern, Map, Measure, and Manage—and provides guidance for improving the ability of organizations to incorporate trustworthiness considerations into AI design, development, and deployment. It is intended to be used by all organizations regardless of size, sector, or AI maturity and is technology- and sector-neutral.

26 January 2023 · National Strategy · Cybersecurity · Data Privacy & Protection

Policy / Guidance · Official

OMB Memorandum M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence

This Office of Management and Budget memorandum established minimum risk management practices for federal agencies using AI, particularly in contexts that affect the rights or safety of the public. It required agencies to designate a Chief AI Officer, complete an annual inventory of AI use cases, and conduct impact assessments for high-impact AI applications. Agencies were directed to take concrete steps to protect civil rights and civil liberties and to ensure human oversight of high-risk AI systems.

28 March 2024 · National Strategy · Data Privacy & Protection · Cybersecurity

National Strategy · Official

National Artificial Intelligence Research and Development Strategic Plan: 2019 Update

This strategic plan, developed by the Networking and Information Technology Research and Development (NITRD) program, updated the 2016 National AI R&D Strategic Plan and identified federal priority areas for AI research investment. It outlined eight strategic priorities including long-term AI research investment, human-AI collaboration, ethics and fairness, safety and security, shared public datasets, standards, and national AI research infrastructure. The 2019 update added an eighth strategy on expanding public-private partnerships to accelerate AI development.

9 May 2019 · National Strategy · Cybersecurity

National Strategy · Official

National Artificial Intelligence Research and Development Strategic Plan: 2023 Update

The 2023 update to the National AI R&D Strategic Plan maintained the eight original strategic priorities while adding a ninth priority focused on establishing a principled and coordinated approach to international collaboration in AI research and development. The plan emphasized the importance of international engagement to advance U.S. AI leadership, promote democratic values in global AI development, and address the international dimensions of AI safety, security, and trustworthiness. It reflected lessons learned from the rapid advancement of large language models and generative AI technologies.

24 May 2023 · National Strategy · Generative AI

Executive Order · Official

Executive Order 14148: Initial Rescissions of Harmful Executive Orders and Actions

This executive order, signed on January 20, 2025, formally revoked Executive Order 14110 on Safe, Secure, and Trustworthy AI among numerous other Biden-era executive actions. It marked the official rollback of the previous administration's comprehensive AI governance framework. The revocation directed agencies to stop implementing requirements stemming from EO 14110 and to review existing AI-related regulations and guidance for potential revision.

13 February 2025 · National Strategy

Executive Order · Official

Executive Order on Advancing Artificial Intelligence Education for American Youth

This executive order directed federal agencies to prioritize AI literacy and education for K-12 students, instructing the Department of Education to develop AI education initiatives and resources. It called for the creation of a Presidential AI Challenge to inspire students to develop AI skills and urged the private sector to contribute resources and expertise to AI education. The order aimed to prepare the next generation of American workers and innovators for an AI-driven economy.

9 April 2025 · Education · National Strategy · Labor & Workforce

Policy / Guidance · Official

OMB Memorandum M-25-21: Accelerating Federal Use of AI through Streamlined Governance

This OMB memorandum, issued under the Trump administration, replaced M-24-10 and directed federal agencies to streamline AI governance processes to accelerate the adoption of AI in government operations. It reduced some risk management requirements from the prior memorandum while retaining requirements for agency Chief AI Officers and high-impact AI use case inventories. The memorandum emphasized removing bureaucratic barriers to AI deployment while maintaining accountability for high-stakes AI applications.

23 April 2025 · National Strategy

Policy / Guidance · Official

OMB Memorandum M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government

This OMB memorandum established guidance for federal agencies on procuring AI technologies efficiently, emphasizing competition, flexibility, and innovation in government AI procurement. It directed agencies to avoid vendor lock-in, evaluate AI solutions based on performance and total cost of ownership, and leverage commercial AI capabilities where appropriate. The memorandum also addressed data rights, security requirements, and interoperability standards for procured AI systems.

23 April 2025 · National Strategy · Cybersecurity

Executive Order · Official

Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government

Issued by President Trump during his first term, this executive order established principles for the use of AI by federal agencies, requiring agencies to adhere to nine principles including lawfulness, transparency, accuracy, and protection of civil liberties. It directed agencies to inventory their AI use cases and designated the CIO Council as responsible for AI governance coordination across the executive branch. The order was notable for being one of the first formal federal AI governance frameworks applied specifically to government operations.

9 December 2020 · National Strategy · Data Privacy & Protection

Executive Order · Official

Executive Order 13859: Maintaining American Leadership in Artificial Intelligence

President Trump's first AI executive order launched the American AI Initiative, directing federal agencies to prioritize AI research and development investments and instructing agencies to make federal data and computing resources available to the AI research community. It called on the National Institute of Standards and Technology to develop standards to support AI technology development, and directed agencies to train and recruit federal workers with AI expertise. The order established the United States' first whole-of-government AI strategy.

11 February 2019 · National Strategy · Labor & Workforce

Policy / Guidance · Official

Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People

Published by the White House Office of Science and Technology Policy (OSTP), this non-binding framework outlined five principles to guide the design, use, and deployment of automated systems to protect Americans' rights and opportunities. The five principles are: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives and fallback mechanisms. While not legally enforceable, the Blueprint provided a normative framework that influenced subsequent federal AI policy and state legislation.

3 March 2022 · National Strategy · Data Privacy & Protection · Facial Recognition

Policy / Guidance · Official

Department of Defense Directive 3000.09: Autonomy in Weapon Systems

This DoD directive updated the 2012 policy on autonomous weapon systems, establishing requirements for human judgment in the use of force and safeguards to prevent unintended engagement. It required that autonomous and semi-autonomous weapon systems be designed to allow commanders and operators to apply appropriate levels of human judgment over the use of force. The directive mandated senior-level reviews for the development and fielding of autonomous weapon systems and set safety, security, and effectiveness standards for AI-enabled weapons.

9 January 2024 · Defense & National Security

National Strategy · Official

Department of Defense Data, Analytics, and Artificial Intelligence Adoption Strategy

The DoD's strategy for accelerating AI adoption across military and administrative operations, emphasizing the integration of data, analytics, and AI as a strategic priority to maintain military advantage. The strategy outlined four lines of effort: establishing data as a strategic asset, maturing DoD's data, analytics, and AI infrastructure, implementing responsible AI practices, and developing the workforce needed to leverage these capabilities. It also called for streamlining the acquisition of AI-enabled capabilities and fostering a culture of data-driven decision-making across the department.

1 October 2023 · Defense & National Security · National Strategy

Law / Act · Official

Artificial Intelligence Act of 2024 (NDAA FY2025 AI Provisions)

Introduced in the 118th Congress, this legislation sought to codify AI governance frameworks for federal agencies and establish national AI standards. Several AI-related provisions were incorporated into the National Defense Authorization Act for Fiscal Year 2025, including requirements for the DoD to assess AI risks, develop AI talent pipelines, and report on the use of AI in defense acquisition and operations. The provisions reflected growing Congressional interest in formalizing AI oversight structures across the federal government.

27 June 2024 · Defense & National Security · National Strategy

Law / Act · Official

AI Transparency in Government Procurement Act (NDAA FY2024, Section 1508)

Enacted as part of the National Defense Authorization Act for Fiscal Year 2024, this provision required the Department of Defense to assess and disclose its use of AI in acquisition and procurement processes and to develop guidance on AI procurement standards and evaluation criteria. It also mandated reporting to Congress on AI-enabled capabilities being developed and deployed by the DoD, enhancing legislative oversight of military AI adoption. The provision built on prior NDAA AI sections and reflected growing Congressional focus on DoD AI governance.

22 December 2023 · Defense & National Security

Policy / Guidance · Official

National Security Memorandum NSM-16: Memorandum on Advancing the United States' Leadership in Artificial Intelligence

This National Security Memorandum directed the Intelligence Community and national security agencies to harness AI capabilities while managing risks to national security and human rights. It established requirements for agencies to develop responsible AI governance frameworks, protect against foreign adversaries' AI capabilities, and ensure that U.S. AI development upholds democratic values. The memorandum also directed the Director of National Intelligence to produce annual assessments of foreign AI threats and capabilities.

24 October 2024 · Defense & National Security · National Strategy · Cybersecurity

Standard / Framework · Official

NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative AI Profile

NIST's Generative AI Profile is a companion resource to the AI RMF 1.0 that provides specific guidance for managing unique risks associated with generative AI technologies, including large language models. It identifies 12 risk areas specific to generative AI—such as hallucinations, harmful content, data privacy, and intellectual property—and provides suggested actions for AI developers and deployers. The profile was developed through extensive public consultation and is intended to help organizations evaluate and manage the distinctive challenges posed by foundation models and generative AI systems.

29 June 2023 · Generative AI · Cybersecurity · Data Privacy & Protection

Policy / Guidance · Official

Voluntary AI Commitments from Leading AI Companies

The Biden-Harris administration secured voluntary commitments from seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—to manage AI risks and advance safety. The companies pledged to share information on AI safety with governments, academia, and civil society; invest in cybersecurity and insider threat safeguards; and develop technical mechanisms to ensure users know when content is AI-generated. These voluntary commitments were an interim step ahead of formal legislation and laid groundwork for the subsequent October 2023 executive order.

21 July 2023 · National Strategy · Generative AI · Cybersecurity

Policy / Guidance · Official

Federal Trade Commission Policy Statement on Biometric Information and AI

The FTC issued a policy statement warning that commercial use of biometric information, including facial recognition and other AI-driven biometric technologies, that deceives or harms consumers may violate Section 5 of the FTC Act. The statement outlined the FTC's intent to scrutinize companies that use biometric data in ways that cause harm, including unauthorized collection and use for AI training, discriminatory AI applications, and failure to provide consumers adequate notice and choice. It signaled that the FTC would use its existing consumer protection authority to address AI biometric harms in the absence of comprehensive federal biometric privacy legislation.

25 June 2024 · Facial Recognition · Data Privacy & Protection

Policy / Guidance · Official

FTC Report: Generative AI and the FTC's Section 5 Authority

The Federal Trade Commission published a report examining how its existing authority under Section 5 of the FTC Act—prohibiting unfair or deceptive acts or practices—applies to generative AI products and services. The report identified key risk areas including false claims about AI capabilities, deceptive AI-generated content, privacy violations in AI training, and discriminatory AI outcomes. It warned that the FTC would pursue enforcement actions against companies that engage in unfair or deceptive practices related to generative AI, regardless of whether Congress enacted sector-specific AI legislation.

22 August 2023 · Generative AI · Data Privacy & Protection

Policy / Guidance · Official

FDA Artificial Intelligence Action Plan

The FDA published an action plan for regulating AI and machine learning-based software as a medical device (AI/ML-SaMD), outlining a risk-based, adaptive regulatory framework tailored to AI systems that continuously learn and change after deployment. The plan described the FDA's intentions for updating its existing regulatory framework for software as a medical device to accommodate the unique characteristics of AI, including criteria for predetermined change control plans and performance monitoring requirements. It signaled the FDA's intent to regulate AI medical devices based on their intended use and risk level rather than their underlying technology.

10 May 2024 · Healthcare

Policy / Guidance · Official

HHS Office for Civil Rights Guidance on AI and Nondiscrimination in Healthcare

The HHS Office for Civil Rights issued guidance clarifying that existing civil rights laws—including Section 504 of the Rehabilitation Act, Section 1557 of the Affordable Care Act, and Title VI of the Civil Rights Act—apply to healthcare providers' use of AI, and that AI systems that result in discrimination against patients based on race, disability, or other protected characteristics may constitute unlawful discrimination. The guidance directed covered entities to monitor AI tools for discriminatory outcomes and take corrective action. It also addressed how informed consent obligations apply to AI-assisted clinical decision-making.

6 December 2023 · Healthcare · Data Privacy & Protection

Policy / Guidance · Official

Equal Employment Opportunity Commission (EEOC) Guidance on AI and Employment Discrimination

The EEOC issued guidance clarifying that employers' use of AI and algorithmic decision-making tools in employment decisions—including hiring, promotion, and termination—must comply with Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. The guidance addressed how disparate impact analysis applies to AI-driven employment tools and warned that an employer may be liable for discrimination even if the bias is embedded in a third-party AI system used at the employer's direction. It also clarified that the duty to provide reasonable accommodations may extend to modifications in AI screening processes for applicants with disabilities.

30 October 2024 · Labor & Workforce · Data Privacy & Protection

National Strategy · Official

National Cybersecurity Strategy Implementation Plan 2.0 — AI Security Provisions

The second version of the National Cybersecurity Strategy Implementation Plan included provisions specifically addressing AI security, including initiatives to develop AI security guidelines for critical infrastructure, establish AI vulnerability disclosure processes, and incorporate AI into national cyber defense capabilities. It tasked CISA, NSA, and NIST with developing sector-specific AI cybersecurity guidance and directed agencies to assess and address AI supply chain security risks. The plan also included measures to combat malicious use of AI for cyber attacks and disinformation.

17 January 2025 · Cybersecurity · National Strategy · Defense & National Security

National Strategy · Official

CISA Roadmap for Artificial Intelligence

The Cybersecurity and Infrastructure Security Agency published its Roadmap for AI, outlining how CISA would promote the responsible use of AI in critical infrastructure defense while addressing risks from adversarial AI. The roadmap identified five lines of effort: responsibly using AI to support CISA's mission, assuring AI is used safely in critical infrastructure, partnering to promote AI security across the broader ecosystem, expanding AI expertise within the agency, and addressing risks from AI to critical infrastructure. It served as a foundational document for CISA's growing role in AI security governance.

15 August 2023 · Cybersecurity · Defense & National Security

Policy / Guidance · Official

Department of Education AI Report: Artificial Intelligence and the Future of Teaching and Learning

The U.S. Department of Education published a comprehensive report examining the implications of AI for education, including recommendations for educators, institutions, and policymakers on the responsible use of AI in teaching and learning. The report emphasized the need for a human-centered approach to AI in education, with teachers maintaining agency over AI tools, and called for AI systems in education to be transparent, auditable, and aligned with educational equity goals. It included policy recommendations covering AI literacy, equitable access to AI tools, data privacy protections for students, and guidelines for AI use in assessment and credentialing.

2 April 2024 · Education · Data Privacy & Protection

Policy / Guidance · Official

Consumer Financial Protection Bureau (CFPB) Circular on AI Chatbots in Financial Services

The CFPB issued a circular warning that financial institutions using AI chatbots that provide consumers with legally required disclosures or respond to billing error disputes and credit reporting complaints must ensure those AI systems accurately fulfill legal obligations. The circular cautioned that AI chatbots failing to provide adequate responses to consumer inquiries about legal rights, account terms, and dispute resolution may violate existing consumer financial protection laws. It signaled that the CFPB would hold financial institutions responsible for AI system failures that harm consumers, regardless of whether the AI operates as intended by its developers.

4 March 2024 · Finance · Generative AI · Data Privacy & Protection

Executive Order · Official

Executive Order on Protecting Children from AI Harms

This executive order directed federal agencies to assess and address risks to children from AI and other emerging technologies, including AI-generated child sexual abuse material and AI systems that facilitate the exploitation or harm of minors. It instructed the Attorney General and Homeland Security Secretary to develop strategies for combating AI-facilitated crimes against children and directed OSTP to study AI risks to children's cognitive and social development. The order also called for the development of age-appropriate AI safety standards and parental notification requirements for AI products used by minors.

21 February 2025 · National Strategy · Data Privacy & Protection

Law / Act · Official

Algorithmic Accountability Act of 2022

Introduced in the Senate, the Algorithmic Accountability Act of 2022 would have required large companies to conduct impact assessments of automated decision systems for risks to privacy, security, accuracy, and nondiscrimination. The bill did not pass, but it was a significant proposal for comprehensive federal AI accountability rules and influenced state-level legislation in Colorado, Virginia, and elsewhere. It was reintroduced in the 118th Congress in 2023 and continued to serve as a benchmark in federal AI accountability debates.

1 November 2022 · Data Privacy & Protection · Labor & Workforce

Law / Act · Official

CHIPS and Science Act — National AI Research Resource (NAIRR) Provisions

The CHIPS and Science Act of 2022 included provisions directing the National Science Foundation and OSTP to establish a National AI Research Resource (NAIRR) to democratize access to AI computing resources, datasets, and software tools for academic researchers and other non-commercial entities. It authorized a taskforce to design the NAIRR and provided initial funding, with the aim of reducing the concentration of AI research capabilities in large commercial companies. The NAIRR pilot program subsequently launched in January 2024.

12 December 2022 · National Strategy · Education

Policy / Guidance · Official

National AI Research Resource (NAIRR) Pilot Launch

The National Science Foundation and OSTP launched the National AI Research Resource pilot, providing U.S. researchers and educators with access to computational resources, data, software, models, and training to advance AI research and democratize participation in AI development. The pilot brought together over 10 federal agencies and 25 private sector and non-profit partners to provide resources to academic and non-profit researchers. The NAIRR pilot was designed as a proof-of-concept for a permanent national AI research infrastructure.

26 January 2024 · National Strategy · Education

National Strategy · Official

Department of Transportation Automated Vehicle Comprehensive Plan

The U.S. Department of Transportation released its Automated Vehicle Comprehensive Plan, establishing a framework for federal engagement with autonomous vehicle technology across safety, infrastructure, connectivity, and equity dimensions. The plan outlined DOT's approach to facilitating the safe testing and deployment of autonomous vehicles on public roads while setting expectations for safety reporting, data sharing, and public engagement. It directed DOT operating administrations including NHTSA, FHWA, and FTA to coordinate on AV policy and to develop updated safety frameworks appropriate for increasingly automated vehicle systems.

7 November 2023 · Autonomous Vehicles · National Strategy

Policy / Guidance · Official

NHTSA Automated Driving Systems: A Vision for Safety 2.0

The National Highway Traffic Safety Administration published voluntary guidance for the safe development and deployment of automated driving systems, providing a framework for manufacturers and other entities to assess and demonstrate the safety of their self-driving vehicle technologies. The guidance encouraged industry to use a Safety Management System approach and to produce voluntary safety self-assessments for public disclosure. NHTSA also updated its Standing General Order requiring manufacturers to report crashes involving automated driving systems and advanced driver assistance systems.

4 February 2022 · Autonomous Vehicles

Court Case · Official

United States v. Farris

A lawyer used Westlaw CoCounsel in proceedings before the Sixth Circuit. Fabricated material: doctrinal commentary — AI-generated quoted commentary attributed to U.S.S.G. § 3B1.1 cmt. n.1 did not appear in the cited commentary, and the court could not locate similar language in any authority. Outcome: counsel disqualified with no compensation for time served; briefs locked; bar referral; notice of opinion.

Court: CA 6th Cir. · Party: Lawyer · Tool: Westlaw CoCounsel · Yes
3 April 2026 · Judicial & Law Enforcement · Generative AI · Liability & Accountability

Court Case · Official

Stanford v. Leinart

A pro se litigant appeared before the CA Texas. Fabricated material: case law — the appellant cited a nonexistent opinion, "Anderson v. Hood," in his brief; the court found the citation fabricated and likely AI-generated, admonishing that citation of nonexistent cases is unacceptable. Outcome: admonishment.

Court: CA Texas · Party: Pro Se Litigant · No
2 April 2026 · Judicial & Law Enforcement · Generative AI · Liability & Accountability

Court Case · Official

Kevin D. Turnage v. Robert F. Kennedy, Jr., et al.

A pro se litigant appeared before the D. Arizona. Misrepresented authority: the plaintiff purported to quote 29 C.F.R. § 1614.110(b) for a 180-day decision requirement, but that language does not appear there; the court noted the correct provision is likely 29 C.F.R. § 1614.106(e)(2). Outcome: warning.

Court: D. Arizona · Party: Pro Se Litigant · No
2 April 2026 · Judicial & Law Enforcement · Generative AI · Liability & Accountability