Early this morning, TechPolicy reported that the U.S. House of Representatives passed the HR1 bill, which would bar U.S. states from regulating AI for the next 10 years.
During the prohibition period, no state or political subdivision of a state may enforce any law or regulation governing AI models, AI systems, or automated decision systems.
The bill matters greatly to tech giants such as Microsoft, OpenAI, Google, and Amazon: it removes a major regulatory constraint and clears the way for further AI experimentation.
In fact, the bill's passage was dramatic, clearing the House by a single vote, 215 in favor to 214 against.
Supporters said the bill would end the confusing patchwork of state AI laws emerging across the U.S., giving Congress room to enact its own AI legislation while preserving America's leadership position.
Opponents, however, argue it is a dangerous concession to tech companies that will leave consumers, especially vulnerable groups and children, unprotected, and will nullify a range of state-level laws covering everything from deepfakes to discrimination in automated hiring.
"AIGC Open Community" reviewed the text of the HR1 bill released by the U.S. Congress and found it to be very extensive; below, we interpret the AI regulation section. The bill's full name is, in fact, rather memorable: HR1, the "One Big Beautiful Bill Act," commonly shortened to the "Big Beautiful Bill."
The AI provisions establish the basic framework for U.S. AI regulation over the next decade. Their core revolves around suspending state-level regulation and directing federal funding, aiming to rapidly boost U.S. global AI competitiveness through policy relaxation and resource allocation.
The bill explicitly stipulates that for 10 years from its enactment, U.S. states and local governments may not enforce any laws or regulations targeting AI models, AI systems, or automated decision systems.
This prohibition covers the entire chain from algorithm design to system deployment, temporarily nullifying existing state AI regulations such as California's algorithmic transparency law and New York's facial recognition ban.
However, the bill leaves room for exceptions. A state rule may continue in force if its core purpose is to promote AI deployment (for example, by simplifying licensing processes or offering tax incentives), or if it applies non-discriminatorily to all technology systems (for example, general data security standards). States may also collect reasonable, cost-based fees, such as AI system safety-testing fees, so long as AI is treated no differently from non-AI technologies, to avoid indirect discrimination.
To complement the regulatory relaxation, the bill appropriates $500 million to the Department of Commerce, available from fiscal year 2025 through 2035. The funding targets three core tasks: first, deploying commercial AI and automation technologies to replace outdated government business systems, such as tax and social security management systems, to improve data-processing efficiency and security;
second, promoting the integration of AI with cloud computing, IoT, and other technologies to explore smart government scenarios, such as predictive public services and optimized emergency response;
third, establishing a cross-agency AI governance framework to coordinate the technology needs of federal departments such as the Department of Defense and the Department of Energy and to avoid redundant development. The initiative aims to use federal demonstration effects to spur private-sector AI investment, forming an industrial ecosystem of "policy relaxation plus financial support."
However, the bill's definitions of key terms leave room for interpretation in future enforcement: "AI model" refers to software components that generate output through machine learning and statistical algorithms, such as GPT-style language models and image-recognition algorithms.
"Automated decision system" covers any AI tool that substitutes for human decision-making, such as hiring algorithms and credit-approval models, though the bill does not say whether it includes embedded AI devices like smart-home systems. Notably, the prohibition restricts only the "enforcement" of state regulations; it does not bar state governments from enacting new policies.
The enforcement provisions also stress federal preemption: where a state regulation conflicts with federal law, federal law prevails. For example, if the federal government enacted a nationwide AI privacy law, states would have to apply it directly and could not retain stricter provisions of their own.
The bill reflects the U.S. strategy of "develop first, govern later" for AI regulation, aiming to cement America's global AI leadership, with the hope that tech companies will make good use of this 10-year regulatory window.
The material for this article is sourced from the U.S. Congress.