In the United States, a bill supported by OpenAI could exempt AI giants from any…

OpenAI has expressed its support for a bill introduced in the state of Illinois. The text aims to shield AI labs from civil liability in cases where their models are involved in major disasters. This stance renews the debate over regulating a poorly supervised sector.

Code name: SB 3444. Behind this little-known legislative reference lies a bill introduced in the state of Illinois that could profoundly redefine the liability regime for advanced AI labs, according to Wired.

Supported by OpenAI, the bill aims to exempt developers of “advanced” AI models from legal liability in cases of “serious harm,” under strict conditions.

Unclear definitions

It remains to be seen what lies behind these very vague terms. According to the text, only extreme scenarios are covered: cases involving the death or serious injury of at least one hundred people, or material damage estimated at over one billion dollars. The bill also lists several risks flagged by the AI sector itself, including the malicious use of these systems to design chemical, biological, radiological, or nuclear weapons.

Thus, if an AI model adopts behavior that, if committed by a human, would constitute a criminal offense and lead to such extreme consequences, it could fall into the category of “critical harm.”

In these situations, companies could escape all liability… provided they did not act intentionally or negligently and have published safety and transparency reports. In other words, the text does not eliminate all obligations but organizes a form of regulated immunity for the most catastrophic cases. Very reassuring.

The definition of the models covered is also very broad. Systems requiring over 100 million dollars in training compute would be considered “advanced.” This means that most major players in the sector, such as OpenAI, Google, Anthropic, Meta, and xAI, would fall under the law. Quite convenient…

OpenAI’s explicit support

On this sensitive issue, OpenAI has chosen to step out of its usual reserve. The company has officially expressed its support for the bill, adopting a more offensive line on the regulatory front.

“We support this approach because it prioritizes reducing the risks of serious harm associated with the most advanced AI systems while allowing this technology to be accessible,” said Jamie Radice, a spokesperson, in a statement.

This support marks a significant shift. Until now, OpenAI has mainly been known for its defensive posture: the company has often opposed bills that could expand the legal liability of sector companies. With SB 3444, the move is subtler. It is no longer just about resisting regulation but about helping to define it.

The text is a boon for OpenAI, which is embroiled in scandals. The bill focuses only on the most extreme cases, carefully overlooking individual harm. Last year, several families of teenagers who died by suicide filed lawsuits against the company, alleging that ChatGPT could have fostered unhealthy relationships or a form of dependence. These ongoing cases illustrate risks on a more intimate scale.

Several experts cited by Wired also underline the unusual character of the text, judging it more ambitious, or more permissive depending on the perspective, than previous initiatives supported by the labs themselves.

Towards a federal framework?

During her testimony in favor of the bill, Caitlin Niedermeyer, a member of OpenAI’s global affairs team, emphasized the need for consistent regulation at the federal level. The aim? To avoid a proliferation of inconsistent local rules and promote a single federal framework.

“At OpenAI, we believe that the guiding principle of regulating advanced technologies should be the safe deployment of the most advanced models while preserving American leadership in innovation,” summarized the executive.

This position aligns with a widely shared vision in Silicon Valley. Several companies fear that a patchwork of state regulations could hinder innovation and weaken American competitiveness in the global race for artificial intelligence, particularly against China. An opinion also shared by Donald Trump.

Although SB 3444 is itself a state-level safety bill, Caitlin Niedermeyer argued that such laws can be effective if they “promote harmonization with federal systems.”

An uncertain future

For now, the bill arrives at an uncertain moment. Despite several initiatives from Donald Trump’s administration, discussions in Congress are struggling to move forward.

Meanwhile, some states are moving forward on their own. California and New York have already adopted their own rules, notably to require more transparency from companies. Illinois seems to be moving in a different direction. This dynamic accentuates the fragmentation of the American regulatory landscape.

The text nonetheless draws strong reservations. Opponents believe that SB 3444 has little chance of being adopted in a state like Illinois, given its track record of strict technology regulation. Scott Wisor, policy director of the Secure AI Project, believes the initiative runs counter to local expectations and enjoys little political support. Last August, the state became the first in the country to adopt a law limiting the use of AI in mental health services.

According to him, a large majority of citizens oppose the bill.

Between the need for regulation, innovation imperatives, and growing concerns about systemic risks associated with AI, the debate remains wide open in the United States.