New Biden Administration order compels AI companies to share safety data

Oct. 31, 2023
NIST will set standards for the kinds of safety tests that companies must undertake before AI systems are released

President Joe Biden on Monday issued an executive order establishing standards for the safety, security and transparency of artificial intelligence systems, requiring the companies developing them to share the results of their safety tests with the U.S. government.

Companies that train so-called foundation models, which may pose national security and public health risks depending on how they are deployed, must disclose the safety test results to U.S. agencies, the order states.

Foundation models are AI systems that ingest large quantities of data and can then be deployed across a wide variety of applications. Many AI systems that use natural language processing, as well as those that suggest new chemical compounds, are built on foundation models.

Additionally, any company working on life science projects that receives federal funding must develop new standards to ensure that AI technologies are not used to engineer dangerous biological materials.

The National Institute of Standards and Technology will set standards for the kinds of safety tests that companies must conduct before AI systems are released. The Department of Homeland Security will apply those standards to systems used in critical infrastructure sectors and will establish an AI Safety and Security Board.

The Energy and Homeland Security departments will assess threats that AI systems pose to operators of critical infrastructure, including the energy, water and pipeline sectors.

Biden’s order draws on the president’s authority under the Defense Production Act, a law presidents have routinely used to direct U.S. companies for national security purposes. Former President Donald Trump invoked it during the COVID-19 pandemic to control exports of medical goods and increase production of critical supplies.

The executive order codifies voluntary commitments that top U.S. AI companies made to the White House in July. Amazon.com Inc., Anthropic, Google LLC, Inflection, Meta Platforms Inc., Microsoft Corp., and OpenAI Inc. pledged to develop the technologies in a “safe, secure, and transparent” manner.

Congress has been working to develop a legislative framework addressing the safety and security of AI systems. Senate Majority Leader Charles E. Schumer, D-N.Y., has held a series of briefings to better prepare lawmakers for legislating on the topic. Legislative aides have said proposed legislation could emerge by the end of the year.

In September, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., proposed a framework that included creating a new federal oversight agency for AI. Companies developing generative AI models like ChatGPT would be required to register with the oversight body, which would have the authority to issue licenses and to gather data on adverse incidents.

___

©2023 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com.

Distributed by Tribune Content Agency, LLC.