Release: Model Legislation to Ensure Safer and Responsible Advanced Artificial Intelligence

April 9, 2024

Center for AI Policy Releases Model Legislation to Ensure Safer and Responsible Advanced Artificial Intelligence

The "Responsible Advanced Artificial Intelligence Act of 2024" sets industry regulations and public safety standards.

WASHINGTON - April 9, 2024 - To ensure a future where artificial intelligence (AI) is safe for society, the Center for AI Policy (CAIP) today announced its proposal for the "Responsible Advanced Artificial Intelligence Act of 2024." This sweeping model legislation establishes a comprehensive framework for regulating advanced AI systems, championing public safety, and fostering technological innovation with a strong sense of ethical responsibility.

"This model legislation is creating a safety net for the digital age," said Jason Green-Lowe, Executive Director of CAIP, "to ensure that exciting advancements in AI are not overwhelmed by the risks they pose."

The "Responsible Advanced Artificial Intelligence Act of 2024" is model legislation that contains provisions for requiring that AI be developed safely, as well as requirements on permitting, hardware monitoring, civil liability reform, the formation of a dedicated federal government office, and instructions for emergency powers.

The key provisions of the model legislation include:

  1. Establishment of the Frontier Artificial Intelligence Systems Administration to regulate AI systems posing potential risks.
  2. Definitions of critical terms such as "frontier AI system," "general-purpose AI," and risk classification levels.
  3. Provisions for hardware monitoring, analysis, and reporting of AI systems.
  4. Civil and criminal liability measures for non-compliance or misuse of AI systems.
  5. Emergency powers for the administration to address imminent AI threats.
  6. Whistleblower protection measures for reporting concerns or violations.

The model legislation is intended to provide a regulatory framework for the responsible development and deployment of advanced AI systems, mitigating potential risks to public safety and national security while addressing ethical concerns.

"As leading AI developers have acknowledged, private AI companies lack the right incentives to address this risk fully," said Jason Green-Lowe, Executive Director of CAIP. "Therefore, for advanced AI development to be safe, federal legislation must be passed to monitor and regulate the use of the modern capabilities of frontier AI and, where necessary, the government must be prepared to intervene rapidly in an AI-related emergency."

Green-Lowe envisions a world where "AI is safe enough that we can enjoy its benefits without undermining humanity's future." The model legislation aims to mitigate potential risks while fostering an environment where technological innovation can flourish without compromising national security, public safety, or ethical standards. "CAIP is committed to collaborating with responsible stakeholders to develop effective legislation that governs the development and deployment of advanced AI systems," he added. "Our door is open."

Access the "Responsible Advanced Artificial Intelligence Act of 2024" text here, a one-page executive summary here, and a section-by-section explainer here.

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards. More @ aipolicy.us.