Welcome to the Center for AI Policy! We're working with Congress and federal agencies to help them understand advanced AI development and effectively prepare for the catastrophic risks that AI could pose. Below is our model legislation, the Responsible Advanced Artificial Intelligence Act of 2024 (RAAIA). The model legislation contains several key policies to require that AI be developed safely, including permitting, hardware monitoring, civil liability reform, a dedicated government office, and emergency powers. This model legislation corresponds to Line of Effort 4 (LOE4) in Gladstone AI's Action Plan.
The Center for AI Policy appreciates Gladstone AI's work in reaching out to over 45 government offices, non-profits, and AI labs and sharing what they learned in this comprehensive and thoughtful plan. While we do not necessarily agree with every suggestion made in the 284-page report, on the whole we are proud to be working alongside Gladstone AI to promote AI safety.
We hope that Congressional offices and others interested in AI policy will be able to use part or all of this model legislation to inform their approach to AI safety. We are available to meet with Congress to explain the reasoning behind these policies and to help adapt portions of the model legislation to meet each office's needs -- please contact us at info@aipolicy.us to learn more. Together, we can create a world where AI is safe enough that we can enjoy its benefits without undermining humanity's future.
Read the model legislation here.