Bolstering government so we can manage powerful AI.

AI will be incredibly transformative. We’re not prepared for its risks. We’re working with policymakers to solve this problem.

Our work

Advising Policymakers

We’re working with Congress and federal agencies to help them understand advanced AI development and effectively prepare for it. We create resources, host events, and connect policymakers with the stakeholders they need to hear from.

Developing Solutions

We don't just talk about AI risk; we develop and advocate for solutions. We share policy proposals, draft model legislation, and give feedback on others' policies. This work is collaborative and iterative: we draw on our expert network of leading researchers and practitioners to design policy solutions that are both robust and practical.

Hickenlooper on AI Auditing Standards

Qualified third parties should audit AI systems and verify their compliance with federal laws and regulations

June 13, 2024
Learn More

Apple Intelligence: Revolutionizing the User Experience While Failing to Confront AI's Inherent Risks

We hope that at their next product launch, Apple will address AI safety

June 11, 2024
Learn More

Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously

Aschenbrenner argues that AI systems will improve rapidly

June 4, 2024
Learn More
View our policy work

Our priorities

Our policy mission is simple: require safe AI.

To ensure powerful AI is safe, we need effective governance. That’s why our policy recommendations focus on ensuring the government has enough:

  • Visibility and expertise to understand AI development
  • Adeptness and authority to respond to rapidly evolving risks
  • Infrastructure to support developers in innovating safely

As AI grows more capable, so do its risks, and we must prepare governance now to keep pace. Our work falls into three priority areas:

  • Build government capacity
  • Safeguard development
  • Mitigate extreme risk

Frequently asked questions

With AI advancing rapidly, we urgently need to develop the government’s capacity to identify and respond to AI's national security risks.


Who makes up the CAIP team?

What is CAIP's mission?

What are CAIP’s funding sources and affiliations?

How can I get involved?