Policy Proposals
The government needs the capacity to safeguard AI development. We're calling for legislation that would build that capacity.
Recommendations
- Develop the government’s capacity to evaluate and forecast AI capabilities.
- Fund NIST (the National Institute of Standards and Technology) and BIS (the Bureau of Industry and Security).
- Require high-risk AI developers to apply for permits and follow safety standards.
- Hold high-risk AI developers strictly liable for severe harms.
- Empower regulators to pause AI projects if they identify a clear emergency.
Why We Need These Policies
- Many experts believe that AI poses catastrophic risks.
- Preparing for these risks is urgent.
  - Experts believe that AI systems capable of developing biological weapons may be only 2-3 years away, and that AI systems vastly more powerful than humans may be less than a decade away.
  - We run severe risks if we wait to react until we can already see catastrophically dangerous AI.
- Companies and governments aren’t prepared for an AI emergency.
- Our policies are aimed at solving these two problems:
  - Companies need to prioritize safety and security.
  - The government needs the capacity to rapidly identify and respond to catastrophic AI risks.