Policy Proposals

Governments need the capacity to safeguard AI development. We're calling for legislation that would build that capacity.

Recommendations

  • Develop the government’s capacity to evaluate and forecast AI capabilities.
  • Fund the National Institute of Standards and Technology (NIST) and the Bureau of Industry and Security (BIS).
  • Require high-risk AI developers to apply for permits and follow safety standards.
  • Hold high-risk AI developers strictly liable for severe harms.
  • Empower regulators to pause AI projects if they identify a clear emergency.

Why we need these policies

  • Many experts believe that AI poses catastrophic risks.
  • Preparing for these risks is urgent.
    • Experts believe that AI systems capable of developing biological weapons may be 2-3 years away, and AI systems vastly more powerful than humans may be less than a decade away.
    • Waiting until catastrophically dangerous AI is already visible before reacting would expose us to severe risks.
  • Companies and governments aren’t prepared for an AI emergency.
    • Companies admit they don’t know how they will reliably secure and control very powerful AI.
    • The government needs more talent and better mechanisms to identify and rapidly respond to AI-related crises.
  • Our policies are aimed at solving these two problems:
    • Companies need incentives to prioritize safety and security.
    • The government needs the capacity to rapidly identify and respond to catastrophic AI risks.