The federal government needs the capacity to safeguard AI development. We're calling for a Responsible AI Act that would build this capacity.
- Establish a federal authority to regulate frontier AI development.
- Track the distribution and flow of high-performance AI hardware.
- Require frontier AI developers to apply for a license and follow safety standards.
- Hold frontier AI developers strictly liable for severe harms (over $100M).
- Give the federal authority the power to pause frontier AI development in case of an emergency.
Why we need these policies
- Experts believe AI poses catastrophic risks.
- Managing these risks is urgent.
  - Experts believe that systems capable of developing biological weapons are 2-3 years away, and AI systems that are vastly more powerful than humans may be less than a decade away.
- Companies and governments aren’t prepared for an AI emergency.
- Our policies are aimed at solving two problems:
  - Companies don't prioritize safety and security.
  - The government lacks the capacity to rapidly identify and respond to AI risks.
How these policies would work
- Congress would establish a federal authority to regulate frontier AI development.
- The authority would license the following activities:
  - stockpiling a large cluster of AI hardware,
  - developing or deploying a new AI system on the frontier of research, and
  - accessing the model weights of frontier AI systems.
- Developers would submit evidence to a group of neutral experts employed by the authority. The experts would evaluate potential risks and ensure those risks are kept below an acceptable threshold.
- To promote innovation, the authority would provide “fast track” approval for systems that clearly pose no major security risks.
- If the authority identified a clear AI emergency, it would have the capacity to halt unsafe AI development.
What we mean by frontier AI
- We’re focused on AI systems that are powerful enough to pose substantial risks to public safety or international security. No short definition can perfectly capture all the systems – and only the systems – that could pose these risks. As a starting point, we recommend a definition of frontier AI that includes machine learning models that meet any one of the following criteria:
  - Computational resources used in training: >10^24 FLOP
  - Parameter count: >80 billion
  - Cost of training: >$50 million
  - Benchmark performance: >70% on MMLU or >1300 on the SAT
- Current examples include OpenAI’s GPT-4, Google’s PaLM 2, and Anthropic’s Claude 2. These models are at the frontier of 2023 AI research, hence the term “frontier” AI.
- Currently, frontier AI accounts for only a small fraction of AI’s impact on the economy: most applications of AI either don’t use frontier AI or would be rapidly exempted through the fast-track process. As with any new technology, the landscape of frontier AI is rapidly changing, and the authority would update its definition of frontier AI as that landscape evolves.
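The threshold definition above can be read as a simple "any one criterion" rule. As a minimal sketch, the check might look like the following Python; the class and field names are hypothetical, and the thresholds are the illustrative values from the proposal, not a statutory test:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    training_flop: float      # total training compute, in FLOP
    parameters: float         # parameter count
    training_cost_usd: float  # cost of training, in USD
    mmlu: float               # MMLU accuracy, 0.0-1.0
    sat: int                  # SAT score

# Illustrative thresholds from the proposed definition of frontier AI.
FLOP_THRESHOLD = 1e24     # >10^24 FLOP
PARAM_THRESHOLD = 80e9    # >80 billion parameters
COST_THRESHOLD = 50e6     # >$50 million
MMLU_THRESHOLD = 0.70     # >70% on MMLU
SAT_THRESHOLD = 1300      # >1300 on the SAT

def is_frontier(m: ModelProfile) -> bool:
    """A model qualifies as frontier AI if it meets ANY one criterion."""
    return (
        m.training_flop > FLOP_THRESHOLD
        or m.parameters > PARAM_THRESHOLD
        or m.training_cost_usd > COST_THRESHOLD
        or m.mmlu > MMLU_THRESHOLD
        or m.sat > SAT_THRESHOLD
    )
```

Because the criteria are disjunctive, a model that clears only one line (say, a very large but cheaply trained model) still falls inside the definition, which is why the proposal pairs it with a fast-track exemption process rather than relying on the thresholds alone.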