Why AI policy
This decade, AI could be powerful enough to cause global catastrophes
- Within the next 3-10 years, AI projects could develop superintelligent AI: AI that is vastly smarter and more powerful than humans.
- OpenAI: “While superintelligence seems far off now, we believe it could arrive this decade.”
- Progress towards superintelligence has been rapid.
- In 2011, the best language models produced only gibberish.
- In 2019, GPT-2 became one of the first models to consistently write coherent sentences.
- In 2023, GPT-4 outperformed ~90% of test takers on the SAT.
- For more detailed analyses on AI progress, we recommend Epoch AI’s literature review, the biological anchors report, and the compute-centric takeoff speeds framework.
- AI systems are already being used to improve coding and engineering efforts. They also show signs of dangerous capabilities, such as hacking, weapons design, persuasion, and strategic planning.
- Experts warn AI systems will soon be able to engineer pandemics, orchestrate novel cyberattacks, and disrupt critical infrastructure.
- AI companies themselves warn that "the vast power of superintelligence… could lead to the disempowerment of humanity or even human extinction."
Developers don’t know how they will control very powerful AI.
- Top AI companies admit that no one knows how to control powerful AI systems.
- This is both a safety and a security issue.
- AI systems could be misused to cause grievous harm.
- AI systems themselves could also get out of developers’ control.
- Solving safety research questions requires time.
- Yet unchecked competitive pressures compel companies to prioritize profits over safety.
- We need to prevent an AI arms race so that we can solve safety before building catastrophically powerful systems.
The government needs to be able to safeguard AI development.
- The government needs the capacity to rapidly identify and respond to AI risks. This requires:
- More visibility into frontier AI development, along with greater technical expertise.
- Clear mechanisms to halt unsafe development in case of an emergency.
- Better incentives for developers to prioritize safety from the outset.
- This is why we’re calling for:
- Monitoring of the advanced hardware used to train new AIs. If we know where that hardware is located, we will be more prepared for AI-related emergencies.
- Monitoring of frontier AI development to give the government transparency into the development of these systems.
- Licensing of frontier AI development, to incentivize AI safety research and allow for swift intervention in hazardous situations.
- Strict liability for severe harms caused by AI systems, promoting accountability and improving incentives.
- See more here.
Suggested readings
- NAIAC public comment (2023-11-08): CAIP's comment to the National AI Advisory Committee
- Strengths of Hawley and Blumenthal's framework (2023-10-24): AI safety legislation that America desperately needs
- Takeaways from July 25 Senate Judiciary Hearing (2023-08-04): Our takeaways from the July 25 Senate Judiciary Hearing