Why AI policy


This decade, AI could be powerful enough to cause global catastrophes

  • Within the next 3-10 years, AI projects could develop superintelligent AI: AI that is vastly smarter and more powerful than humans.
    • OpenAI: “While superintelligence seems far off now, we believe it could arrive this decade.”
  • Progress towards superintelligence has been rapid.
  • AI systems are already being used to improve coding and engineering efforts. They also show signs of dangerous capabilities, such as hacking, weapons design, persuasion, and strategic planning.
  • Experts warn AI systems will soon be able to engineer pandemics, orchestrate novel cyberattacks, and disrupt critical infrastructure.
  • AI companies themselves warn that "the vast power of superintelligence… could lead to the disempowerment of humanity or even human extinction."

Developers don’t know how they will control very powerful AI.

The government needs to be able to safeguard AI development.

  • The government needs the capacity to rapidly identify and respond to AI risks. This requires:
    • More visibility into frontier AI development, along with greater technical expertise.
    • Clear mechanisms to halt unsafe development in case of an emergency.
    • Better incentives for developers to prioritize safety from the outset.
  • This is why we’re calling for:
    • Monitoring of the advanced hardware used to train new AIs. If we know where that hardware is located, we will be more prepared for AI-related emergencies.
    • Monitoring of frontier AI development, to give the government transparency into how these systems are built.
    • Licensing of frontier AI development, to incentivize AI safety research and allow for swift intervention in hazardous situations.
    • Strict liability for severe harms caused by AI systems, promoting accountability and improving incentives.
    • See more here.