Within the next 3-10 years, AI companies could develop "superintelligent" AI: AI that is vastly smarter and more powerful than humans.
AI progress can occur rapidly.
AI systems are already being used to improve coding and engineering efforts. They also show signs of dangerous capabilities, such as hacking, weapons design, persuasion, and strategic planning.
Top AI companies admit that their current practices could be insufficient for handling anticipated future AI systems.
Solving safety research questions requires time, and unchecked competitive pressures could compel companies to prioritize profits over safety.
We need to prevent an AI arms race so that we have enough time to solve safety challenges before building catastrophically powerful systems.
That's why we're calling for:
Creating a plan, anticipating challenges, and executing a coordinated response save lives and protect communities
No one man, woman, or machine should have this much power over the future of AI
It’s time for Congress to act