Within the next 3-10 years, AI companies could develop "superintelligent" AI: AI that is vastly smarter and more powerful than humans.
AI progress can occur rapidly.
AI systems are already being used to accelerate coding and engineering work. They also show early signs of dangerous capabilities, including hacking, weapons design, persuasion, and strategic planning.
Top AI companies admit that their current practices could be insufficient for handling anticipated future AI systems.
Solving open safety research questions takes time, and unchecked competitive pressure could push companies to prioritize speed and profit over safety.
We need to prevent an AI arms race so that we have enough time to solve safety challenges before building catastrophically powerful systems.
That's why we’re calling for:
Why is there still inaction in Congress?
Unfortunately, not all AI agent applications are low-risk
Many voters prefer a careful, safety-focused approach to AI development over a strategy emphasizing speed and geopolitical competition