The US Has Committed to Spend Far Less Than Peers on AI Safety

Claudia Wilson, September 23, 2024

At September's House Committee on Science, Space, and Technology markup of AI legislation, Congresswoman Haley Stevens (D-MI) made a salient point: the US is punching below its weight when it comes to funding its AI Safety Institute (AISI). Compared with peers such as the UK, EU, Canada, and Singapore, the US has announced the least funding so far, with a maximum of $10 million authorized for FY24/25. Even annualized, that figure is middling at best. And these are not apples-to-apples comparisons: the US invests far more in AI than any other country with an AISI. In other words, the US's investment in safety simply doesn't match its investment in advancement.

Why does this matter? One of AISI's responsibilities is to advance a nascent field: AI safety evaluations. We often hear that it is too soon to introduce mandatory evaluations of AI because the methodology has yet to be perfected. But without sufficient resources and planning, the US AISI is unlikely to fulfill this mission.

The opportunity to deploy these capabilities is imminent. OpenAI and Anthropic have already agreed to let AISI evaluate their models. The Bureau of Industry and Security (BIS) has also issued a proposed rule on reporting red-teaming of advanced models, informed by AISI guidelines. Yet AISI is housed in a leaky building, and its funding is roughly a quarter of Tom Brady's annual salary at Fox.

Beyond leaving risks insufficiently mitigated, this lack of investment and strategy also impedes US leadership on international standards. If international leadership is something the Executive Branch and Congress genuinely care about, they should fund it accordingly.

Notes

1. Total funding announced so far, sourced from the International Center for Future Generations (ICFG) and converted into USD. France and Japan have yet to announce budgets for their safety institutes. Other countries, such as Korea and Australia, are planning to set up similar institutes.

2. Private sector investment figures sourced from the Stanford AI Index. Duration of AISI funding sourced from the UK Department for Science, Innovation, and Technology; ICFG; and the Infocomm Media Development Authority (IMDA). The duration of the EU's "initial budget" was not specified, so it is assumed to be 5 years. Total AISI funding sourced from ICFG and converted into USD.
