Broadening AI Regulation

January 3, 2024

Executive Summary

Influential voices like NVIDIA and IBM have suggested regulating AI by use case, with existing regulators overseeing AI in their respective industries: airline regulators tackling airline AI, medical regulators tackling medical AI, and so on. This approach fails to address the unique risks inherent in new general-purpose AIs (GPAIs) like GPT-4, namely misuse across a broad array of use cases, unprecedentedly rapid progress, and rogue systems that evade control.

To properly address these risks and keep the American public safe, we need to establish a central regulator that will:

  • reduce government waste and needless redundancies,
  • bring leadership necessary for coordination,
  • facilitate effective, risk-focused, pre-deployment regulation,
  • introduce much-needed proactivity into AI regulation, and
  • account for novel AI capabilities that fall outside existing regulators.

One promising framework for a central regulator is a tiered approach that categorizes models according to indicators of their capabilities and scales regulatory burden accordingly.

What has changed?

Regulating by use case made sense as recently as five years ago, when essentially all AIs were tailored to narrow circumstances and unable to accomplish tasks outside them. When AIs were narrowly tailored, we could manage AI risk well by identifying the riskiest use cases and holding AI to higher standards in those domains. However, today's GPAIs are fundamentally different from the narrowly tailored AIs of the past, and they pose three unique challenges that must be accounted for with general-purpose regulation.

Read the piece in full here.