Influential voices like NVIDIA and IBM have suggested regulating AI by specific use case, asking existing regulators to oversee AI in their own industries: airline regulators tackling airline AI, medical regulators tackling medical AI, and so on. This approach fails to address the risks unique to new general-purpose AIs (GPAIs) like GPT-4: misuse across a broad array of use cases, unprecedentedly rapid progress, and rogue systems that evade control.
To properly address these risks and keep the American public safe, we need to establish a central regulator.
One promising framework for a central regulator is a tiered approach that categorizes models by indicators of capability and scales the regulatory burden accordingly.
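To make the tiered idea concrete, here is a minimal sketch of what such a classification could look like in code. Everything in it is a hypothetical illustration: the indicator (training compute), the tier names, and the numeric thresholds are assumptions, not proposals from the piece; a real regulator would choose its own indicators and revise cutoffs as capabilities evolve.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    """Regulatory tiers, ordered from lightest to heaviest burden."""
    MINIMAL = 1   # narrow models: existing sector-specific rules suffice
    STANDARD = 2  # capable GPAIs: e.g., registration and pre-deployment evaluations
    FRONTIER = 3  # the most capable GPAIs: e.g., licensing and ongoing audits


@dataclass
class ModelProfile:
    """Capability indicators used to place a model in a tier."""
    training_flops: float    # total training compute, in FLOPs
    is_general_purpose: bool


def assign_tier(profile: ModelProfile) -> Tier:
    """Map capability indicators to a regulatory tier.

    The thresholds below are placeholders for illustration only.
    """
    if not profile.is_general_purpose:
        return Tier.MINIMAL
    if profile.training_flops >= 1e26:  # hypothetical frontier cutoff
        return Tier.FRONTIER
    if profile.training_flops >= 1e24:  # hypothetical GPAI cutoff
        return Tier.STANDARD
    return Tier.MINIMAL


# Example: a very large general-purpose model lands in the frontier tier.
print(assign_tier(ModelProfile(training_flops=3e26, is_general_purpose=True)))
```

The design choice the sketch captures is that burden scales with measured capability rather than with industry of deployment, which is what distinguishes a tiered central regulator from the use-case approach.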
Regulating by use case made sense as recently as five years ago, when essentially all AIs were tailored to narrow circumstances and unable to accomplish tasks outside them. When AIs were narrowly tailored, we could manage AI risk well by identifying the riskiest use cases and holding AI to higher standards in those domains. However, today's GPAIs are importantly different from the narrowly tailored AIs of the past and pose three unique challenges that only general-purpose regulation can account for.