Strengths of Hawley and Blumenthal's Framework

October 24, 2023

The Blumenthal-Hawley Framework Outlines AI Safety Legislation that America Desperately Needs

Executive Summary

  • Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a bipartisan framework on AI legislation.
  • Microsoft President Brad Smith believes “this framework is a strong and positive step towards effectively regulating artificial intelligence.”
  • The Center for AI Policy (CAIP) seconds this positive sentiment.
  • We wholeheartedly support the following elements of the framework and urge lawmakers to codify them:
    • issuing licenses for advanced general-purpose AI models like GPT-4,
    • requiring developers to uphold common-sense safety practices,
    • obtaining assessments of the AI landscape from an oversight body,
    • ensuring developers assume liability for harms from their systems,
    • establishing controls against the unintended proliferation of advanced AI.
  • In this piece, we explain why these five provisions are necessary, and suggest directions for further progress.

1) Licensing Advanced General-Purpose AI

What the Framework Does

The framework’s first proposal is to “establish a licensing regime administered by an independent oversight body.” When developing “sophisticated general-purpose AI models (e.g., GPT-4),” companies would have to apply for and receive a license.

Why This is Needed

Advanced general-purpose AI models are already developing dangerous capabilities in scaling spear phishing campaigns, assisting non-experts in causing pandemics, and generating cunning strategies for mass murder. Further, experts anticipate that the “frontier” of AI capabilities will continue to expand at a rapid pace, and leading AI companies struggle to predict when new capabilities will emerge.

Given the unpredictable nature of forthcoming AI capabilities and the risks already observed, risk management cannot be optional. Without oversight, the incoming whirlwind of hazards poses a grave threat to American lives and global security.

Directions for Further Progress

Legislative text codifying the Blumenthal-Hawley framework must include specific criteria for the high-risk AI systems that need a license. These criteria should be developed in consultation with diverse stakeholders. As a starting point for these conversations, CAIP believes that systems surpassing any of the following thresholds should qualify as high risk:

The oversight body should adjust these thresholds over time, as it gains better knowledge of the risks and architectures of systems that surpass GPT-4. Importantly, the thresholds must adapt to account for improvements in the efficiency of learning algorithms: what matters is not just the number of computations an AI model uses, but the capability it derives from those computations. One possible approach comes from Anthropic, a leading AI lab, whose Responsible Scaling Policy (RSP) measures models in terms of “effective compute.”
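To make the idea of an “effective compute” threshold concrete, the following is a minimal sketch of how an oversight body might apply one. The threshold value, the efficiency multiplier, and the function names are illustrative assumptions on our part, not figures drawn from the framework or from Anthropic’s RSP.

    # Illustrative sketch only: the threshold and multiplier below are hypothetical
    # placeholders, not values from the framework or any lab's scaling policy.

    # Hypothetical licensing threshold, expressed in "effective" training FLOP.
    EFFECTIVE_COMPUTE_THRESHOLD_FLOP = 1e26


    def effective_compute(raw_training_flop: float, efficiency_multiplier: float) -> float:
        """Scale raw training compute by an estimated algorithmic-efficiency gain.

        A model trained with more efficient learning algorithms extracts more
        capability per FLOP, so its "effective" compute exceeds its raw compute.
        """
        return raw_training_flop * efficiency_multiplier


    def requires_license(raw_training_flop: float, efficiency_multiplier: float) -> bool:
        """Return True if a training run crosses the (hypothetical) high-risk threshold."""
        return effective_compute(raw_training_flop, efficiency_multiplier) >= EFFECTIVE_COMPUTE_THRESHOLD_FLOP


    # Example: 2e25 raw FLOP with algorithms estimated to be 10x more efficient than a
    # reference baseline counts as 2e26 effective FLOP, crossing the placeholder threshold.
    print(requires_license(2e25, 10.0))  # True under these illustrative numbers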

The body should also establish a “fast-track” exemption process to swiftly approve models that clearly pose no catastrophic risks, such as AI for autonomous transportation, demand forecasting, and fraud detection.

2) Requiring Safety Practices

What the Framework Does

The framework outlines several requirements for AI systems to receive a license, including programs for “risk management, pre-deployment testing, data governance, and adverse incident reporting.”

Why This is Needed

CAIP believes that runaway AI advancement could pose catastrophic risks within the next 3 to 10 years. Threats to national and global security may emerge from malicious use, rogue AI systems, structural risks, and black swans. Worryingly, within the wide-ranging spectrum of AI hazards, many experts warn of human extinction as a possibility.

Encouragingly, there are numerous best practices for reducing these risks and existing harms, including risk assessments, model evaluations, and red teaming. Government action is needed to ensure wide adoption of such measures, and to incentivize pressing work on unsolved problems in technical AI safety research.

Directions for Further Progress

The following three topics warrant particular attention when licensing powerful AI models. First, technical safeguards. The licensing body should consider the engineering techniques used to enhance safety, the effectiveness of these guardrails, and any significant limitations. This is especially important because it remains uncertain whether engineers will find methods for controlling future models: OpenAI believes that “our current alignment techniques will not scale to superintelligence.”

Second, information security. The licensing body should evaluate the information and model components that the developer intends to share, the intended audience for these assets, and the measures taken to prevent unauthorized access. Possible questions are available in Senators Hawley and Blumenthal’s letter to Meta after the company’s AI model leaked online.

Third, assessments of dangerous capabilities. The licensing body should understand the ways the model is able to cause harm, the strategies the developer used to identify these risks, and the model’s overall “controllability.” For instance, developers should supply analyses of their model’s competencies in malware generation, bioweapon design, autonomous replication, strategic deception, and more.
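As a purely illustrative sketch of what a structured dangerous-capability submission might look like, the schema below mirrors the capability areas named above; the field names and example entries are our assumptions, not a format specified by the framework.

    from dataclasses import dataclass, field

    # Illustrative schema only: the fields and categories are hypothetical, chosen to
    # mirror the capability areas discussed above, not a format from the framework.

    @dataclass
    class CapabilityAssessment:
        """One dangerous-capability evaluation submitted with a license application."""
        capability: str          # e.g. "malware generation", "strategic deception"
        evaluation_method: str   # how the developer probed for the capability
        observed_risk: str       # what the model could and could not do
        mitigations: list[str] = field(default_factory=list)  # guardrails applied and their limits


    # Example entries (hypothetical) covering two of the capability areas named in the text.
    submission = [
        CapabilityAssessment(
            capability="bioweapon design",
            evaluation_method="expert red teaming against biosecurity question sets",
            observed_risk="model refused most requests; partial uplift on dual-use questions",
            mitigations=["refusal training", "output filtering"],
        ),
        CapabilityAssessment(
            capability="autonomous replication",
            evaluation_method="scaffolded agent evaluations in a sandboxed environment",
            observed_risk="no successful end-to-end replication observed",
        ),
    ]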

3) Assessing the AI Landscape

What the Framework Does

The framework calls for an AI oversight body that can “monitor and report on technological developments and economic impacts of AI.”

Why This is Needed

To effectively regulate AI, the government needs to understand the technology—since AI is evolving rapidly, there is a consistent need for up-to-date analyses. One benefit of these analyses is progress towards “smoke alarms” for AI. Government experts should be actively searching for potential risks, and strengthening America’s response capacity for AI-related emergencies.

To succeed in these goals, the government needs a regulatory body to serve as a trusted source of crucial information on AI.

Directions for Further Progress

The oversight entity should liaise with similar governmental organizations in other nations, such as the UK’s Frontier AI Taskforce, to exchange relevant information.

As RAND has argued, one metric that the entity should track is the distribution and flow of high-performance AI chips, which are critical for creating leading systems like GPT-4. By watching where these chips go, the government can understand which actors can train powerful AI models.

As one example, the best chip in the world for large-scale AI training retails for tens of thousands of dollars apiece, and wealthy AI companies are stockpiling thousands of units to build supercomputers. Parties involved in the trade of such powerful, specialized chips should be required to report transfers of ownership, enabling the government to maintain an accurate registry of high-performance AI chips.
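As an illustration of how such transfer reporting could feed a registry, here is a minimal sketch; every name and value in it is hypothetical and assumed for the example, not drawn from the framework or from RAND’s proposal.

    from collections import defaultdict
    from dataclasses import dataclass

    # Minimal sketch of a chip-ownership registry built from reported transfers.
    # All names, fields, and values are illustrative assumptions for this example.

    @dataclass(frozen=True)
    class ChipTransfer:
        chip_model: str    # the high-performance accelerator SKU being transferred
        quantity: int      # number of units changing hands
        seller: str
        buyer: str
        report_date: str   # ISO date on which the transfer was reported


    def update_registry(holdings: dict, transfer: ChipTransfer) -> None:
        """Move the reported chips from the seller's holdings to the buyer's."""
        holdings[transfer.seller] -= transfer.quantity
        holdings[transfer.buyer] += transfer.quantity


    # Example: a reported sale of 1,000 accelerators from a vendor to an AI lab.
    holdings = defaultdict(int, {"ExampleVendor": 10_000})
    update_registry(holdings, ChipTransfer("ExampleAccelerator", 1_000, "ExampleVendor", "ExampleLab", "2023-10-01"))
    print(holdings["ExampleLab"])  # 1000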

4) Ensuring Liability for Harms

What the Framework Does

The framework calls for America to clearly define how AI companies will be held accountable for the harms they cause. “Congress should ensure that AI companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms.”

Why This is Needed

When developers face clearer consequences for reckless decisions, they have an increased incentive to ensure safety and security. Besides reducing risks, this encourages innovation in technical AI safety. The financial stakes of potential damages will compel companies to research and implement safety measures.

Moreover, AI liability is the will of the American people. 73% of voters believe that “AI companies should be liable for harms from technologies they create,” according to a poll last month. Only 11% think companies should not be held liable.

Directions for Further Progress

CAIP endorses joint and several liability for tangible harm caused by advanced AI systems. When a malicious actor misuses an AI system but lacks the assets to pay for the resulting harm, the original developer may otherwise escape financial penalty. Implementing joint and several liability allows for suing the developer as well, increasing the likelihood of repercussions for negligence and thereby deterring irresponsible behavior.

For similar reasons, CAIP suggests adopting an explicit duty of care for developers of advanced AI. Most states currently apply a complex multi-factor test when deciding whether a company owes a duty to the public to be reasonably cautious, and the results of these multi-factor tests are difficult to predict. To increase the certainty of penalties for carelessness, legislation should specify that developers of advanced, general-purpose AI owe the public a duty of reasonable care and must pay damages when a breach of that duty causes harm.

Finally, CAIP recommends a strict liability regime for AI developers whose models cause over $100 million in tangible damages, such as wrongful death, physical injury, or property damage. Most civil lawsuits ask whether the defendant followed the “standard of care,” i.e., whether the defendant followed the textbook-approved procedures or detailed best practices for safely carrying out their work. But AI technology is changing at an exceptional rate, and novel AI systems often exhibit qualitatively different behavior; therefore, any sufficiently specific standard of care is vulnerable to growing obsolete within a few years. Strict liability solves this problem by removing the need to prove negligence. If an AI system causes over $100 million in damages, it is fair to assume that the developer was careless.

5) Controlling AI Proliferation

What the Framework Does

The framework calls for “export controls, sanctions, and other legal restrictions to limit the transfer of advanced AI models, hardware and related equipment” to US adversaries.

Why This is Needed

By default, AI models and research will spread rapidly around the globe. Due to the serious risks of AI advancement, especially in the hands of adversary nations, the US must take deliberate action to control the proliferation and growth of boundary-pushing AI capabilities.

If breakthroughs remain heavily concentrated in America, then it becomes easier for advanced general-purpose AI developers to avoid reckless racing. Private labs need to communicate openly about safety challenges and cooperate if there is a credible risk of large-scale harm, and this cooperation will be significantly more difficult if scientists in China, Iran, Russia, and so forth can quickly replicate every American breakthrough.

Directions for Further Progress

The Bureau of Industry and Security (BIS) is critical in maintaining America’s AI dominance: it is using export controls to cut China’s access to advanced AI chips. However, BIS has inadequate resources for robustly enforcing these controls. To prevent China from smuggling AI chips, a targeted increase in BIS funding is essential.

Separately, it is imperative to enforce robust information security requirements for cutting-edge AI developers, to defend America’s trailblazing innovations against unauthorized access by foreign adversaries and malicious actors.

Foreign tensions must not blind the US to the global-scale challenges of AI, and the urgent need for multilateral dialogues like the UK AI Safety Summit. To lay the building blocks for global cooperation, the US should lead the creation of a “Global AI Security Forum” (GAISF), and begin planning international AI institutions. America must lead the world in promoting AI safety—if the US barrels ahead incautiously and unilaterally, then it risks losing an AI race not to adversaries, but to AI itself.

Conclusion

CAIP commends Senators Blumenthal and Hawley for their AI policy framework and underscores our alignment on five essential components: licensing powerful models, mandating safety measures, tracking AI progress, holding developers accountable, and managing AI proliferation. Building upon this robust foundation, CAIP offers specific, targeted recommendations to further elevate the framework's efficacy. As the next step, CAIP calls on Congress to prioritize passing this framework into law.