Takeaways From July 25 Senate Judiciary Hearing

August 4, 2023


On July 25, 2023, the Senate Judiciary Subcommittee on Privacy, Technology, and Law held a hearing titled “Oversight of AI: Principles for Regulation.” The Center for AI Policy commends the witnesses and committee members for their substantive discussion of existential risks from AI, as well as policy proposals intended to reduce those risks.

We have selected some quotes about global security risks from AI. The witnesses and committee members also discussed a variety of other topics, including concerns about data privacy, misinformation, and jobs. A recording of the full hearing is available on the Senate Judiciary Committee's website.

Senator Richard Blumenthal (D-CT)

“What I have heard again and again and again… is ‘scary’… An intelligence device out of control, autonomous, self-replicating, potentially creating diseases, pandemic-grade viruses or other kinds of evils purposely engineered by people or simply the result of mistakes, not malign intention.”

“You have provided objective, fact-based views on what the dangers are... potentially even human extinction.”

“I’ve come to the conclusion that we need some kind of regulatory agency.”

“A number of you have put the timeline at two years before we see some of the biological [dangers and] most severe dangers. It may be shorter because the pace of development is not only stunningly fast, it is also accelerated at a stunning pace because of the quantity of chips, the speed of chips, the effectiveness of algorithms.”

“Superhuman AI evokes for me artificial intelligence that could on its own develop a pandemic virus, on its own decide Joe Biden shouldn’t be our next president… And I think that argues for urgency.”

Senator Josh Hawley (R-MO)

“I’m less interested in the [AI] corporations’ profitability. In fact, I'm not interested in that at all. I’m interested in protecting the rights of American workers and American families and American consumers against these massive companies that threaten to become a total law unto themselves.”

“Will the Senate actually act? … We’ve had a lot of talk, but now is the time for action. And I think if the urgency of the new generative AI technology does not make that clear to folks, then you’ll never be convinced.”

Dario Amodei, CEO of Anthropic

“A straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks. We believe this represents a grave threat to U.S. national security.”

“New AI models should have to pass a rigorous battery of safety tests before they can be released to the public at all, including tests by third parties and national security experts in government.”

Yoshua Bengio, Recipient of the Turing Award

“Require licenses… and restrict AI systems with unacceptable levels of risk.”

“[We need] to bring expertise in national security, in bioweapons, chemical weapons, and AI people together. [These organizations] shouldn’t be for profit… We shouldn’t mix the objective of making money… with the objective of defending humanity against a potential rogue AI.”

“This research in AI and international security should be conducted with several highly secure and decentralized labs operating under multilateral oversight to mitigate an AI arms race… We must therefore allocate substantial additional resources to safeguard our future, at least as much as we are collectively globally investing in increasing the capabilities of AI.”

“Viruses, computer or biological viruses don’t see any border. So, we need to make sure there’s an international effort in terms of these safety measures. We need to agree with China on these safety measures... And we need to work with our allies on these countermeasures.”

Stuart Russell, Professor of Computer Science at UC Berkeley

“Alan Turing, the founder of Computer Science, warned in 1951 that once AI outstrips our feeble powers, we should have to expect the machines to take control. We have pretty much completely ignored this warning.”

“Systems that break the rules must be recalled from the market for anything from defaming real individuals to helping terrorists build biological weapons.”

“Now, developers may argue that preventing these behaviors is too hard because LLMs have no notion of truth and are just trying to help. This is no excuse. Eventually, and the sooner the better, I would say, we will develop forms of AI that are provably safe and beneficial… Until then, we need real regulation and a pervasive culture of safety.”

“I think there’s no doubt that we’re going to have to have an agency.”