In her Fortune Eye on AI newsletter, Sharon Goldman rightly asks: "In an era in which the power to shape AI may ultimately be concentrated in the hands of the wealthiest tech companies…who gets to decide whether and what kinds of AI systems are safe and secure?"
Right now, the answer is: Big Tech gets to decide. Nobody else has a veto.
Despite all the panels and committees, like the Department of Homeland Security’s new AI Safety and Security Board, nobody in the US government actually has the power to tell tech companies that they’re not allowed to release a new AI because it isn’t safe enough. Yes, it’s a problem that a Board designed to advise the government on protecting essential infrastructure includes ten AI companies and zero utility companies – but it’s an even bigger problem that Congress isn’t on track to pass a comprehensive AI safety bill.
This abdication of authority doesn’t line up with what American voters want. Research released by S&P Global Market Intelligence found that while consumers acknowledged the practical applications of advanced AI tools, they also harbored significant worries about AI displacing jobs, enabling fraud, being misused, and even developing sentience. According to an April 2024 survey by SEO expert Mark Webster, 79% of Americans want strict AI regulation.
The tech giants leading the AI race – Google, Facebook, Amazon, Microsoft, OpenAI, and IBM – have consistently prioritized growth and market dominance over social responsibility. They have been embroiled in scandals ranging from massive data breaches to election disinformation. Can we really expect them to suddenly become responsible stewards of a technology as powerful as AI? Without rigorous external oversight and enforcement, corporate AI ethics guidelines will remain little more than techwashing.
AI has the potential to fundamentally reshape our economy, governance, and social fabric. From facial recognition and hiring algorithms to managing our nation’s electrical grid, AI systems are increasingly being deployed in high-stakes domains with profound impacts on people’s lives. Yet the companies developing these systems operate without transparency, accountability, or democratic oversight.
We’ve already seen the dangers of self-regulation in other industries. Wall Street’s reckless pursuit of profits led to the 2008 financial crisis. Big Pharma’s influence over drug approval helped fuel the opioid epidemic. And Big Oil’s denial of climate science has delayed progress on global warming. Now, with the exponential rise of AI, we risk repeating these mistakes on an even grander scale.
So what is the path forward? First and foremost, we need robust government regulation of AI systems, with clear standards for safety, fairness, transparency, and accountability. Last month, CAIP released the "Responsible Advanced Artificial Intelligence Act of 2024." This sweeping model legislation establishes a comprehensive framework for regulating advanced AI systems, protecting public safety while fostering ethically responsible innovation.
CAIP’s model legislation creates a safety net for the digital age, ensuring that exciting advances in AI are matched by safeguards commensurate with the risks they pose.
If you don’t like our legislation, call your Senator and urge them to flesh out an alternative based on one of the excellent AI safety frameworks that have been circulating for months. Let’s get a bill introduced to carry out the licensing regime in the Hawley-Blumenthal framework, the cybersecurity safeguards in the Romney-Reed-Moran-King framework, or Senator Hassan’s framework for fundamentally safe AI. We need action on binding safety requirements, not just more panels.
The stakes could not be higher. If we do not regulate AI, we will cede control over AI to Big Tech. Within the next decade, there will be massive cyberattacks, there will be convincing video deepfakes, there will be reckless bioengineering, there will be autonomous swarms of armed drones, and there will be AI con artists reproducing freely on the Internet and competing successfully for our money and our attention. Big Tech will wring its hands and talk about social responsibility, but it will not actually hold its unsafe products off the market unless we make that a legal requirement.
Let’s make it a legal requirement.
It’s time for Congress to act.