Over the last year or so, we have all seen an uptick in coverage of regulating artificial intelligence (AI).
From Harvard Business Review to The Today Show, regulating AI has been discussed, analyzed, and reported on by numerous media outlets.
The coverage keeps circling the same questions: Is it good? Is it bad? What should we do, and when?
Meanwhile, AI systems have already infiltrated much of our lives. The news we receive, the movies we watch, the music we hear, the clothes we buy, the jobs we take, the friends we make, and even the people we marry are increasingly determined by recommendation algorithms.
And by now, you have probably experimented with the latest batch of AI systems: chatbot assistants and synthetic media. Whether it is GPT-3.5 (or GPT-4 Turbo), Microsoft Copilot, or one of the many other AI tools out there, a cognitive dissonance is playing out in real time. The AI technology available now is often fun and productive, but the models' accelerating sophistication should also concern Congress.
We know the stories. College students use AI to write their essays, and that’s not all. From marketing gurus auto-filling emails to federal lawmakers drafting legislation, Americans are using AI to enhance their work and play. And in some cases, that’s lovely. It’s a great tool.
But we need to be careful, because AI will only continue to grow more capable from here. If the last wave was recommender systems and the current wave is chatbots, then the coming wave appears to be AI agents that can use tools and act more autonomously. Within this decade, a future wave—which would be more accurate to call a tsunami—could bring AI systems that outcompete and replace almost all human brainpower in the workforce.
One serious concern with these waves of AI is not necessarily the rise of Skynet or killer robots, but simply the ongoing reality of artificial intelligence seeping into our ways of living and thinking. Rather than bioweapons and cyberattacks, think about extraordinarily addictive content feeds and user experiences, available to anyone who taps the right spot on their screen (or headset).
Getting lazy and settling into a habit of abdicating autonomy to your computer or smartphone may be easy and enjoyable, but it can subtly undermine your ability to own your actions and decisions.
And as short-term pleasures become increasingly on demand, what will consumers spend their time doing? Will people seek imperfect humans who challenge their views, or will they opt to speak with acquiescent chatbots? Will they marry and raise a family, or binge on fleeting amusements? And who will have the disposition to carefully steer humanity’s future in a positive direction, through thinking, working and, dare we say, voting? It’s hard to tell, because the unprecedented convenience of dopamine might become exceptionally hard to refuse.
This threat model is not Terminator; it’s the humans on the spaceship in WALL-E. For the uninitiated, WALL-E depicts a society in which humans have essentially become living lumps who vegetate on a spaceship, do little more than watch screens because technology does everything for them, and are okay with that.
Today, these sci-fi scenarios are becoming increasingly plausible. That is why our friends in Washington are trying to figure out what to do about AI.
Presently, dozens of bills in the House and Senate attempt to address AI from every angle. But they are disconnected and piecemeal, making it hard to pass anything, especially in a historic and highly competitive election year.
Meanwhile, the European Union has already made great strides toward AI regulation, moving beyond guidelines and closing in on binding rules.
However, EU policies are not enough to govern the AI technology American companies are creating. The onus is on Congress. Congress has acknowledged as much by creating task forces and introducing legislation, but the current crop of elected officials on Capitol Hill is moving at a pace that suggests they are content to kick the can down the road. At some point, America needs enacted legislation to keep its homegrown companies in line as they continue to innovate.
America has already missed the mark on social media, and lawmakers acknowledge this mistake. There is a reason lawmakers are hauling in the CEOs of Big Tech (think: Meta, X, Snap, and Discord) for hearings to address the societal damage and to craft legislation to repair how badly we got it wrong. With AI, America cannot afford to let Congress mess up again.
At the end of the day, we are still conducting studies on how social media and screen time harm individual and societal wellness. Meanwhile, our opportunity to avoid repeating those mistakes with AI is slipping through our fingers. Congress needs to advance legislation that protects our autonomy and ensures AI remains a tool benefiting all of us without degrading our lives, especially those of our children, who will inherit the AI world that’s upon us. It is incumbent on Congress not to kick the proverbial AI policy can down the road, but to have the tenacity to enact real, enforceable legislation that mitigates the potential harms of AI this legislative session.
AI is a helpful tool and cool to use, but Congress needs to step up and advance legislation that walks the fine line between preserving AI’s benefits and preserving our agency.
It’s time for Congress to act.