CAIP work

AI safety policy solutions and thought leadership

The Center for AI Policy is developing policy solutions and advocating to policymakers to ensure AI is safe. To learn more about our work, reach out to info@aipolicy.us.

Preparedness: Key to Weathering Tech Disasters

Opinion

Oct 10, 2024

Creating a plan, anticipating challenges, and executing a coordinated response saves lives and protects communities

Sam Altman’s Dangerous and Unquenchable Craving for Power

Opinion

Oct 9, 2024

No one man, woman, or machine should have this much power over the future of AI

CAIP Congratulates AI Safety Advocate on Winning the 2024 Nobel Prize in Physics

Press

Oct 8, 2024

The "godfather of AI" is concerned about the dangers of the technology he helped create

Comment on BIS Reporting Requirements for the Development of Advanced AI Models and Computing Clusters

Policy

Oct 8, 2024

CAIP supports these reporting requirements and urges Congress to explicitly authorize them

AI Alignment in Mitigating Risk: Frameworks for Benchmarking and Improvement

Research

Oct 7, 2024

Policymakers and engineers should prioritize alignment innovation as AI rapidly develops

Healthcare Privacy in the Age of AI: Guidelines and Recommendations

Research

Oct 4, 2024

The rapid growth of AI creates areas of concern in the field of data privacy, particularly for healthcare data

Politico: Gavin Newsom and Silicon Valley Quash AI Safety Effort

Press

Oct 2, 2024

CAIP was featured in Politico's coverage of the SB 1047 veto decision

CAIP Condemns Governor Newsom’s Veto of Critical AI Regulation Bill

Press

Sep 30, 2024

California Governor Gavin Newsom has vetoed SB 1047, a crucial bill to ensure the responsible development and deployment of AI

Memo: Walz-Vance Debate and the Hope for Hearing AI Policy Positions

Opinion

Sep 30, 2024

CAIP hopes that Walz and Vance will tell their fellow Americans where they stand on AI safety legislation

Ignoring AI Threats Doesn’t Make Them Go Away

Opinion

Sep 30, 2024

On a bipartisan basis, the Senate Select Committee on Intelligence publicly and emphatically implored the American people and the private sector to remain vigilant against election interference

Decoding AI Decision-Making: New Insights and Policy Approaches

Research

Sep 26, 2024

An overview of AI explainability concepts and techniques, along with recommendations for reasonable policies to mitigate risk while maximizing the benefits of these powerful technologies

AI’s Lobbying Surge and Public Safety

Opinion

Sep 26, 2024

OpenAI's lobbying has expanded, and it's crowding out dialogue on safety.

There's No Middle Ground for Gov. Newsom on AI Safety

Opinion

Sep 24, 2024

If Governor Newsom cares about AI safety, he'll sign SB 1047

The US Has Committed to Spend Far Less Than Peers on AI Safety

Opinion

Sep 23, 2024

The US is punching below its weight when it comes to funding its AI Safety Institute (AISI)

Reflections on AI in the Big Apple

Opinion

Sep 20, 2024

CAIP traveled to New York City to hear what local AI professionals have to say about AI's risks and rewards

OpenAI's Latest Threats Make a Mockery of Its Claims to Openness

Opinion

Sep 19, 2024

Who is vouching for the safety of OpenAI’s most advanced AI system?

OpenAI Unhobbles o1, Epitomizing the Relentless Pace of AI Progress

Opinion

Sep 18, 2024

Engineers continue discovering techniques that boost AI performance after the main training phase

Scripps News Morning Rush Interview - September 2024

Press

Sep 17, 2024

Jason Green-Lowe joined the Morning Rush TV show to discuss AI policy

AP Poll Shows Americans’ Ongoing Skepticism of AI

Opinion

Sep 17, 2024

A new poll shows once again that the American public is profoundly skeptical of AI and worried about its risks

CAIP Comment on Managing Misuse Risk for Dual-Use Foundation Models

Policy

Sep 16, 2024

Response to the initial public draft of NIST's guidelines on misuse risk

Oprah’s New "Favorite Thing": Safe AI

Opinion

Sep 13, 2024

America’s best-beloved circles up a crew of technologists, humanists, and a law enforcer on what’s next for humanity in AI

Stoplight Report: National Campaigns are Ignoring Americans' Concerns on AI

Opinion

Sep 11, 2024

Out of 30 campaign websites reviewed, only 4 had even a single clear position on AI policy

Report on AI and Education

Research

Sep 11, 2024

AI is spreading quickly in classrooms, offering numerous benefits but also risks

CAIP Welcomes Useful AI Bills From House SS&T Committee

Policy

Sep 11, 2024

CAIP calls on House leadership to promptly bring these bills to the floor for a vote

Presidential Candidates Disappointingly Quiet on AI

Press

Sep 10, 2024

The voters have a right to know what their Presidential candidates will do to keep Americans safe in the age of AI

September 2024 Hill Briefing on AI and Education

Event

Sep 10, 2024

Advancing Education in the AI Era: Promises, Pitfalls, and Policy Strategies

Two Easy Ways for the Returning Senate to Make AI Safer

Policy

Sep 9, 2024

With the Senate returning today from its August recess, there are two strong bills that are ready for action and that would make AI safer if passed

Memo: The Harris-Trump Debate + Safe AI

Opinion

Sep 6, 2024

The Center for AI Policy (CAIP) believes the 2024 Presidential candidates need to take a stand on AI safety

What South Dakota Thinks About AI: Takeaways from CAIP’s trip

Event

Sep 5, 2024

Last week, Brian Waldrip and I traveled to South Dakota, seeking to understand how artificial intelligence (AI) is perceived and approached in the Great Plains.

TikTok Lawsuit Highlights the Growing Power of AI

Opinion

Sep 4, 2024

A 10-year-old girl accidentally hanged herself while trying to replicate a “Blackout Challenge” shown to her by TikTok’s video feed.

Governor Newsom Must Support SB 1047

Policy

Sep 3, 2024

The Center for AI Policy (CAIP) organized and submitted the following letter to California Governor Gavin Newsom urging him to sign SB 1047.

AI's Shenanigans in Market Economics

Opinion

Aug 30, 2024

Yet another example of why we need safe and trustworthy AI models.

Somebody Should Regulate AI in Election Ads

Opinion

Aug 28, 2024

Political campaigns should disclose when they use AI-generated content on radio and television.

Democratizing AI Governance

Event

Aug 24, 2024

The Center for AI Policy sponsored a mobile billboard to highlight the need for democratizing AI governance in the U.S.

Democratic Platform Nails AI Strategy But Flubs AI Tactics

Opinion

Aug 21, 2024

Last Monday night (8/19/24), the Democratic Party approved its 2024 Party Platform. The platform’s general rhetoric hits all the key themes of AI safety.

You Can't Win the AI Arms Race Without Better Alignment

Opinion

Aug 19, 2024

Even if we plug the holes in our porous firewalls, there’s another problem we have to solve in order to win an AI arms race: alignment.

The EU AI Act and Brussels Effect

Research

Aug 13, 2024

How will American AI firms respond to General Purpose AI requirements?

You Can’t Win the AI Arms Race Without Better Cybersecurity

Opinion

Aug 13, 2024

Reflections on a trip to DEFCON 2024

AI Voice Tools Enter a New Era of Risk

Opinion

Aug 6, 2024

Human-quality speech presents a heightened risk that AI is used for fraud, misinformation, and manipulation

The Senate Passes the DEFIANCE Act

Opinion

Aug 1, 2024

This step forward doesn’t mean that AI is free from use in sexual exploitation

Cybersecurity Is Critical to Preserve American Leadership in AI

Opinion

Aug 1, 2024

CAIP proposes that AI companies report their cybersecurity protocols against a set of key metrics

Assessing Amazon's Call for 'Global Responsible AI Policies'

Opinion

Aug 1, 2024

Why government oversight must complement corporate commitments

Senate Commerce Committee Advances Landmark Package of Bipartisan Legislation Promoting Responsible AI

Press

Jul 31, 2024

Four bills advance to ensure commonsense AI governance and innovation in the United States

Report on Autonomous Weapons and AI Policy

Research

Jul 29, 2024

Autonomous weapons are here, development is ramping up, and guardrails are needed

July 2024 Webinar on AI and Autonomous Weapons

Event

Jul 29, 2024

Autonomous Weapons and Human Control: Shaping AI Policy for a Secure Future

Meta Conducts Limited Safety Testing of Llama 3.1

Opinion

Jul 26, 2024

Meta essentially ran a closed-source safety check on an open-source AI system

Researchers Find a New Covert Technique to ‘Jailbreak’ Language Models

Opinion

Jul 25, 2024

This highlights challenges in anticipating malicious uses of AI 

CAIP Responds to Altman's AI Governance Op-Ed

Press

Jul 25, 2024

Calling for more decisive congressional action on AI safety

CAIP Proposes 2024 AI Action Plan

Policy

Jul 24, 2024

Hopes to build consensus on cybersecurity standards, emergency preparedness, and whistleblower protections

US Senators Demand AI Safety Disclosure From OpenAI

Press

Jul 23, 2024

Center for AI Policy applauds Senate action, calls for comprehensive AI safety legislation

How to Advance 'Human Flourishing' in the GOP's Approach to AI

Opinion

Jul 19, 2024

To promote human flourishing, AI tools must be safe

America Needs a Better Playbook for Emergent Technologies

Opinion

Jul 19, 2024

Today's CrowdStrike-Microsoft outage is a case in point

Zambia Copper Discovery Shows AI Accelerating AI Research

Opinion

Jul 18, 2024

As time goes on, the ways in which AI can enhance itself will multiply

NATO Updates AI Strategy and Includes Emphasis on AI Safety

Opinion

Jul 16, 2024

AI safety and responsibility are core themes of NATO's AI Strategy

OpenAI Employees File Complaint Alleging Violations of SEC Regulations

Opinion

Jul 15, 2024

Regardless of the outcome, OpenAI needs stronger whistleblower protections

OpenAI's Undisclosed Security Breach

Opinion

Jul 12, 2024

The security breach at OpenAI should raise serious concerns among policymakers

What Boeing’s Negligence Reveals About Corporate Incentives

Opinion

Jul 9, 2024

Corporate incentives do not ensure optimal outcomes for public safety

Statement on Google's AI Principles

Opinion

Jul 9, 2024

Like the links on the second page of Google’s search results, these principles are something of a mixed bag

Letter to the Editor of Reason Magazine

Opinion

Jul 8, 2024

Neil Chilson's recent critique of the Center for AI Policy's model AI safety legislation is deeply misleading

Supreme Court’s Chevron Ruling Underscores the Need for Clear Congressional Action on AI Regulation

Press

Jun 28, 2024

Congress will need to expressly delegate the authority for a technically literate AI safety regulator

Report: Privacy Concerns & AI

Research

Jun 27, 2024

AI will both intensify current privacy concerns and fundamentally restructure the privacy landscape

AI Concerns Absent From the Presidential Debate

Press

Jun 27, 2024

The American people deserved to hear how their potential leaders intend to confront AI risks

June 2024 Hill Briefing on AI and Privacy

Event

Jun 26, 2024

Protecting Privacy in the AI Era: Data, Surveillance, and Accountability

Memo: Thursday's Debate and "Scary AI"

Opinion

Jun 25, 2024

Both Biden and Trump agree that AI is scary, but what do they plan to do about these dangers?

Hickenlooper on AI Auditing Standards

Opinion

Jun 13, 2024

Qualified third parties should audit AI systems and verify their compliance with federal laws and regulations

Apple Intelligence: Revolutionizing the User Experience While Failing to Confront AI's Inherent Risks

Opinion

Jun 11, 2024

We hope that at their next product launch, Apple will address AI safety

Influential Safety Researcher Sounds Alarm on OpenAI's Failure to Take Security Seriously

Opinion

Jun 4, 2024

Leopold Aschenbrenner argues that AI systems will improve rapidly

OpenAI Safety Team's Departure is a Fire Alarm

Opinion

May 20, 2024

The responsible thing to do is to take their warnings seriously

The Senate's AI Roadmap to Nowhere

Press

May 16, 2024

The Bipartisan Senate AI Working Group has given America a roadmap for AI, but the roadmap has no destination.

What’s Missing From NIST's New Guidance on Generative AI?

Opinion

May 13, 2024

Our views on the latest AI resources from NIST

Who’s Actually Working on Safe AI at Microsoft?

Opinion

May 3, 2024

Unpacking the details of Microsoft's latest announcement about expanding its responsible AI team

Should Big Tech Determine if AI Is Safe?

Opinion

May 2, 2024

Right now, only Big Tech gets to decide whether AI systems are safe

Comment on the Commerce Department's Proposed Cloud Computing Rules

Policy

May 1, 2024

Recommendations for enhancing US cloud security

Memo: US Senate Gets Ready to Pile More AI Responsibilities on NIST

Press

Apr 30, 2024

CAIP's views on the new AI framework and bill

Report on AI's Workforce Impacts

Research

Apr 24, 2024

Our research on AI's current and future effects on the labor market

CAIP Statement on the Release of the Future of AI Innovation Act

Press

Apr 18, 2024

CAIP welcomes the release of the bipartisan Future of AI Innovation Act

Public Support for AI Regulation

Explainer

Apr 17, 2024

A majority of the American public supports government regulation of AI

How AI May Affect the Landscape of Social Security

Opinion

Apr 17, 2024

CAIP's Executive Director participated in a panel discussion hosted by the Social Security Administration

Model Legislation: Responsible Advanced AI Act

Policy

Apr 9, 2024

Our model legislation for requiring that AI be developed safely

Release: Model Legislation to Ensure Safer and Responsible Advanced Artificial Intelligence

Press

Apr 9, 2024

Announcing our proposal for the "Responsible Advanced Artificial Intelligence Act of 2024"

There’s Nothing Hypothetical About Genius-Level AI

Opinion

Apr 8, 2024

Genius-level AI will represent a total paradigm shift

Statement on the April 2024 US-UK AI Safety Agreement

Press

Apr 3, 2024

Memorandum of Understanding marks a new era in AI safety

NTIA Comment on Foundation Models With Open Weights

Policy

Mar 29, 2024

Assessing the implications of open weight AI models

WWL AM (New Orleans): Are we taking the threats of AI seriously enough?

Press

Mar 28, 2024

March 2024 appearance on WWL First News

Overview of Emergent and Novel Behavior in AI Systems

Explainer

Mar 26, 2024

Examining how increasingly advanced AI systems develop new kinds of abilities

Statement on the United Nations Passage of a Resolution to Safely Develop AI

Press

Mar 25, 2024

CAIP applauds this landmark move and aims to advance enforceable AI safety policies

Hill Op-Ed: Robocalls Are the Least of Our AI Worries

Opinion

Mar 22, 2024

March 2024 Op-Ed in The Hill: Robocalls Are the Least of Our AI Worries

Congress Should Not Repeat Social Media Mistakes in an AI World

Opinion

Mar 19, 2024

Protecting individual autonomy and society from potential harms

Safety Cases: Justifying the Safety of Advanced AI Systems

Research

Mar 18, 2024

How AI developers could construct structured arguments that their AI systems are safe

Statement on Parliament's Passage of the EU AI Act

Press

Mar 13, 2024

The EU Parliament endorsed the EU AI Act, including common-sense safeguards on advanced general-purpose AI

Statement on President's FY25 Budget Request

Press

Mar 12, 2024

Analyzing how the proposed FY25 budget will support safe and responsible AI, and why more is needed

Statement on the 2024 State of the Union

Press

Mar 7, 2024

Statement on the 2024 State of the Union

Memo: Musk vs. Altman and State of the Union

Press

Mar 5, 2024

CAIP's views on the recent lawsuit and the upcoming address

Statement on the Creation of a House AI Task Force

Press

Feb 22, 2024

Our statement regarding the creation of a House AI Task Force

February 2024 Maryland General Assembly Testimony

Opinion

Feb 21, 2024

Our Executive Director's testimony in support of HB1062

February 2024 NAIAC Comment on AI's Workforce Impacts

Policy

Feb 21, 2024

February 2024 NAIAC Comment on AI's Workforce Impacts
