About The Center for AI Policy

CAIP develops policy and conducts advocacy to mitigate catastrophic risks from AI.

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy.

Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.

Center for AI Policy (CAIP) Team 2024

Meet our team

Jason Green-Lowe

Executive Director

Jason has over a decade of experience as a product safety litigator and as a nonprofit compliance counselor. He has a JD from Harvard Law and teaches a course on AI Governance.

Marc Ross

Communications Director

Marc has 25 years of experience running national media and political campaigns, including multiple Republican Presidential campaigns. He was also an Adjunct Professor of Globalization at George Washington University.

Kate Forscey

Government Affairs Director

Kate has over a dozen years of legal and advocacy experience. She has represented both tech companies and public interest groups, served as the Senior Technology Policy Advisor for Congresswoman Anna Eshoo, and was the Policy Counsel for Public Knowledge. She has a JD from Vanderbilt University Law School.

Brian Waldrip

Government Relations Director

Brian has over twenty years of advocacy and government affairs experience. Previously, he worked as Legislative Director for a senior member of Congress and as Professional Staff for the House Committee on Transportation and Infrastructure. Outside of government service, he has advocated on issues pertaining to higher education, science and technology, transportation, and other areas.

Claudia Wilson

Senior Policy Analyst

Claudia has a Master's in Public Policy from Yale, where she focused on AI policy, disinformation, and geopolitics. Prior to that, she spent several years working across public- and private-sector practices at the Boston Consulting Group. Claudia has also researched Chinese fossil fuel policies for the Organisation for Economic Co-operation and Development (OECD) and interned at the Australian Chamber of Commerce in Shanghai.

Mark Reddish

External Affairs Director

Mark is an attorney with more than a decade of experience in advocacy and policy development related to telecommunications and public safety.

Jakub Kraus

Technical Content Lead

Jakub has worked on both AI research and advocacy. He was a research assistant at the Center for AI Safety, taught online AI safety courses, and led the Michigan AI Safety Initiative. He holds a BS in data science and mathematics from the University of Michigan.

Tristan Williams

Research Fellow

Tristan has worked as a research assistant at both the Center for AI Safety and Conjecture, operating at the intersection of AI governance research and advocacy.

Makeda Heman-Ackah

Program Officer

Makeda has over a decade of experience in IT, including work as a software engineer on customer-facing websites supporting millions of users globally. As an IT instructor, she traveled to train developers in the Big Data technologies used for AI development. She holds degrees in Computer and Information Science and Political Science and has worked for U.S. Public Interest Group, a federation of advocacy groups.

Board of Directors

David Krueger

David is a professor of Computer Science at the University of Cambridge, where his research group focuses on Deep Learning and AI Alignment. He was also a research director at the UK's AI Safety Institute.

Jeffrey Ladish

Jeffrey directs Palisade Research and leads AI work at the Center for Humane Technology. He previously worked on security with Anthropic.

Nate Soares

Nate is the President and former Executive Director of MIRI. He previously worked as a software engineer at Google and Microsoft and as a research associate at NIST.

Olivia Jimenez

Olivia has experience working on AI policy in industry and in government. Previously, she led programs to build up the AI safety research field at top US and Indian universities. She graduated from Columbia University.

Thomas Larsen

Thomas is a former AI safety researcher. He worked at the Machine Intelligence Research Institute and contracted for OpenAI. He has also been working on developing standards for frontier AI development.