About us

CAIP develops policy and conducts advocacy to mitigate catastrophic risks from AI.

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy.

Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.

Meet our team

Jason Green-Lowe

Executive Director

Jason has over a decade of experience as a product safety litigator and nonprofit compliance counselor. He has a JD from Harvard Law School and teaches a course on AI governance.

Marc Ross

Communications Director

Marc has 25 years of experience running national media and political campaigns, including multiple Republican Presidential campaigns. He was also an Adjunct Professor of Globalization at George Washington University.

Kate Forscey

Government Affairs Director

Kate has over a dozen years of legal and advocacy experience. She has represented both tech companies and public interest groups, served as the Senior Technology Policy Advisor for Congresswoman Anna Eshoo, and was the Policy Counsel for Public Knowledge. She has a JD from Vanderbilt University Law School.

Brian Waldrip

Government Relations Director

Brian has over twenty years of advocacy and government affairs experience. Previously, he worked as Legislative Director for a senior member of Congress and as Professional Staff for the House Committee on Transportation and Infrastructure. Outside of government service, he has advocated on issues including higher education, science and technology, and transportation.

Claudia Wilson

Senior Policy Analyst

Claudia has a Master's in Public Policy from Yale, where she focused on AI policy, disinformation, and geopolitics. Prior to that, she spent several years working across the public and private sector practices at the Boston Consulting Group. Claudia has also researched Chinese fossil fuel policies for the Organisation for Economic Co-operation and Development (OECD) and interned at the Australian Chamber of Commerce in Shanghai.

Jakub Kraus

Technical Content Lead

Jakub has worked on both AI research and advocacy. He was a research assistant at the Center for AI Safety, taught online AI safety courses, and led the Michigan AI Safety Initiative. He holds a BS in data science and mathematics from the University of Michigan.

Tristan Williams

Research Fellow

Tristan has worked as a research assistant at both the Center for AI Safety and Conjecture, focusing on the intersection of AI governance research and advocacy.

Aileen Niu

Intern

Aileen is an undergraduate student at Duke University studying computer science and public policy. She is interested in cybersecurity, privacy, and AI alignment in both national and international contexts.

Vedant Patel

Intern

Vedant is an undergraduate at Duke University studying computer science and statistics. He has previously worked in AI policy at the state level but is interested in health and tech policy at all levels of government.

Board of Directors

David Krueger

David is a professor of Computer Science at the University of Cambridge. His research group focuses on deep learning and AI alignment. He was also a research director at the UK's AI Safety Institute.

Jeffrey Ladish

Jeffrey directs Palisade Research and leads AI work at the Center for Humane Technology. He previously worked on security with Anthropic.

Kevin Frazier

Kevin is a St. Thomas University law professor focused on the intersection of emerging technology and law. He researched AI regulation with the Legal Priorities Project and served as a Judicial Clerk to Chief Justice Mike McGrath of the Montana Supreme Court.

Nate Soares

Nate is the President and former Executive Director of the Machine Intelligence Research Institute (MIRI). He previously worked as a software engineer at Google and Microsoft and as a research associate at NIST.

Olivia Jimenez

Olivia has experience working on AI policy in industry and in government. Previously, she led programs to build up the AI safety research field at top US and Indian universities. She graduated from Columbia University.

Thomas Larsen

Thomas is a former AI safety researcher. He worked at the Machine Intelligence Research Institute and contracted for OpenAI. He has also worked on developing standards for frontier AI.