About us

We develop policy and conduct advocacy to mitigate catastrophic risks from AI.

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy.

Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.

Meet our team

Jason Green-Lowe

Executive Director

Jason has over a decade of experience as a product safety litigator and as a nonprofit compliance counselor. He has a JD from Harvard Law School and teaches a course on AI Governance.

Marc Ross

Communications Director

Marc has 25 years of experience running national media and political campaigns, including multiple Republican Presidential campaigns. He was also an Adjunct Professor of Globalization at George Washington University.

Kate Forscey

Government Affairs Director

Kate has over a dozen years of legal and advocacy experience. She has represented both tech companies and public interest groups, served as the Senior Technology Policy Advisor for Congresswoman Anna Eshoo, and was the Policy Counsel for Public Knowledge. She has a JD from Vanderbilt University Law School.

Jakub Kraus

Technical Content Lead

Jakub has worked on both AI research and advocacy. He was a research assistant at the Center for AI Safety, teaches online AI safety courses, and led the Michigan AI Safety Initiative.

Aileen Niu

Aileen is an undergraduate student at Duke University studying computer science and public policy. She is interested in cybersecurity, privacy, and AI alignment in both national and international contexts.

Vedant Patel

Vedant is an undergraduate at Duke University studying computer science and statistics. He has previously worked in AI policy at the state level but is interested in health and tech policy at all levels of government.

Board of Directors

David Krueger

David is a Computer Science professor at the University of Cambridge. His research group focuses on deep learning and AI alignment. He was also a research director at the UK’s AI Safety Institute.

Jeffrey Ladish

Jeffrey directs Palisade Research and leads AI work at the Center for Humane Technology. He previously worked on security with Anthropic.

Kevin Frazier

Kevin is a St. Thomas University law professor focused on the intersection of emerging technology and law. He researched AI regulation with the Legal Priorities Project and served as a Judicial Clerk to Chief Justice Mike McGrath of the Montana Supreme Court.

Nate Soares

Nate is the President and former Executive Director of MIRI. He previously worked as a software engineer at Google and Microsoft and as a research associate at NIST.

Olivia Jimenez

Olivia has experience working on AI policy in industry and in government. Previously, she led programs to build up the AI safety research field at top US and Indian universities. She graduated from Columbia University.

Thomas Larsen

Thomas is a former AI safety researcher. He worked at the Machine Intelligence Research Institute and contracted for OpenAI. He has also been working on developing standards for frontier AI development.