Two new documents from the US Senate aim to steer the future of AI safety: the "Future of AI Innovation Act" and a framework introduced by Senators Romney, Reed, Moran, and King.
The documents suggest implementing safeguards and oversight mechanisms for high-risk AI systems to prevent their exploitation by foreign adversaries and bad actors.
The documents call for testing and evaluating potential AI risks, including threats to critical infrastructure, energy security, and weapons development.
While the documents are a step forward for AI safety, they do not fully satisfy the public's demand for effective regulation, as they do not create a dedicated AI regulator.
Instead of a dedicated regulator, the documents propose expanding the responsibilities of the National Institute of Standards and Technology (NIST), a counterproductive choice: NIST is committed to voluntary standards and has shown no interest in a regulatory role.
The full memo can be accessed here.
Jason Green-Lowe joined the Morning Rush TV show to discuss AI policy
Voters have a right to know what the presidential candidates will do to keep Americans safe in the age of AI
Four bills advance to ensure commonsense AI governance and innovation in the United States