AI, Liability, & Copyright

August 8, 2023

Key Takeaways:

  • Section 230 shields companies from liability for hosting third-party content, not for generating content. If a generative AI system like ChatGPT produces libel, then its developer, OpenAI, is liable under existing law.
  • It’s not clear whether courts will treat “training an AI” on a book or a movie as “fair use” under copyright law. In the most similar prior case, involving Google Books, the court appeared to base its ruling on the value Google Books would add to society.
  • Strictly enforcing copyright can help some artists get compensated, but it doesn’t offer a long-term solution for unemployment caused by ongoing automation of the workforce.
  • Generative AI hallucinates and discriminates roughly as often as humans do. It has these problems because humans have them, and it is trained on human behavior. NIST’s AI Risk Management Framework, if properly and vigorously applied, would go a long way toward constructively addressing these problems.
  • Existing law doesn’t provide a solution for the catastrophic risks posed by advanced AI, such as bioweapons and automated hacking. CAIP’s bill would help address these risks with a strict liability regime and mandatory safety measures.

Read our full notes here.