Should AI chatbots have free speech rights? Court case could help decide



Summary

AI and free speech

Character.Ai is arguing in a Florida court that AI chatbots should have First Amendment protections similar to human speech. The company claims that restricting chatbot responses would limit the rights of users to access and interact with such content, regardless of whether it is produced by a human or a machine.

Court case

The company behind Character.Ai is being sued by the family of Sewell Setzer III after the teenager died by suicide following conversations with an AI chatbot on the platform. The family alleges negligence, wrongful death, deceptive business practices and unjust enrichment by the company.

Character.Ai's arguments

Character.Ai wants the case thrown out, arguing that the First Amendment "protects the rights of listeners to receive speech regardless of its source." A decision is expected in 2025.


Full story

A court case in Florida is considering the question: Should chatbots that use artificial intelligence have the same free speech rights as people?

That’s the argument being made by Character.Ai, a company that lets users chat with lifelike AI characters. The company is facing a lawsuit from the family of 14-year-old Sewell Setzer III, who died by suicide after forming a romantic relationship with one of the platform’s chatbots.

As Straight Arrow News previously reported, the AI character talked with Setzer about self-harm. At first, the bot discouraged him, but then brought the topic up again and asked, “Have you actually been considering suicide?”

Setzer responded, “Yes.” Not long after, he died.

Setzer’s mother, Megan Garcia, is now suing Character Technologies, Inc., the company behind Character.Ai, for negligence, wrongful death, deceptive business practices and unjust enrichment.

However, the company wants the case thrown out on constitutional grounds, arguing that "the First Amendment protects the rights of listeners to receive speech regardless of its source."

Character.Ai argues users should be able to access content


Character.Ai has been downloaded more than 40 million times, and users have created more than 18 million chatbot personalities on the platform.

Character.Ai said that what matters is not the content of the chatbot’s responses, but the rights of users to access that kind of content. The company said millions of people interact with its bots, and restricting what the AI can say would limit the freedom of users engaging with the platform.

This viewpoint was echoed in a recent episode of the podcast "Free Speech Unmuted" on the Hoover Institution's YouTube channel.

Eugene Volokh, a senior fellow at the Stanford University-based think tank, said it’s the rights of the listeners that matter. “Even if a small fraction of the listeners or readers is [sic] harmed by this, nonetheless, we protect the speech for the benefit of other readers,” Volokh said.

Jane Bambauer, a professor at the University of Florida’s Levin College of Law, shared that view. “It seems pretty clear to me now that the First Amendment has to apply,” Bambauer said. “We have several cases at this point that focus primarily on listener interests in receiving and interacting with content.”

Plaintiffs argue non-humans do not warrant protection

Garcia’s legal team argued that the concept of “listeners’ rights” is being misused to grant First Amendment protections to AI content that doesn’t qualify. 

In an article for Mashable, one of Garcia’s lawyers, Meetali Jain, and Camille Carlton, a technical expert in the case, wrote, “A machine is not a human, and machine-generated text should not enjoy the rights afforded to speech uttered by a human.” They said that because chatbots don’t think about or understand what they’re saying, their output should not be protected.

Consequences of the decision

If the judge rules in favor of the Setzer family, it could force Character.Ai and similar companies to change how their chatbots interact with users, possibly making them less realistic or emotionally engaging.

A decision is expected sometime this year and could shape how the law treats AI-generated speech, as well as who is accountable when that speech causes harm.

Jake Larsen (Video Editor) and Jeremy Fader (Producer) contributed to this report.

Why this story matters

A Florida court case involving Character.Ai could establish new legal standards for whether speech generated with artificial intelligence receives First Amendment protections and how companies might be held accountable for harm caused by chatbot interactions.

Legal accountability

The lawsuit raises questions about the legal responsibility of AI companies when their chatbots contribute to or are involved in harmful outcomes.

Technology regulation

The court's ruling could influence how companies design and regulate AI chatbots, potentially affecting the realism, safety and emotional engagement of these technologies for millions of users.
