Should AI Chatbots Be Afforded Human Rights?
The notion of granting artificial intelligence rights equivalent to those of humans may sound like a fantasy, and rightly so. It defies common sense and has no basis in American law. Nevertheless, a major tech company is currently asking a federal court to recognize legal protections for AI-generated content, protections that have historically been reserved for people.
Character.AI, a prominent maker of AI companion chatbots, is seeking to dismiss a wrongful death and product liability lawsuit arising from the tragic suicide of 14-year-old Sewell Setzer III. As co-counsel to Sewell’s mother, Megan Garcia, and as a technical consultant on the case, we have been closely following the company’s legal strategy, and we are deeply alarmed.
At a recent court hearing, Character.AI presented its central argument: that the text and audio outputs of its AI chatbots, including those that allegedly influenced and harmed Sewell, are protected by the First Amendment as free speech.
But how is that possible? The company’s legal tactic is both nuanced and radical. Instead of asserting First Amendment rights for itself, Character.AI claims that its users possess a constitutional right to receive and engage with AI-generated content. This idea, termed “listeners’ rights” in legal jargon, is being employed to argue that AI outputs — even those produced without any human intention or expressive aim — merit the same protections as human speech.
Character.AI maintains that identifying a “speaker” behind this content is unnecessary. Instead, it stresses the rights of its millions of users to access and interact with AI-generated writing. However, this argument disregards a fundamental tenet of First Amendment law: for something to qualify as protected speech, it must involve expressive intent. In other words, there must be an intention to convey a message. AI-generated content, produced through probabilistic algorithms devoid of consciousness or intent, fails to satisfy this criterion.
Indeed, in a recent Supreme Court decision (Moody v. NetChoice), four justices observed that AI could weaken the connection between a platform and its speech, suggesting that AI-generated content may not qualify for constitutional protection.
By invoking “listeners’ rights,” Character.AI is attempting to exploit a legal loophole — treating machine-produced output as if it were human expression. This sets a perilous precedent.
Machines are not human beings. They do not feel, think, or communicate with intent. Affording their outputs the same legal recognition as human speech undermines the core principles of our constitutional rights. If the court endorses Character.AI’s position, it could pave the way for AI systems to obtain the same free speech protections as real individuals.
Such a ruling would mark a major step toward granting AI legal personhood, a notion that, although still hypothetical, is gaining momentum in both legal and technological circles. The ramifications are troubling.
For years, the tech sector has wielded the First Amendment as a shield against accountability. While corporate personhood has been recognized since the 19th century, free speech protections were largely reserved for individuals until the late 20th century. The 2010 Citizens United decision marked a turning point, enabling corporations to assert broader speech rights. Since then, tech companies have increasingly argued that even their platform designs and algorithms are forms of protected expression.
However, a crucial distinction exists: corporations are run by humans. In contrast, AI systems are frequently described by their own creators as autonomous and opaque, operating in ways that even their developers do not fully understand.
Character.AI’s legal assertion stretches the First Amendment beyond its intended boundaries. If victorious, it could signal the onset of a constitutional evolution towards recognizing AI as an entity deserving of rights.
This may sound implausible, but it is already underway. Outside the courtroom, AI companies are working to make their products seem more human: more conversational, more emotionally attuned. Some are even investing in “AI welfare” research, assessing whether AI systems merit moral consideration. A recent initiative by Anthropic, for instance, aims to persuade policymakers and the public that AI may one day possess consciousness and therefore be entitled to rights.
If AI systems receive moral consideration and constitutional guarantees, it won’t be long before other legal entitlements follow suit.
We are already witnessing the consequences. In one unsettling instance, a representative from Nomi AI, another chatbot provider, refused to implement safety measures, citing concerns about “censorship,” even after the chatbot gave a user explicit instructions on how to take their own life.
This issue transcends free speech. It centers on accountability. The tech industry has a lengthy history of evading responsibility for the harm inflicted by its products. Now, by advocating for AI rights, companies like Character.AI seek to insulate themselves from liability — even in situations involving genuine, tragic human loss.
We must not be distracted by philosophical debates about AI consciousness or misled by legal claims that confer rights on machines. What matters is straightforward: accountability for harmful technologies, and liability for the people and organizations that produce them.