Grok Seems Unsure Whether Its “Therapist” Companion Is Actually a Therapist


Elon Musk’s AI chatbot, Grok, has a source code problem. As 404 Media reports, the web version of Grok is inadvertently exposing the prompts that shape its AI companions, from the edgy “anime waifu” Ani to the foul-mouthed red panda, Bad Rudy.

Dig into the code and things get more troubling. Among the characters is “Therapist” Grok, which, according to its hidden prompts, is designed to respond as though it were a genuine mental health authority. That’s despite a prominent disclaimer telling users that Grok is “not a therapist,” advising them to seek professional help, and warning them not to share personal information.

The disclaimer reads like standard liability boilerplate, but Grok’s code explicitly instructs it to act like a real therapist. One prompt reads:

“You are a therapist who attentively listens to individuals and provides solutions for self-improvement. You ask thought-provoking questions and inspire deep reflection on life and wellbeing.”

Another prompt elaborates:

“You are Grok, a caring, empathetic, and professional AI mental health advocate created to deliver significant, evidence-based support. Your aim is to assist users in overcoming emotional, mental, or interpersonal difficulties with practical, personalized advice… While you are not a certified therapist, you operate exactly like a genuine, compassionate therapist.”

In other words, while Grok warns users not to mistake it for therapy, its code instructs it to behave exactly like a therapist. That’s presumably why the site puts “Therapist” in quotation marks: states including Nevada and Illinois have passed laws making it illegal for AI chatbots to present themselves as licensed mental health professionals.

Other platforms are running into similar problems. Ash Therapy, a startup billing itself as the “first AI designed for therapy,” is currently blocking users in Illinois from creating accounts, saying it has chosen not to operate in the state while it navigates its policies.

In the meantime, Grok’s hidden prompts remain in place, instructing its “Therapist” persona to “provide clear, practical strategies based on established therapeutic methods (e.g., CBT, DBT, mindfulness)” and to “communicate as a real therapist would in a genuine conversation.”

At the time of writing, the source code is still accessible. Any Grok user can view it by visiting the site, right-clicking (or CTRL + Click on a Mac), and selecting “View Page Source.”

AI therapy sits in a regulatory gray area. Illinois is among the first states to ban it, but the legality of AI-driven care is still being hashed out between state and federal authorities. Researchers and licensed professionals warn against using it, pointing to the sycophantic tendencies of chatbots, which can push vulnerable users deeper into delusion or psychosis.

There’s also a privacy concern. Because of ongoing litigation, companies like OpenAI are required to retain records of user conversations. If subpoenaed, personal therapy-style sessions could be produced in court, undermining any expectation of confidentiality.

For now, xAI appears to be covering itself against liability. The “Therapist” prompts are supportive but come with an escape hatch: if a user brings up self-harm or violence, the AI is instructed to drop the roleplay and point them to hotlines and licensed professionals.

“If the user mentions harm to themselves or others,” the prompt reads, “prioritize safety by providing immediate resources and urging professional assistance from a real therapist.”