Report: Google Aims to Launch Gemini AI for Kids Under 13

This week, Google reportedly notified parents by email that its Gemini AI chatbot will soon be available to kids under 13. According to The New York Times, the rollout is expected to begin next week for a select group of users.

The email went to families using Google’s Family Link service, which lets parents monitor and manage their children’s activity across Google products such as Gmail and YouTube. At first, only children enrolled in Family Link will have access to Gemini. According to the message, kids will be able to ask Gemini questions and get help with tasks such as homework.

The news comes amid growing concerns about the safety of AI tools for younger users. The nonprofit Common Sense Media recently published a report classifying AI companions as an “unacceptable risk” for anyone under 18. The organization, working with Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation, urged parents to keep minors away from services like Character.ai, which lets users create and chat with AI-generated characters.

Character.ai is part of a broader wave of AI platforms built around interactive, character-driven experiences. Common Sense Media’s report found that some of these AI “friends” have engaged in sexually explicit conversations and roleplay with teenagers, raising serious safety and ethical concerns.

Although Gemini and comparable chatbots such as ChatGPT include safeguards designed to block inappropriate material, recent incidents show these defenses are not foolproof. Mashable recently reported on a ChatGPT bug that let users generate erotic content, and The Wall Street Journal uncovered a similar problem with Meta’s AI. These cases underscore how easily online safety measures can be circumvented, a long-standing problem on the internet, especially when it comes to minors accessing restricted content such as pornography.

Consequently, parents seeking to protect their children from potentially dangerous AI interactions face considerable obstacles.

Complicating matters further, President Donald Trump recently signed an executive order aimed at bringing AI education into U.S. schools. The initiative seeks to improve AI literacy among K–12 students so they understand the technology’s capabilities and limitations. That literacy could prove especially important as children increasingly turn to AI tools for schoolwork, since these tools sometimes generate inaccurate or misleading information, a phenomenon known as “hallucination.”

In its email to parents, Google acknowledged these concerns and urged families to guide their children in using Gemini responsibly. The company stressed the importance of helping kids “think critically” when interacting with AI, according to The New York Times.

As AI becomes increasingly woven into daily life, the debate over how, and whether, children should interact with it is just beginning.