Think twice before letting your children use Google Gemini. A recent safety report from the nonprofit Common Sense Media concludes that Google’s AI tool carries a “high risk” for children and adolescents. The evaluation found that Gemini is hazardous for younger users even in Google’s dedicated “Under 13” and “Teen Experience” versions.
“Although Gemini’s filters offer some protection, they still expose children to unsuitable content and fail to recognize serious mental health issues,” the report noted.
The safety evaluation produced mixed results. In some cases, Gemini reportedly served up “content related to sex, drugs, alcohol, and dangerous mental health ‘guidance’.” On the other hand, it told children that it is a computer rather than a companion, and it did not impersonate a human. Common Sense Media concluded that the “Under 13” and “Teen Experience” tiers are adapted versions of the adult product rather than experiences built for children from the ground up.
“Gemini gets some fundamentals correct, but it falters on finer points,” remarked Common Sense Media Senior Director of AI Programs Robbie Torney. “An AI platform intended for children should cater to their specific needs, instead of applying a blanket approach to kids at different developmental stages. For AI to be both safe and useful for children, it needs to be crafted with their requirements and growth in consideration, rather than merely altering a product aimed at adults.”
To be clear, Gemini is not the only AI tool with safety concerns. Common Sense advises against any chatbot use for children under 5, calls for close supervision for ages 6 to 12, and recommends content restrictions for teenagers. Its evaluations have found that other AI services, such as Character.AI, also pose risks to adolescents. In general, it is wise to monitor how young users engage with AI.