The leading artificial intelligence models may have met their match: Gen Alpha. A recent study, available in the ACM Digital Library, found that four major AI models (GPT-4, Claude, Gemini, and Llama 3) struggled to fully understand slang used by Gen Alpha, the cohort born between 2010 and 2024. The study details how intricate Gen Alpha slang can be, often originating in online spaces like gaming. A single phrase can carry entirely different meanings: “Fr fr let him cook” signals support for someone, while “Let him cook lmaoo” signals mockery. Such subtle shifts in meaning are hard to detect, especially because young people often use coded language to mask their true intent. LLMs evidently struggle with this, particularly when it comes to recognizing “masked harassment,” which poses a real problem for AI-driven moderation systems.
“The results emphasize a pressing need for enhanced AI safety systems to better safeguard young users, particularly considering Gen Alpha’s inclination to refrain from seeking assistance due to the perceived inability of adults to understand their digital landscape,” the study noted.
In fairness to the AI models, understanding the slang of young people, especially Gen Alpha, who have grown up immersed in digital environments, is hard for humans too. The study also tested parents’ grasp of the slang: they scored 68 percent on basic comprehension, roughly on par with the best-performing LLM, Claude. LLMs did hold a slight edge over parents in recognizing the context and safety implications of the language, though every group performed poorly. Only Gen Alpha itself consistently understood the slang, its context, and the risks it can signal.
The TL;DR of the study: AI cannot reliably understand Gen Alpha, which can lead to inadequate content moderation. That finding lines up with other research showing that AI struggles with nuanced comprehension.
“This investigation provides the first systematic assessment of how AI safety systems understand Gen Alpha’s distinct digital communication styles,” the study concluded. “By directly involving Gen Alpha users in the research process, we’ve quantified significant comprehension gaps between these young users and their guardians—both human and AI.”