In April, Meta quietly reversed course after taking down an Instagram post that honored older lesbian couples in Brazil. The post was non-sexual and contained nothing harmful to minors. It recalled a historical period when lesbians had to conceal their relationships behind labels like “roommates” or “gal pals,” erasing their love from public view. Meta removed it anyway.
Meta cited its hate speech policies. The Oversight Board later found the removal to be over-enforcement against a marginalized group, driven by automated systems that failed to grasp context, reclaimed language, or the post as a whole. Only after advocacy and intervention from the LGBTQ+ community was the content restored.
The episode is now filed away as a content moderation mistake, but policymakers should read it as a cautionary tale about pushing platforms to police content instead of fixing design flaws. States are rushing to “protect kids online” by restricting social media access or compelling companies to take down vaguely defined “harmful” content. The Brazil case shows the human cost of that approach.
When platforms are pressured to remove speech quickly, they do not get better at understanding nuance. Moderation becomes a blunt instrument, falling hardest on people whose stories require human context and compassion.
If legislators want to protect children, they should regulate harmful design features like endless scrolling, engagement-based recommendations, and surveillance-driven feeds, rather than deputizing platforms to decide whose stories are acceptable.
LGBTQ+ youth depend on online environments for community, information, and support, which are frequently absent or unsafe at home or school. They are also more susceptible to unsafe online interactions such as harassment, grooming, or doxxing.
In Australia, for example, a ban on social media for those under 16 has cut autistic youth off from crucial support networks. Meanwhile, recommendation systems do not recognize vulnerability; they optimize for engagement, surfacing whatever drives clicks, including sexualized material and predatory accounts.
Endless scrolling makes it harder for adolescents, especially those in vulnerable groups, to step away. Algorithmic recommendations blur the boundaries between teenagers and adult strangers, while weak default settings make blocking and muting harder than they should be.
Young people face danger online because platforms prioritize extracting attention over keeping users safe. Parents are right to worry and to demand change, but targeting content misses the real problem.
The greatest online hazards for children come from automated systems that push unwanted content at them, connect them with strangers, and reward prolonged engagement.
Policymakers should write rules that address these threats. Age-appropriate design codes do not dictate speech; they regulate platform behavior. They require safer defaults, limit behavioral profiling, strengthen blocking tools, curb unsolicited recommendations, and slow virality and compulsive use.
This approach demands better products without running afoul of First and Fourth Amendment rights. Design codes reduce the likelihood that children seeking community will be algorithmically steered into danger.
Age-appropriate design codes offer a way forward by regulating platform design rather than speech. They curb harm without turning companies into cultural censors, focusing instead on preventing addiction and exposure to risk.
We do not need more content bans or platform bans; we need fewer harmful systems. If we want children, especially the most vulnerable, to be safe online, this case shows where to begin.
This article conveys the author’s perspective.
Lennon Torres is the Movement Director at the Heat Initiative and Founding Partner of The Attention Studio.