On May 19, President Donald Trump and First Lady Melania Trump signed the administration's first major piece of technology legislation: the bipartisan Take It Down Act. The law is a milestone for advocates who have long pushed for federal action against the nonconsensual distribution of intimate images (NDII), and it gives victims a formal legal avenue for seeking justice.
Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, praised the law as a needed jolt to a sluggish legislative process. "It's a positive first step," he stated. "It compels social media platforms to create a system for content removal upon request. It's a minor part of a much larger issue, particularly regarding AI-related concerns."
Nonetheless, not everyone believes the law will deliver on its goals. Digital rights organizations caution that it may create false expectations for victims, pointing to vague enforcement mechanisms and an overly broad scope that could invite confusion and misuse.
Concerns Regarding the Takedown Provision
One of the law’s most debated aspects is its requirement for a 48-hour takedown of nonconsensual intimate images. The Cyber Civil Rights Initiative (CCRI), which contributed to drafting the original bill framework, expressed doubt about its real-world effectiveness.
"The law promises swift removal of harmful content," CCRI stated, "but the lack of protections against false allegations, the narrow definition of covered platforms, and the broad discretion given to the Federal Trade Commission (FTC) make that an impractical assurance."
Threats to Free Speech and Content Moderation
Critics are also concerned that the law may inadvertently jeopardize free expression. Digital rights advocates worry that platforms could over-censor to evade legal repercussions, potentially eliminating lawful content — including consensual adult material, such as LGBTQ pornography — in the process.
The law's takedown mechanism is modeled on the Digital Millennium Copyright Act (DMCA), yet some contend it hands the FTC too much authority. "Now that the Take It Down Act is enacted, the FTC and platforms must respect its purpose while safeguarding users' rights," stated Becca Branum, deputy director of the Free Expression Project at the Center for Democracy & Technology (CDT). "The First Amendment remains in effect."
Challenges in Enforcement and Infrastructure
Despite its aims, the law's implementation may be hampered by inadequate infrastructure. The CCRI has also warned that exceptions permitting individuals to post images of themselves could open the door to abuse through false takedown requests.
Meanwhile, the CDT argues that the law's treatment of AI-generated images is too narrow. "It only covers synthetic images that a 'reasonable person' would believe depict the victim," the organization noted. "That leaves harmful content uncovered, such as deepfakes set in unrealistic scenarios that still inflict real-world damage."
Another issue is the uneven reach of FTC oversight. While the agency holds broad authority over platforms that host user-generated content, it has no jurisdiction over sites that publish only curated or original material. In those cases, enforcement would require criminal prosecution, a route that has historically failed victims, particularly women.
Steinhauer emphasized that verifying individuals’ identities in NDII cases within the 48-hour timeframe could pose challenges for platforms, especially as many have reduced moderation resources. Although automated moderation tools may assist, they come with their own limitations and biases.
AI Regulation Remains Fragmented
A significant barrier to enforcing the law lies in the difficulty of detecting synthetic media. Manny Ahmed, CEO of content provenance company OpenOrigins, noted that current detection technologies are unreliable. "Deepfake detectors can be deceived, and most media platforms lack audit trails," he explained. "The onus is now on publishers to demonstrate that the content isn't fake, which is a nearly impossible task."
This raises concerns that the law could be misused for censorship or surveillance, particularly under an administration that critics say has already eroded public trust and targeted ideological opponents.
Still, Steinhauer maintains a cautious optimism. “This paves the way for broader discussions about balanced regulation,” he remarked. “We cannot exist in a society where someone can fabricate a sexual video of another individual without facing repercussions. However, we must also safeguard civil liberties.”
Uncertainties Surround the Future of AI Regulation
While the Take It Down Act marks progress in addressing digital harms, the broader landscape of AI regulation remains unsettled. Even as they backed the new law, Trump and congressional Republicans have pushed for a 10-year moratorium on state and local AI regulation as part of their proposed One Big Beautiful Bill.
Even with the president’s endorsement, the law’s future is precarious. Legal challenges are anticipated, especially on First Amendment grounds. “There is a significant amount of non-sexual content that could be generated using someone’s likeness, and there is currently no law addressing it,” Steinhauer observed.
Whether the Take It Down Act is upheld or struck down, one thing is clear: the debate over how to regulate AI-driven harms is only beginning.