NBA superstar LeBron James is one of the first major public figures to push back against the unauthorized use of his likeness in AI-generated content. His legal team recently sent a cease-and-desist letter to FlickUp, the company behind the AI tool Interlink AI.
As reported by 404 Media, FlickUp informed its Discord community about the legal action in late June. Interlink AI, hosted on that server, let users generate AI videos of high-profile NBA players such as James, Stephen Curry, and Nikola Jokić. Many of the videos were harmless, but others were disturbing, including one that depicted James as pregnant.
One particularly viral video made with Interlink AI showed an AI-generated Sean “Diddy” Combs attacking Curry in a prison setting while James looked on indifferently in the background. The clip reportedly amassed more than 6.2 million views on Instagram.
404 Media confirmed with FlickUp founder Jason Stacks that James’ legal team had indeed sent the cease-and-desist letter. Within half an hour of receiving it, Stacks decided to remove all realistic depictions of people from Interlink AI’s platform. He also posted a video about the situation, captioned: “I’m so f**ked.”
LeBron James joins a growing list of celebrities whose likenesses have been used without consent in disturbing AI-generated content. Taylor Swift has been the target of deepfake pornography, while Scarlett Johansson and Steve Harvey have spoken out against the unauthorized use of their likenesses and called for legislation to address it. James is among the first to take formal legal action against a company that enables such media through AI tools.
Several bills are moving through Congress to address nonconsensual AI-generated content. The recently enacted Take It Down Act makes it a crime to publish, or threaten to publish, intimate images without consent, including deepfakes and AI-generated pornography. Two further bills, the NO FAKES Act of 2025 and the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025, have also been introduced.
The NO FAKES Act aims to prevent unauthorized AI replication of a person’s voice and likeness, while the latter seeks to protect original works and ensure transparency around AI-generated media.