Trump Criminalizes Revenge Porn, Deepfakes In US
United States President Donald Trump on Monday signed the Take It Down Act, landmark legislation that makes it a federal crime to publish so-called “revenge porn,” including images created using artificial intelligence (AI).
The bipartisan legislation, passed with overwhelming support in Congress, targets the growing threat of explicit deepfakes and other intimate imagery shared without consent. The new law criminalizes the intentional distribution of such content and mandates its swift removal from digital platforms.
“With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will,” President Trump said during a signing ceremony held in the Rose Garden of the White House. “And today we’re making it totally illegal.”
Under the law, individuals found guilty of distributing non-consensual intimate images face up to three years in prison. Additionally, websites and online platforms that fail to remove reported content within 48 hours of notification may face civil penalties.
First Lady Melania Trump, who publicly endorsed the bill in March, made a rare appearance at the signing event. “This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,” she said, calling it a “national victory that will help parents and families protect children from online exploitation.”
The Take It Down Act responds to a surge in deepfake pornography, in which AI tools generate realistic images and videos depicting individuals, often women, in sexual contexts without their consent. While some states, including California and Florida, have enacted their own laws, advocates argued that federal legislation was urgently needed to match the scale of the problem.
Experts say the online proliferation of AI-generated pornographic content has outpaced global regulatory efforts, with reports of school-based scandals and celebrity deepfake attacks making headlines. Victims, often women and teenagers, are left vulnerable to blackmail, bullying, and long-term psychological harm.
“This is a significant step in addressing the exploitation of AI-generated deepfakes and non-consensual imagery,” said Renee Cummings, an AI ethicist and criminologist at the University of Virginia. “Its effectiveness will depend on swift and sure enforcement, severe punishment for perpetrators, and real-time adaptability to emerging digital threats.”
However, not everyone is applauding the bill. The Electronic Frontier Foundation (EFF), a nonprofit focused on digital rights, expressed concern that the law could be misused. In a statement, EFF warned the legislation gives “the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don’t like.”
Despite the criticism, for many families, the law offers a long-awaited tool for justice. Dorota Mani, the mother of a teenage victim of deepfake exploitation, welcomed the bill’s passage. “It’s a very important first step,” she told AFP. “Now I have a legal weapon in my hand, which nobody can say no to.”
The new law requires online platforms to establish clear procedures for victims to report and remove non-consensual intimate imagery, as lawmakers seek to hold tech companies more accountable for content shared on their platforms.