Lawmakers are tackling the rise of deepfake AI pornography, which targets everyone from celebrities to students. Sen. Ted Cruz is leading a bill called the Take It Down Act, which aims to hold social media companies responsible for removing these fake explicit images and keeping them off their platforms.
The bill would require social media platforms to establish procedures to remove these images promptly, within 48 hours of receiving a valid request from a victim.
Platforms would also be required to make reasonable efforts to eliminate any other copies, including those shared in private groups. Enforcement would fall to the Federal Trade Commission, the agency responsible for consumer protection.
Sen. Cruz’s bill enjoys bipartisan support and is set to be formally introduced with the backing of other senators and victims of deepfake porn, underscoring the widespread impact and urgency of the issue.
Victims, including high school students, have found themselves at the mercy of AI tools that superimpose their faces onto explicit content without consent; targets have ranged from public figures like Taylor Swift to politicians like Rep. Alexandria Ocasio-Cortez.
However, the legislative landscape is not without contention. Sen. Dick Durbin has proposed an alternative bipartisan bill that allows victims to sue those involved in the creation, possession, or distribution of non-consensual deepfakes. This approach contrasts with Cruz’s bill, which places responsibility on social media platforms to moderate and remove offensive content like deepfake porn.
Debate in Congress reflects a consensus on the necessity of addressing deepfake AI pornography but diverges on the method and scope of legislative intervention. While Cruz’s bill emphasizes platform accountability, Durbin’s proposal focuses on legal recourse for victims, prompting discussion of the potential impacts on technological innovation and on liability protections for tech platforms.
In parallel, Senate Majority Leader Chuck Schumer is advancing broader AI legislation, aligned with a task force’s recommendations to address harmful deepfakes and non-consensual distribution of intimate images. The emergence of competing bills underscores the complex challenges in safeguarding against the misuse of AI technology while balancing regulatory measures and innovation in the digital sphere.