Deepfakes: New Law Protects Victims from AI Harms
USA | Tuesday, May 20, 2025
The law passed with strong bipartisan support, drawing only two dissenting votes in the House. More than 100 organizations, including major tech companies such as Meta, TikTok, and Google, backed the legislation. The First Lady also played a role in pushing for the law, inviting a teenage victim as her guest at a joint session of Congress.
The story of Elliston Berry, a Texas high school student, illustrates why the law was needed. A classmate used AI to alter a photo of her to make it appear she was nude, then shared it on Snapchat. Berry and other teens have faced similar harassment. The Take It Down Act will provide legal protections for victims like Berry, ensuring that those who share such images face consequences.
Tech platforms have already taken some steps to address this issue. Some have forms for users to request the removal of explicit images, and others have partnered with non-profits to facilitate the removal of such images. However, bad actors often seek out platforms that don't take action, underscoring the need for legal accountability.
The Center for Countering Digital Hate praised the law, stating that it compels social media platforms to protect women from intimate and invasive breaches of their rights. Public Citizen's Ilana Beller also emphasized the importance of sending a clear signal that such behavior is unacceptable.