The NO FAKES Act has changed, and it's worse

by miles on 6/24/2025, 5:34 AM with 118 comments

by rootlocus on 6/24/2025, 8:58 AM

> The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say so of person who was allegedly “replicated.”

Sounds like the kind of system small companies can't implement and large companies won't care to implement.
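For a sense of what even the minimum "keep down" machinery looks like (and why exact matching is not enough), here is a toy sketch; the class name and interface are invented for illustration, not drawn from the bill:

```python
import hashlib

# Toy sketch of the simplest possible "keep down" filter: remember the
# hash of anything taken down, and block byte-identical re-uploads.
# Real replica filters must match *similar* content, which is far harder
# and is exactly where overbreadth creeps in.

class KeepDownFilter:
    def __init__(self):
        self._blocked = set()

    def take_down(self, content: bytes) -> None:
        """Record a takedown so identical re-uploads are blocked."""
        self._blocked.add(hashlib.sha256(content).hexdigest())

    def allows(self, content: bytes) -> bool:
        """True if the upload does not exactly match taken-down content."""
        return hashlib.sha256(content).hexdigest() not in self._blocked

f = KeepDownFilter()
f.take_down(b"offending image bytes")
print(f.allows(b"offending image bytes"))   # False: exact copy is blocked
print(f.allows(b"offending image bytes!"))  # True: one changed byte slips through
```

Even this trivial version needs storage, an intake pipeline, and an appeals path around it; anything that actually catches near-duplicates is a much larger engineering and moderation commitment.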

by stodor89 on 6/24/2025, 8:36 AM

15 years ago that would've been outrageous. But at this point they're just beating a dead horse.

by bsenftner on 6/24/2025, 11:05 AM

Doesn't all this assume that any such media is being shared on social media? The language strikes me as moot within private communities. Could this be the unrealized "thing we want," namely the killing of social media?

by t0bia_s on 6/24/2025, 7:47 PM

Why not ban lying and set up a ministry of truth?

Attempts to regulate lying are just cover for pushing certain narratives that favor particular political opinions.

by Melatonic on 6/24/2025, 4:01 PM

I'm not sure I agree with the author on this at all. They make a bunch of bold claims (comparing it to the DMCA, which is totally different) without much proof.

Counterpoint:

https://www.recordingacademy.com/advocacy/news/no-fakes-act-...

by harvey9 on 6/24/2025, 1:49 PM

So all the images need to go through replica filters, but AI makes it trivially easy to make substantially different images from a single prompt, so now we need an AI to infer the 'meaning' of an image. It all sounds like great news for chip makers and power generators.
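The gap described above shows up even with a toy perceptual hash: slight re-renders of an image hash identically, but so can an unrelated image with a similar brightness layout, which is where overbroad filtering comes from. The 8-"pixel" images below are made up purely for illustration:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, above/below the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

def hamming(a, b):
    """Count of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

img   = [10, 200, 30, 220, 40, 240, 50, 250]   # "taken down" image
near  = [12, 198, 33, 225, 38, 242, 48, 251]   # slight re-render of img
other = [5, 180, 20, 210, 60, 230, 70, 245]    # different image, same layout

print(hamming(average_hash(img), average_hash(near)))   # 0: near-duplicate caught
print(hamming(average_hash(img), average_hash(other)))  # 0: false positive
```

A filter loose enough to catch the re-render also flags the unrelated image; tighten it and a one-prompt regeneration walks straight through. Inferring whether two images depict the same person pushes the problem toward exactly the kind of heavyweight ML inference the comment jokes about.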

by daft_pink on 6/24/2025, 4:27 PM

I would really like a simple English explanation of what this does, without the lobbying/agitator catastrophizing.

Are they saying that this act allows companies to watermark their content to prevent other companies from generating secondary content from it?

How exactly does this act work?

by ProllyInfamous on 6/24/2025, 4:43 PM

Tennessee enacts several genAI laws July 1st; interestingly (perhaps due to state legislator misunderstanding of terminology?), these generic bans are sooo sweeping that they effectively ban owning any GPU.

>Oy': you got a license for that tensor core, mate?!

----

But hey, at least we're outlawing marijuana beyond what the Federal 2018 Farm Bill authorizes, nationally /s

----

Our neighboring Friend down in Texas, the goodly-astute Gov-nuh, was wise to veto similar legislation in his own jurisdiction (due to perceived unconstitutionality / legal challenges).

by mschuster91 on 6/24/2025, 7:34 AM

> The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say so of person who was allegedly “replicated.”

You already need point a) to be in place to comply with EU laws and directives (DSA, anti-terrorism [1]) anyway, and I think the UK has anti-terrorism laws with similar wording, as does the US with its CSAM laws.

Point b) is required if you operate in Germany; there have been a number of court rulings that platforms have to take down repeated uploads of banned content [2].

Point c) makes sense; it's time to crack down hard on "nudifiers" and similar apps.

Point d) is the one I have the most issues with, although that's nothing new either; unmasking users via a barely fleshed-out subpoena or dragnet orders has been a thing for many, many years now.

This thing impacts gatekeepers, so not your small mom-and-pop startup but billion-dollar companies. They can afford to hire proper moderation staff to handle such complaints; they just don't want to, because it impacts their bottom line, at the cost of everyone affected by AI slop.

[1] https://eucrim.eu/news/rules-on-removing-terrorist-content-o...

[2] https://www.lto.de/recht/nachrichten/n/vizr6424-bgh-renate-k...

by adolph on 6/24/2025, 2:50 PM

It seems like one approach is to oppose such a law, but that is playing defense, and defense eventually loses.

Another approach would be to develop OSS that fulfills the basic requirements in a non-tyrannical manner and supports people who create things. There was the example of Cliff Stoll on this site just yesterday. It is objectively wrong for someone to mistreat his creative work in that way.

What would httpd for content inspection look like? Plagiarism detection? Geo-encoded fair use?
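One hedged sketch of what such an OSS building block might start as: a notice-intake validator that refuses to act on anything less than a complete, attributable claim. All field names here are invented for illustration, not drawn from the NO FAKES text:

```python
import json
from dataclasses import dataclass

# Hypothetical fields a human reviewer would need before any takedown
# happens; a real system would also verify identity and log everything.
REQUIRED = {"claimant", "sworn_statement", "target_url", "replica_description"}

@dataclass
class Notice:
    claimant: str
    sworn_statement: str
    target_url: str
    replica_description: str

def parse_notice(raw: str) -> Notice:
    """Reject notices that are missing any required field."""
    data = json.loads(raw)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"incomplete notice, missing: {sorted(missing)}")
    return Notice(**{k: data[k] for k in REQUIRED})

ok = parse_notice('{"claimant": "A. Person", "sworn_statement": "...", '
                  '"target_url": "https://example.com/x", '
                  '"replica_description": "voice clone"}')
print(ok.claimant)
```

The point of putting this in open source would be transparency: the bar a notice must clear, and the record it leaves, are inspectable rather than buried in each platform's private tooling.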

by ls612 on 6/24/2025, 8:21 AM

Unfortunately, this is the inevitable outcome of information and computation (and therefore control) becoming cheap. Liberal political systems can no longer survive in equilibrium. The 21st century will be a story either of ruling with an iron fist or of being crushed beneath one :(