The revised NO FAKES Act, initially designed to protect against AI-generated deepfakes, is now viewed as a potential censorship tool, causing alarm among digital rights advocates.
The NO FAKES Act, originally formulated as a protective measure against AI deepfakes, has reportedly taken a concerning turn. Digital rights advocates claim that what started as a commendable effort to prevent unlawful digital replicas of individuals is now becoming a threat to the fundamental functioning of the internet.
The bill's expansion has sparked worry within the tech industry: it has apparently moved beyond its original aim of shielding famous personalities from counterfeit videos and now threatens to establish a comprehensive censorship framework.
Breaking Analysis: Key Information

The Act's initial premise was not entirely misplaced: it was intended to provide protections against AI systems that generate fake videos of real people without consent. However, instead of formulating specific, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation (EFF) calls a “federalized image-licensing system” that extends well beyond reasonable protections.
The updated bill intensifies the original approach by mandating an entirely new censorship infrastructure, one that covers not just images but also the products and services used to create them. The most alarming aspect of the NO FAKES Act is its requirement that almost all internet platforms implement systems that not only remove content upon receiving takedown notices but also prevent similar content from ever being uploaded again.
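To make the mechanism concrete, here is a minimal sketch of such a "takedown and staydown" filter, assuming a deliberately simplified exact-hash blocklist. The StaydownFilter class and its design are hypothetical illustrations, not anything specified in the bill; real deployments typically rely on fuzzier perceptual hashing (in the vein of PhotoDNA or YouTube's Content ID), precisely because exact matching is trivially evaded:

```python
import hashlib


class StaydownFilter:
    """Illustrative 'takedown and staydown' filter (hypothetical).

    Once content is removed via a takedown notice, its fingerprint is
    added to a permanent blocklist that every future upload is checked
    against.
    """

    def __init__(self) -> None:
        self.blocklist: set[str] = set()

    @staticmethod
    def fingerprint(data: bytes) -> str:
        # Exact SHA-256 hash, for simplicity. Changing a single byte
        # defeats it, which is why platforms reach for perceptual
        # (similarity-based) matching instead.
        return hashlib.sha256(data).hexdigest()

    def process_takedown(self, data: bytes) -> None:
        # A takedown notice permanently blocklists the fingerprint.
        self.blocklist.add(self.fingerprint(data))

    def allow_upload(self, data: bytes) -> bool:
        # Reject any upload whose fingerprint matches removed content.
        return self.fingerprint(data) not in self.blocklist
```

The dilemma sits in the fingerprint step: exact matching misses trivially edited re-uploads, while fuzzy perceptual matching sweeps in lawful near-matches. That tradeoff is the unreliability critics point to.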
What This Means for You
This requirement essentially obliges platforms to deploy content filters that have proven notoriously unreliable in other contexts. The Act also reaches the tools themselves, which poses significant concerns for the AI sector: the modified bill wouldn't just target harmful content, it could potentially shut down entire development platforms and software tools merely because they could be used to create unauthorized images.
This approach is akin to banning word processors because they could be used for defamation. In practice, small UK startups exploring AI image generation could find themselves embroiled in costly legal battles over questionable allegations before they ever establish themselves, while tech behemoths with in-house legal teams can weather such fights, potentially reinforcing their dominance.
What Happens Next
The NO FAKES Act would effectively mandate such filtering systems across the internet. While the bill does carve out exceptions for parody, satire, and commentary, enforcing those distinctions algorithmically has proven virtually impossible, as the sketch below suggests.
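As a hypothetical illustration of why: a fuzzy matcher reduces every decision to a similarity threshold over content fingerprints, so it can only trade missed re-uploads against wrongly blocked reuse; nothing in the computation sees intent. (The function names and the Hamming-distance comparison below are illustrative assumptions, loosely modeled on how perceptual hashes are typically compared.)

```python
def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits between two perceptual-hash values.
    return bin(a ^ b).count("1")


def is_blocked(upload_hash: int, blocklist: set[int], threshold: int) -> bool:
    # threshold = 0: exact matches only, so trivial edits slip through.
    # Higher thresholds: catch edited re-uploads, but also block parody
    # and commentary that reuse the same footage, because the matcher
    # compares pixel-derived bits, never purpose or context.
    return any(hamming_distance(upload_hash, h) <= threshold for h in blocklist)
```

Whatever threshold a platform picks, a parody and the replica it mocks can be bitwise near-identical, so the statutory exception exists only on paper.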
For smaller platforms without the resources of giants like Google, implementing such filters could be prohibitively expensive; the likely outcome is that many would simply over-censor to avoid legal risk. One might expect major tech companies to resist such sweeping regulation, yet many have remained notably silent, leading observers to suggest that established giants can readily absorb compliance costs that would devastate smaller competitors.
Tucked away in the legislation is another alarming provision, one that could expose anonymous internet users on the basis of mere allegations: the bill lets anyone obtain a subpoena from a court clerk, without judicial review or any showing of evidence, forcing services to reveal identifying information about users accused of creating unauthorized replicas. Taken together, these provisions could drastically reshape digital rights and internet freedom, and the next moves by lawmakers and tech companies will be crucial in determining the legislation's trajectory.