EFF: Updates
Weakening Speech Protections Will Punish All of Us—Not Just Meta
Recently, a California Superior Court jury found that Meta and YouTube harmed a user through some of the features they offered. And a New Mexico jury concluded that Meta deceived young users into thinking its platforms were safe from predation.
It’s clear that many people are frustrated by big tech companies and perhaps Meta in particular. We too have been highly critical of them and have pushed for years to end their harmful corporate surveillance. So it’s not surprising that a jury felt like Mark Zuckerberg and his company, along with YouTube, needed to be held accountable.
While it would be easy to claim that these cases set a legal precedent that should make social media companies fearful, that’s not exactly true. And that’s actually a good thing for the internet and its users.
These jury trials were just an early step in a long road through the court system. These cases will now go up on appeal, where the courts’ rulings about the First Amendment and immunity under Section 230 will likely get reconsidered.
As we have argued many times before, the First Amendment protects both user speech and the choices platforms make about how to deliver that speech (in the same way it protects newspapers' right to curate their editorial pages as they see fit). Features on social media sites that are designed to connect users cannot be separated from the users’ speech, which is why courts have repeatedly held that these features are indeed protected.
So while it may be tempting to celebrate these juries’ decisions as a "win" against big tech, in fact the ramifications of lowering First Amendment and immunity standards on other speakers—ones that members of the public actually like, and do not want to punish—are bad. We can’t create less protective speech rules for Meta and Google alone just because we want them held accountable for something else.
As we have often said, much of the anger against these companies arises from people rightfully feeling that these companies harvest and exploit their data, and monetize their lives for crass economic reasons. We therefore continue to urge Congress to pass a comprehensive national privacy law with a private right of action to address these core concerns.
A Baseless Copyright Claim Against a Web Host—and Why It Failed
Copyright law is supposed to encourage creativity. Too often, it’s used to extract payouts from others.
Higbee & Associates, a law firm known for sending copyright demand letters to website owners, targeted May First Movement Technology, accusing it of infringing a photograph owned by Agence France-Presse (AFP). The claim was baseless. May First didn’t post the photo. It didn’t even own the website where the photo appeared.
May First is a nonprofit membership organization that provides web hosting and technical infrastructure to social justice groups around the world. The allegedly infringing image was posted years ago by one of May First’s members, a human rights group based in Mexico. When May First learned about the copyright complaint, it ensured that the group removed the image.
That should have been the end of it. Instead, the firm demanded payment.
So EFF stepped in as May First’s counsel and explained why AFP and Higbee had no valid claim. After receiving our response, Higbee backed down.
This outcome is a reminder that targets of copyright demands often have strong defenses—especially when someone else posted the material.
Hosting Content Isn’t the Same as Publishing It
Copyright law treats those who create or control content differently from those who simply provide the tools or infrastructure for others to communicate.
In this case, May First provided hosting services but didn’t post the photo. Courts have long recognized that service providers aren’t direct infringers when they merely store material at the direction of users. In those cases, service providers lack “volitional conduct”—the intentional act of copying or distributing the work.
Copyright law also recognizes that intermediaries can’t realistically police everything users upload. That’s why legal protections like the Digital Millennium Copyright Act safe harbors exist. Even outside those safe harbors, courts still shield service providers from liability when they promptly respond to notices.
May First did exactly what the law expects: it notified its member, and the image came down.
A Claim That Should Have Been Withdrawn Much Sooner
The troubling part of this story isn’t just that a demand was sent. It’s that Higbee and AFP continued to demand money and threaten litigation after May First explained that it was merely a hosting provider and had the image removed.
In other words, the claim was built on shaky legal ground from the start. Once May First explained its role, Higbee should have withdrawn its demand. Individuals and small nonprofits shouldn’t need lawyers just to stop aggressive copyright shakedowns.
Statutory Damages Fuel Copyright Abuse
This isn’t an isolated case—it’s a predictable result of copyright law’s statutory damages regime.
Statutory damages can reach $150,000 per work, regardless of actual harm. That enormous leverage incentivizes firms like Higbee to send mass demand letters seeking quick settlements. Even meritless claims can generate revenue when recipients are too afraid, confused, or resource-constrained to fight back.
This hits community organizations, independent publishers, and small service providers that don’t have in-house legal teams especially hard. Faced with the threat of ruinous statutory damages, many just pay what is demanded.
That’s not how copyright law should work.
Know Your Rights
If you receive a copyright demand based on material someone else posted, don’t assume you’re liable.
You may have defenses based on:
- Your role as a hosting or service provider
- Lack of volitional conduct
- Prompt removal of the material after notice
- The statute of limitations
- The copyright owner’s failure to timely register the work
- The absence of actual damages
Every situation is different, but the key point is this: a demand letter is not the same as a valid legal claim.
Standing Up to Copyright Trolls
May First stood its ground, and Higbee abandoned its demand after we explained the law.
But the bigger problem remains. Copyright’s statutory damages framework enables aggressive enforcement tactics that target the wrong parties and chill lawful online activity.
Until lawmakers fix these structural incentives, organizations and individuals will keep facing pressure to pay up—even when they’ve done nothing wrong.
If you get one of these demand letters, remember: you may have more rights than it suggests.
- EFF Letter to Higbee and Associates, March 4, 2026
Print Blocking Won't Work - Permission to Print Part 2
This is the second post in a series on 3D print blocking. For the first entry, check out Print Blocking is Anti-Consumer - Permission to Print Part 1.
Legislators across the U.S. are proposing laws to force “blueprint blockers” on 3D printers sold in their states. This mandated censorware is doomed to fail for its intended purpose, but will still manage to hurt the professional and hobbyist communities relying on these tools.
3D printers are commonly used to repair belongings, decorate homes, print figurines, and so much more. It’s not just hobbyists; 3D printers are also used professionally for parts prototyping and fixturing, small-batch manufacturing, and workspace organization. In rare cases, they’ve also been used to print parts needed for firearm assembly.
Many states have already banned the unlicensed manufacture of firearms using 3D printers or computer-controlled machine tools, known as Computer Numerical Control (CNC) machines. Recently proposed laws seek to impose technical limitations on 3D printers (and in some cases, CNC machines) in the hope of enforcing this prohibition.
This is a terrible idea; these mandates will be onerous to implement, will lock printer users into vendor software, will impose one-time and ongoing costs on both printer vendors and users, and will lay the foundation for a 3D-print censorship platform that can be reused in other jurisdictions. We dive more into these issues in the first part of this series.
On a pragmatic level, however, these state mandates are just wishful thinking. Below, we dive into how 3D printing works, why these laws won’t deter the printing of firearms, and how regular lawful use will be caught in the proposed dragnet.
How 3D Printers Work
To understand the impact of this proposed legislation, we need to know a bit about how 3D printers work. The most common printers work similarly to a computer-controlled hot glue gun on a motion platform; they follow basic commands to maintain temperature, extrude (push) plastic through a nozzle, and move a platform. These motions together build up layers to make a final “print.” Modern 3D printers often offer more features like Wi-Fi connectivity or camera monitoring, but fundamentally they are very simple machines.
The basic instructions used by most 3D printers are called Geometric Code, or G-Code, and they specify very basic motions such as “move from position A to position B while extruding plastic.” The list of commands that will eventually produce a part is transferred to the printer in a text file thousands to millions of lines long. The printer dutifully follows these instructions with no overall idea of what it is printing.
While it is possible to write G-Code by hand for either a CNC machine or a 3D printer, the vast majority is generated by computer aided manufacturing (CAM) software, often called a “slicer” in 3D printing since it divides a 3D model into many 2D slices then generates motion instructions.
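To make this concrete, below is a minimal sketch in Python of the kind of output a slicer generates: a handful of G-Code commands tracing one perimeter of a simple rectangular part like the one pictured below. The extrusion rate and feed rate are invented for illustration, and real slicers add temperature control, infill, retraction, and much more, which is how these files balloon into the hundreds of thousands of lines.

```python
# A minimal, illustrative slicer fragment: emit G-Code for one rectangular
# perimeter at a given layer height. Values such as the extrusion rate are
# invented for illustration; real slicers compute them from nozzle size,
# layer height, and filament diameter.

def perimeter_gcode(width_mm, depth_mm, z_mm, extrude_per_mm=0.05):
    """Emit G-Code tracing the perimeter of a width x depth rectangle."""
    corners = [(0, 0), (width_mm, 0), (width_mm, depth_mm),
               (0, depth_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f} F600 ; lift to layer height",
             "G0 X0.00 Y0.00 ; travel move, no plastic extruded"]
    distance = 0.0  # cumulative distance traveled while extruding, in mm
    prev_x, prev_y = corners[0]
    for x, y in corners[1:]:
        # Edges are axis-aligned, so move length is just the coordinate delta.
        distance += abs(x - prev_x) + abs(y - prev_y)
        e = distance * extrude_per_mm  # cumulative filament to push, in mm
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f} ; extrude along edge")
        prev_x, prev_y = x, y
    return lines

# One 0.2 mm layer of a 40 x 20 mm rectangle; a full print repeats this
# (plus infill and travel moves) for every layer of the part.
for line in perimeter_gcode(40, 20, 0.2):
    print(line)
```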
This same general process applies to CNC machines which use G-Code instructions to guide a metal removal tool. CNC machines have been included in previous prohibitions on firearm manufacturing and file distribution and are also targeted in some of these bills.
There are other types of 3D printers, such as those that print concrete, resin, metal, chocolate, and other materials using slightly different methods. All of these would be subject to the proposed requirements, however unlikely it is that anyone would do harm with a gun made out of chocolate.
Simple rectangular 3D model for a test fit.
Part of the 173,490-line G-Code file a slicer produced for this simple rectangular model.
How is Firearm Detection Supposed to Work?
Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software and must implement firearm detection algorithms, either on the printer itself or in slicer software. These algorithms must detect firearm files using a maintained database of existing models. Vendors must then verify that their printers are on the allow-list maintained by the state before they can offer them for sale.
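To see why this is harder than it sounds, below is a naive sketch of the kind of database check these bills seem to imagine, assuming detection by exact file fingerprint. (The database contents and matching method are hypothetical, not drawn from any actual bill or product.)

```python
# A naive "prohibited model database" check using exact file hashing.
# Hypothetical illustration only: a one-byte change to a model file yields
# a different hash, so an exact-match blocklist misses near copies, while
# any fuzzier geometric test risks flagging lookalike props and toys.
import hashlib

BLOCKED_FINGERPRINTS = {
    # SHA-256 of the empty file, standing in for a prohibited model's hash.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(model_bytes: bytes) -> str:
    """Hash the raw bytes of an uploaded model or G-Code file."""
    return hashlib.sha256(model_bytes).hexdigest()

def is_blocked(model_bytes: bytes) -> bool:
    return fingerprint(model_bytes) in BLOCKED_FINGERPRINTS

print(is_blocked(b""))         # True: matches our stand-in database entry
print(is_blocked(b"solid x"))  # False: any other bytes hash differently
```

Exact matching like this is trivially under-inclusive, and anything fuzzier starts sweeping in the film props and toys discussed below; either way, the database must be centrally maintained and endlessly updated.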
Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer has ended support. Owners of existing noncompliant 3D printers in regulated states will be unable to legally resell their printers on the secondary market.
What Will Actually Happen?
While the proposed laws allow for scanning to happen on either the printer itself or in the slicer software, the reality is more complicated.
The computers inside many 3D printers have very limited computational and storage capacity; it would be impossible for the printer’s computer to render the G-Code into a 3D model and compare it with the database of prohibited files. Thus the only way to achieve this through the machine would be to upload all print files to a cloud comparison tool, creating new delays, errors, and unacceptable invasions of privacy.
Many vendors will instead choose to permanently link their printers to a specific slicer that implements firearm detection. This requires cryptographic signing of G-Code to ensure only authorized prints are completed, and will lock 3D printer owners into the slicer chosen by their printer vendor.
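Here is a hypothetical sketch of how such a signing scheme could work: the vendor’s slicer appends an authentication tag over the G-Code, and the printer firmware rejects anything it cannot verify. The key handling and file framing below are invented for illustration; no actual vendor’s protocol is described.

```python
# Hypothetical vendor lock-in via signed G-Code: the slicer tags each file
# with an HMAC, and the printer firmware only accepts files whose tag
# verifies against the key baked into both programs.
import hashlib
import hmac

VENDOR_KEY = b"key-shared-by-vendor-slicer-and-firmware"  # hypothetical

def slicer_sign(gcode: bytes) -> bytes:
    """What the vendor slicer would do: append an authentication tag."""
    tag = hmac.new(VENDOR_KEY, gcode, hashlib.sha256).hexdigest()
    return gcode + b"\n; SIG " + tag.encode()

def printer_accepts(candidate: bytes) -> bool:
    """What the printer firmware would do: refuse files it can't verify."""
    body, sep, tag = candidate.rpartition(b"\n; SIG ")
    if not sep:
        return False  # no tag at all, e.g. output from a third-party slicer
    expected = hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

official = slicer_sign(b"G1 X10.00 Y10.00 E0.500")
print(printer_accepts(official))                    # True: vendor slicer output
print(printer_accepts(b"G1 X10.00 Y10.00 E0.500"))  # False: same moves, no tag
```

Because only the vendor’s slicer holds the key, every open source or third-party slicer is locked out by construction, regardless of whether its output is lawful.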
Regardless of the specifics of their implementation, these algorithms will interfere with 3D printers' ability to print other parts without actually stopping manufacture of guns. It takes very little skill for a user to make slight design tweaks to either a model or G-Code to evade detection. One can also design incomplete or heavily adorned models which can be made functional with some post-print alterations. While this would be pioneered by skilled users—like the ones who designed today’s 3D printed guns—once the design and instructions are out there anyone able to print a gun today will be able to follow suit.
Firearm part identification features also impose costs on 3D printer manufacturers, and hence on their end consumers. Manufacturers must develop or license these costly algorithms and continuously maintain and update both the algorithms and the database of firearm models. Older printers that cannot comply cannot legally be resold in states where they are banned, creating additional e-waste.
While those wishing to create guns will still be able to do so, people printing other functional parts will likely be caught up in these algorithms, particularly for things like film props, kids’ toys, or decorative models, which often closely resemble real firearms or firearm components.
What Are The Impacts of These Changes?
Technological restrictions on manufacturing tools’ abilities are harmful for many reasons. EFF is particularly concerned with this regulation locking a 3D printer to proprietary vendor software. Vendors will be able to use this mandate to support only in-house materials, locking users into future purchases. Vendor slicer software is often based on out-of-date open source software, and forcing users to use that software deprives them of new features, or even of the use of their printer altogether if the vendor goes out of business. At worst, some of these bills will make it a misdemeanor to fix those problems and gain full control of your printer.
File-scanning frameworks required by this regulation will lay the foundation for future privacy and freedom intrusions. This requirement could be co-opted to scan prints for copyright violations, inviting abuse much like DMCA takedowns, or to suppress models considered obscene under a patchwork of definitions. What if you were unable to print a repair part because the vendor asserted the model violated their trademark? What if your print was considered obscene?
Regardless of your position on current prohibitions on firearms, we should all fight back against this effort to force technological restrictions on 3D printers, and legislators must similarly abandon the idea. These laws impose real costs and potential harms on lawful users, lay the groundwork for future censorship, and simply won’t deter firearm printing.
Print Blocking is Anti-Consumer - Permission to Print Part 1
This is the first post in a series on 3D print blocking. For the next entry, check out Print Blocking Won't Work - Permission to Print Part 2.
When legislators give companies an excuse to write untouchable code, it’s a disaster for everyone. This time, 3D printers are in the crosshairs across a growing number of states. Even if you’ve never used one, you’ve benefited from the open commons these devices have created—which is now under threat.
This isn’t the first time we’ve gone to bat for 3D printing. These devices come in many forms and can construct nearly any shape with a variety of materials. This has made them absolutely crucial for anything from life-saving medical equipment, to little Iron Man helmets for cats, to everyday repairs. For decades these devices have been a proven engine for innovation, while democratizing a sliver of manufacturing for hobbyists, artists, and researchers around the world.
For us all to continue benefiting from this grassroots creativity, we need to guard against the type of corporate centralization that has undermined so much of the promise of the digital era. Unfortunately some state legislators are looking to repeat old mistakes by demanding printer vendors install an enshittification switch.
In the U.S., three states have recently proposed that commercial 3D-printer manufacturers must ensure their printers only work with their software, and are responsible for checking each print for forbidden shapes—for now, any shape vendors consider too gun-like. The 2D equivalent of these “print-blocking” algorithms would be demanding HP prevent you from printing any harmful messages or recipes. Worse still, some bills would introduce criminal penalties for anyone who bypasses this censorware, or for anyone simply reselling their old printer without these restrictions.
If this sounds like Digital Rights Management (DRM) to you, you’ve been paying attention. This is exactly the sort of regulation that creates a headache and privacy risk for law-abiding users, is a gift for would-be monopolists, and can be totally bypassed by the lawbreakers actually being targeted by the proposals.
Ghosting Innovation
“Print blocking” is currently coming for an unpopular target: ghost guns. These are privately made firearms (PMFs) that are typically harder to trace and can bypass other gun regulations. Contrary to what the proposed regulations suggest, these guns are often not printed at home, but purchased online as mass-produced build-it-yourself kits and accessories.
Scaling production with consumer 3D printers is expensive, error-prone, and relatively slow. Successfully making a working firearm with just a printer still requires some technical know-how, even as 3D printers improve beyond some of these limitations. That said, many have concerns about unlicensed firearm production and sales, which is exactly why these practices are already illegal in many states, including all of the states proposing print blocking.
Mandating algorithmic print-blocking software on 3D printers and CNC machines is just wishful thinking. People illegally printing ghost guns and accessories today will have no qualms with undetectably breaking another law to bypass censoring algorithms. That’s if they even need to—the cat and mouse game of detecting gun-like prints might be doomed from the start, as we dive into in this companion post.
Meanwhile, the overwhelming majority of 3D-printer users do not print guns. Punishing innovators, researchers, and hobbyists because of a handful of outlaws is bad enough, but this proposal does it by also subjecting everyone to the anticompetitive and anticonsumer whims of device manufacturers.
Can’t make the DRM thing work
We’ve been railing against Digital Rights Management (DRM) since the DMCA made it a federal crime to bypass code restricting your use of copyrighted content. That legal protection for DRM has since been weaponized by manufacturers to gain greater leverage over their customers and enforce anti-competitive practices.
The same enshittification playbook applies to algorithmic print blockers.
Restricting devices to manufacturer-provided software is an old tactic from the DRM playbook, and is one that puts you in a precarious spot where you need to bend to the whims of the manufacturer. Only Windows 11 supported? You need a new PC. Tools are cloud-based? You need a solid connection. The company shutters? You now own an expensive paperweight—which used to make paperweights.
It also means useful open source alternatives that fit your needs better than the main vendor’s tools are off the table. The 3D-printer community got a taste of this recently, when manufacturer Bambu Lab pushed out restrictive firmware updates complicating the use of open source software like OrcaSlicer. The community blowback forced some accommodations to keep these alternatives viable. Under the worst of these laws, such accommodations, and other workarounds, would be outlawed with criminal penalties.
People are right to be worried about vendor lock-in, beyond needing the right tool for the job. Making you reliant on their service allows companies to gradually sour the deal. Sometimes this happens visibly, with rising subscription fees, new paywalls, or planned obsolescence. It can also be more covert, like collecting and selling more of your data, or cutting costs by neglecting security and bug fixes.
With expensive hardware on the line, they can get away with anything short of what would make you pay through the nose to switch brands.
Indirectly, this sort of print-blocking mandate is a gift to incumbent businesses making these printers. It raises the upfront and ongoing costs associated with smaller companies selling a 3D printer, including those producing new or specialized machines. The result is fewer and more generic options from a shrinking number of major incumbents for any customer not interested in building their own 3D printer.
Reaching the Melting Point
It’s already clear these bills will be bad for anyone who currently uses a 3D printer, and criminalizing alternative software is particularly devastating for open source contributors. These impacts on manufacturers and consumers add up to a major blow to the entire ecosystem of innovation we have benefited from for decades.
But this is just the beginning.
Once the infrastructure for print blocking is in place, it can be broadened. This isn’t a block on a very specific and static design, like the way some copiers block reproductions of currency. Banning a category of design based on its function is a moving target, requiring a constantly expanding blacklist. Nothing in this legislation restricts those updates to firearm-related designs. If we let proposals like this pass, we open the door for other powerful interests to add to the database of forbidden shapes.
Intellectual property is a clear expansion risk. This could look like Nintendo blocking a Pikachu toy, John Deere blocking a replacement part, or even patent trolls forcing the hand of hardware companies. Repressive regimes, here or abroad, could likewise block the printing of "extreme" and “obscene” symbols, or tools of resistance like popular anti-ICE community whistles.
Finally, even algorithmic censorship aimed at the most sympathetic targets will produce false positives, blocking 3D-printer users’ lawful expression. This has been proven again and again in online moderation. Whether by mistake or by design, a platform that has you locked in has little incentive to offer remedies for this censorship. And these new incentives for companies to surveil each print can also impose a substantial chilling effect on what users choose to create.
While 3D printers aren’t in most households, this form of regulation would set a dangerous precedent. A government mandate for on-device censors maintained by corporate algorithms is bad. It won’t work. It consolidates corporate power. It criminalizes and blocks the grassroots innovation and empowerment that has defined the 3D-printer community. We need to roundly reject these onerous restraints on creation.
Google and Amazon: Acknowledged Risks and Ignored Responsibilities
In late 2024, we urged Google and Amazon to honor their human rights commitments, to be more transparent with the public, and to take meaningful action to address the risks posed by Project Nimbus, their cloud computing contract that includes Israel’s Ministry of Defense and the Israeli Security Agency. Since then, a stream of additional reporting has reinforced that our concerns were well-founded. Yet despite mounting evidence of serious risk, both companies have refused to take action.
Amazon has completely ignored our original and follow-up letters. Google, meanwhile, has repeatedly promised to respond to our questions. Yet more than a year and a half later, we have seen no meaningful action by either company. Neither approach is acceptable given the human rights commitments these companies have made.
Additionally, Microsoft required a public leak before it felt compelled to investigate and confirm that its client, the Israeli government, was indeed misusing its services in ways that violated Microsoft’s public commitments to human rights. This should have given both Google and Amazon an additional reason to take a close look and let the public know what they find, but nothing of the sort materialized.
Google: Known Risks, No Meaningful Action
Google’s own internal assessments warned of the risks associated with Project Nimbus even before the contract was signed. Major news outlets have reported that Google provides the Israeli government with advanced cloud and AI services under Project Nimbus, including large-scale data storage, image and video analysis, and AI model development tools. These capabilities are exceptionally powerful, highly adaptable, and well suited for surveillance and military applications.
Despite those warnings, and the multiple reports since then about human rights abuses by the very portions of the Israeli government that use Google’s and Amazon’s services, the companies continue to operate business as usual. It seems that they have taken the position that they do not need to change course or even publicly explain themselves unless the media or other external organizations present definitive proof that their tools have been used in specific violations of international human rights or humanitarian law. While that conclusive public evidence has not yet emerged for all the companies, the risks are obvious, and the companies are aware of them. Instead of conducting robust, transparent human rights due diligence, Amazon and Google are continually choosing to look the other way.
Google’s own internal assessments undermine its public posture. According to reporting, Google’s lawyers and policy staff warned that Google Cloud services could be linked to the facilitation of human rights abuses. In the same report, Google employees also raised concerns that the company’s cloud and AI tools could be used for surveillance or other militarized purposes, which seems very likely given the Israeli government’s long-standing reliance on advanced data-driven systems to control and monitor Palestinians.
Google has publicly claimed that Project Nimbus is “not directed at highly sensitive, classified, or military workloads” and is governed by its standard Acceptable Use Policies. Yet reporting has revealed conflicting representations about the contract’s terms, including indications that the Israeli government may be permitted to use any services offered in Google’s cloud catalog for any purpose. Google has declined to publicly resolve these contradictions, and its lack of transparency is problematic. The gap between what Google says publicly and what it knows internally should alarm anyone who hopes to take the company’s human rights commitments seriously.
Google’s and Amazon’s AI Principles Require Proactive Action
Even after being revised last year, Google’s AI Principles continue to commit the company to responsible development and deployment of its technologies, including implementing appropriate human oversight, due diligence, and safeguards to mitigate harmful outcomes and align with widely accepted principles of international law and human rights. While the updated principles no longer explicitly commit Google to avoiding entire categories of harmful use, they still require the company to assess foreseeable risks, employ rigorous monitoring and mitigation measures, and act responsibly throughout the full lifecycle of AI development and deployment.
Amazon has similarly committed to responsible AI practices through its Responsible AI framework for AWS services. The company states that it aims to integrate responsible AI considerations across the full lifecycle of AI design, development, and operation, emphasizing safeguards such as fairness, explainability, privacy and security, safety, transparency, and governance. Amazon also says its AI services are designed with mechanisms for monitoring and risk mitigation to help prevent harmful outputs or misuse and to enable responsible deployment across a range of use cases.
Here, the risks are neither speculative nor remote. They are foreseeable, well-documented, and exacerbated by the context in which Project Nimbus operates: an ongoing military campaign marked by widespread civilian harm and credible allegations of grave human rights violations, including genocide. In such circumstances, waiting for definitive proof is not responsible risk management; it is willful blindness.
Modern cloud and AI systems are designed to be flexible, customizable, and deployable at scale, often beyond the vendor’s direct visibility. That reality is precisely why human rights due diligence must be proactive. Waiting for a leaked document or whistleblower account demonstrating direct misuse, as occurred in Microsoft’s case, means waiting until harm has already been done.
Microsoft’s Experience Should Have Been Warning Enough
As noted above, the recent revelations about Microsoft’s technologies being misused by the Israeli military in violation of Microsoft’s commitments illustrate the dangers of this wait-and-see approach. Google and Amazon should not need a similar incident to recognize what is at stake. The demonstrated misuse of comparable technologies, combined with Google’s and Amazon’s own knowledge of the risks associated with Project Nimbus, should already be sufficient to trigger action.
The appropriate response is to act responsibly and proactively.
Google and Amazon should immediately:
- Conduct and publish an independent human rights impact assessment of Project Nimbus.
- Disclose how they evaluate, monitor, and enforce compliance with their AI Principles in high-risk government contracts, including and especially in Project Nimbus.
- Commit to suspending or restricting services where there is a credible risk of serious human rights harm, even if definitive proof of misuse has not yet emerged.
Google and Amazon publicly emphasize their commitment to responsible AI and respect for human rights. Those commitments are meaningless if they apply only once harm is undeniable and irreversible. In conflict settings, especially where secrecy and information asymmetry are the norm, companies must act on credible risk, not perfect evidence.
Google and Amazon have the knowledge, the leverage, and the responsibility to act now. Choosing not to is still a choice, and one that carries real consequences for people whose lives are already at risk.
EFF’s Submission to the UN OHCHR on Protection of Human Rights Defenders in the Digital Age
Governments around the world are adopting new laws and policies aimed at addressing online harms, including laws intended to curb cybercrime and disinformation, and ostensibly protect user safety. While these efforts are often framed as necessary responses to legitimate concerns, they are increasingly being used in ways that restrict fundamental rights.
In a recent submission to the United Nations Office of the High Commissioner for Human Rights, we highlighted how these evolving regulatory approaches are affecting human rights defenders (HRDs) and the broader digital environment in which they operate.
Threats to Human Rights Defenders
Across multiple regions, cybercrime and national security laws are being applied to prosecute lawful expression, restrict access to information, and expand state surveillance. In some cases, these measures are implemented without adequate judicial oversight or clear safeguards, raising concerns about their compatibility with international human rights standards.
Regulatory developments in one jurisdiction are also influencing approaches elsewhere. The UK’s Online Safety Act, for example, has contributed to the global diffusion of “duty of care” frameworks. In other contexts, similar models have been adopted with fewer protections, including provisions that criminalize broadly defined categories of speech or require user identification, increasing risks for those engaged in the defense of human rights.
At the same time, disruptions to internet access—including shutdowns, throttling, and geo-blocking—continue to affect the ability of HRDs to communicate, document abuses, and access support networks. These measures can have significant implications not only for freedom of expression, but also for personal safety, particularly in situations of conflict or political unrest.
The expanded use of digital surveillance technologies further compounds these risks. Spyware and biometric monitoring systems have been deployed against activists and journalists, in some cases across national borders. These practices result in intimidation, detention, and other forms of retaliation.
The practices of social media platforms can also put human rights defenders—and their speech—at risk. Content moderation systems that rely on broadly defined policies, automated enforcement, and limited transparency can result in the removal or suppression of speech, including documentation of human rights violations. Inconsistent enforcement across languages and regions, as well as insufficient avenues for redress, disproportionately affects HRDs and marginalized communities.
Putting Human Rights First
These trends underscore the importance of ensuring that regulatory and corporate responses to online harms are grounded in human rights principles. This includes adopting clear and narrowly tailored legal frameworks, ensuring independent oversight, and providing effective safeguards for privacy, expression, and association.
It also requires meaningful engagement with civil society. Human rights defenders bring essential expertise on the local and contextual impacts of digital policies, and their participation is critical to developing effective and rights-respecting approaches.
As digital technologies continue to shape civic space, protecting the individuals and communities who rely on them to advance human rights remains an urgent priority.
You can read our full submission here.
Digital Hopes, Real Power: From Revolution to Regulation
This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings.
From Russia—where wartime censorship and more stringent platform controls have choked dissenting voices—to Nigeria, with its aggressive takedown orders turning social media into political battlegrounds, and to Turkey, where sweeping “disinformation” laws have made platforms heavily policed spaces, freedom of expression online is under attack. Per Freedom House’s 2023 Freedom on the Net report, 66% of internet users live in countries where political or social websites are blocked, and 78% live in countries where people have been arrested for online posts. New social media regulations have emerged in dozens of countries in the past year alone.
The online landscape looks markedly different than it did fifteen years ago. Back then, social media was still new and largely free from legal restrictions: platforms moderated content in response to user reports, governments rarely targeted them directly, and blocks (when they happened) were temporary, with censorship mostly focused on whole websites that VPNs or proxies could easily bypass. The internet was far from free, but governments’ crude tactics left space for circumvention.
Those early restrictions, as crude as they were, marked the start of a rapid evolution in online censorship. Governments like Thailand, which blocked thousands of YouTube videos in 2007 over critical content, and Turkey, which demanded takedowns from YouTube before blocking the site entirely, tested legal and technical pressures to mute dissent and force platforms’ compliance. By 2011, governments weren't just reacting—they had learned to pressure platforms into becoming instruments of state censorship, shifting their playbooks from blunt blocks to sophisticated systems of control that simple VPNs could no longer reliably bypass. Governments across the region were watching closely, and by the time the 2011 uprisings began, they were prepared to respond.
Looking Back
After learning that a Facebook page—We Are All Khaled Said, honoring a young man killed by police brutality—sparked Egypt’s street protests, Western media hailed online platforms as engines of democracy. The page’s co-creator, Wael Ghonim, told a journalist: “This revolution started on Facebook.” That claim was debated and contested for years; critically, Facebook had suspended the page two months earlier over pseudonyms violating its real-name policy, restoring it only after advocates intervened.
Once the protests moved to the streets, Egypt’s government—alert to social media’s power—quickly blocked Facebook and Twitter, then enacted a near-total shutdown (more on that in part 4 of this series). As history shows, the measures didn’t stop the revolution, and Egyptian president Hosni Mubarak stepped down. For a brief moment, freedom appeared to be on the horizon. Unfortunately, that moment was short-lived.
Egypt’s Digital Dystopia
Just as the Egyptian military government quashed revolution in the streets, they also shut down online civic space. Today, Egypt’s internet ranks low on markers of internet freedom. The military government that has ruled Egypt since 2013 has imprisoned human rights defenders and enacted laws—including 2015’s Counter-terrorism Law and 2018’s Cybercrime Law—that grant the state broad authority to suppress speech and prosecute offenders.
The 2018 law demonstrates the ease with which cybercrime laws can be abused. Article 7 of the law allows websites that constitute “a threat to national security” or to the “national economy” to be blocked. The Association for Freedom of Thought and Expression (AFTE) has criticized the law’s loose definition of “national security,” which covers “everything related to the independence, stability, security, unity and territorial integrity of the homeland.” Notably, individuals can also be penalized—and sentenced to up to six months’ imprisonment—for accessing banned websites.
Article 25, which prohibits the use of technology to “infringe on any family principles or values in Egyptian society,” and Article 26, which prohibits the dissemination of material that “violates public morals,” have been used in recent years to prosecute young people who use social media in ways the government disapproves of. Many of those prosecuted have been young women; for instance, belly dancer Sama Al Masry was sentenced to three years in prison and fined 300,000 Egyptian pounds under Article 26.
Beyond Egypt: Regional Trends
Egypt’s trajectory reflects a wider regional and global pattern. In the years following the uprisings, governments moved quickly to formalize legal authority over digital space, often under the banner of combating cybercrime, terrorism, or “false information.” These laws often contain vaguely worded provisions criminalizing “misuse of social media” or “harming national unity,” giving authorities wide discretion to prosecute speech.
In Qatar and Bahrain, a social media post can result in up to five years in jail. In 2018, prominent Bahraini human rights defender Nabeel Rajab was convicted of “spreading false rumours in time of war”, “insulting public authorities”, and “insulting a foreign country” for tweets he posted about the killing of civilians in Yemen and sentenced to five years imprisonment.
Two years later, Qatar amended its penal code by setting criminal penalties for spreading “fake news.” Article 136 (bis) sets criminal penalties for broadcasting, publishing, or republishing “rumors or statements or false or malicious news or sensational propaganda, inside or outside the state, whenever it is intended to harm national interests or incite public opinion or disturb the social or public order of the state” and sets a punishment of a maximum of five years in prison, and/or 100,000 Qatari riyals. The penalty is doubled if the crime is committed in wartime.
Now, as war has once again reached the region, these laws are being put to the test. Bahraini authorities have arrested at least 100 people in relation to protests or expression related to the war, while Qatar has arrested more than 300 people on charges of spreading “misleading information.”
And in the UAE, at least 35 people—most or all of whom are foreign nationals—have been arrested and “accused of spreading misleading and fabricated content online that could harm national defence efforts and fuel public panic,” according to the Times of India. The arrests fall under the UAE’s 2022 Federal Decree Law No. 34 on Combating Rumours and Cybercrimes which—says Human Rights Watch—is, along with the country’s Penal Code, “used to silence dissidents, journalists, activists, and anyone the authorities perceived to be critical of the government, its policies, or its representatives.”
From Regional Practice to Global Pattern
Today roughly four out of five countries worldwide have enacted cybercrime legislation, a dramatic expansion over the past decade, with many governments adopting or revising such laws in the years following the Arab uprisings.
Outside the region, other nations have repurposed these laws to police speech. In Nigeria, journalists have been detained under the Cybercrime Act, with dozens of prosecutions documented since 2015. Bangladesh’s Digital Security Act has been used in thousands of cases—including hundreds against journalists—while in Uganda, authorities have prosecuted political critics under computer misuse laws for social media posts.
Cybercrime laws are only one piece of a broader toolkit that governments now deploy to control digital spaces. Over the past decade, authorities have introduced sweeping “disinformation” laws, platform liability rules, age verification laws, and data localization requirements that force companies to store data domestically or appoint legal representatives within national jurisdictions. These measures give governments leverage over global technology firms, enabling them to demand faster content removals, obtain user data, or threaten steep fines and throttling if platforms fail to comply. Rather than relying solely on blunt instruments like blocking entire websites, states increasingly govern speech through layered regulatory systems that pressure platforms to police users on the state’s behalf.
The platforms too have changed. The same social media companies that were once championed as tools of democratic mobilization now operate in more constrained environments—and often act as willing participants in repressing speech. Facing financial penalties and the prospect of being blocked entirely, many companies expanded compliance with takedown requests after 2011, as can be seen in the companies’ own transparency reports. They later invested heavily in automated technologies that remove vast quantities of content before it is ever publicly available.
Rights groups around the world, including EFF, have warned that these dynamics disproportionately impact historically marginalized and vulnerable groups, as well as journalists and other human rights defenders. Research by the Palestinian digital rights organization 7amleh and reporting by Human Rights Watch have documented how content moderation policies, government pressure, and opaque enforcement mechanisms increasingly converge—leaving activists, journalists, and human rights defenders caught between state censorship and platform governance.
The New Architecture of Repression
Looking back now, it’s clear that, fifteen years ago, governments were caught off guard. They crudely blocked platforms, shut down networks, and scrambled to contain movements they did not fully understand. But in the years since, states have systematically adapted, transforming what were once reactive measures into durable systems of control.
Today’s controls are embedded in law, outsourced to platforms, and justified through the language of security, safety, and order. Cybercrime statutes, disinformation frameworks, and platform regulations form a layered architecture that allows states to shape online expression at scale while maintaining a veneer of legality. In this system, repression is often procedural, bureaucratic, and continuous.
The question is no longer whether the internet can enable dissent, but whether it can still sustain it under these conditions.
This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.
Welcome, Daily Show Viewers! Learn More About EFF and Privacy's Defender
The Electronic Frontier Foundation is the leading nonprofit defending civil liberties in the digital world. EFF’s work to protect your rights on the internet is supported by the more than 30,000 members who have joined our mission by donating this year alone.
For over 35 years, our lawyers, activists, and technologists have been thinking about the next big thing in tech before anyone else—whether that’s age verification, AI, or Palantir. Whatever causes you fight for, you rely on the internet to do so. And EFF protects the infrastructure of rebellion.
To learn more about our work, follow EFF on social media and subscribe to EFF's EFFector newsletter below to learn about the ways the internet and online rights are changing and what that means for you. And join EFF to support our fight—because if you use technology, this fight is yours.
Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, by Cindy Cohn
In Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press), EFF Executive Director Cindy Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.
"Let's Sue the Government" T-ShirtSometimes our supporters call EFF a merch store with a law firm attached because our stickers, hoodies and shirts are so well known. Our "Let's Sue the Government" shirt tells people: When your rights are at risk, you don’t stay quiet.
EFF's History
In early 1990, the U.S. Secret Service conducted raids tracking the distribution of a document illegally copied from a telecom company’s computer; one of those targeted was an Austin, TX publisher named Steve Jackson, whose computers were seized but later returned without any charges filed. Jackson’s business had suffered, and he discovered that the government had read and deleted his customers’ emails. He sought a civil liberties organization to represent him for this violation of his rights, but no existing organization understood the technology well enough to grasp the free speech and privacy issues at hand.
But a few well-informed technologists did understand. Mitch Kapor, former president of Lotus Development Corp.; John Perry Barlow, a Wyoming cattle rancher and lyricist for the Grateful Dead; and John Gilmore, an early employee of Sun Microsystems, with help from Apple co-founder Steve Wozniak, decided to do something about it – and so the Electronic Frontier Foundation was born in July 1990. The Steve Jackson Games case turned out to be an extremely important one for the early internet: For the first time, a court held that electronic mail deserves at least as much protection as telephone calls.
EFF's original logo, in use from 1990-2018
EFF continued to take on cases that set important precedents for the treatment of rights in cyberspace. In our second big case, Bernstein v. U.S. Department of Justice, the United States government prohibited a University of California mathematics Ph.D. student from publishing online an encryption program he had created. Years earlier, the government had placed encryption on the United States Munitions List, alongside bombs and flamethrowers, as a weapon to be regulated for national security purposes. Our lawsuit established that written software code is speech protected by the First Amendment, and the court further ruled that the export control laws on encryption violated Bernstein's rights by prohibiting his constitutionally protected speech. Now everyone has the right to "export" encryption software—by publishing it on the Internet—without prior permission from the U.S. government.
Since then we’ve fought against government and corporate abuses of our Constitutional rights, on issues including warrantless wiretapping by intelligence agencies, the panopticon of street-level surveillance that seeks to track everything we do, and the corporate surveillance that turns our clicks into their commodity, as well as issues of antitrust and intellectual property, artificial intelligence, cybersecurity, and much more. We are lawyers, technologists, activists, and lobbyists who work every day for the privacy, security and dignity of all who use technology - and if you use technology, this fight is yours, too.
EFF's Greatest Hits
While many early battles over the right to communicate freely and privately stemmed from government censorship, today EFF is fighting for users on many other fronts as well.
Today, certain powerful corporations are attempting to shut down online speech, prevent new innovation from reaching consumers, and facilitate government surveillance. We challenge corporate overreach just as we challenge government abuses of power.
We also develop technologies that can help individuals protect their privacy and security online, which our technologists build and release freely to the public for anyone to use.
In addition, EFF is engaged in major legislative fights, beating back digital censorship bills disguised as intellectual property proposals, opposing attempts to force companies to spy on users, championing reform bills that rein in government surveillance, documenting police technology and where it's used, helping users protect themselves from surveillance, and much more.
Learn more about some of EFF's most impactful work—download a PDF of our new catalog, "Now That's What I Call Digital Rights!"
EFF's Cindy Cohn on The Daily Show! Tonight Monday, March 30
EFF Executive Director Cindy Cohn will be on The Daily Show tonight, Monday March 30, at 11 pm ET and PT, speaking with host Jon Stewart. Cindy will discuss her long history of fighting for privacy online and her new book, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press). The book details her own personal story alongside her role representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.
You can watch the interview on Comedy Central; extended episodes are released shortly thereafter on Paramount Plus, and segments appear on YouTube. We will also share the interview once it is available online.
About The Daily Show
The Daily Show is a long-running comedy news show that covers the biggest headlines of the day. It has won 26 Primetime Emmy Awards and has introduced the world to now well-known actors and comedians such as Steve Carell, Samantha Bee, Ed Helms, and Trevor Noah, as well as Stephen Colbert and John Oliver, who now host shows of their own.
US Tech Companies Must Be Accountable in US Courts for Facilitating Persecution and Torture Abroad, EFF Urges US Supreme Court
SAN FRANCISCO – U.S. technology companies should be legally accountable in U.S. courts for building tools that purposefully and actively facilitate human rights abuses by foreign governments, the Electronic Frontier Foundation argued in a brief filed Friday to the U.S. Supreme Court.
The brief filed in the case of Cisco Systems, Inc., et al., v. Doe I, et al. urges the high court to uphold the U.S. Court of Appeals for the 9th Circuit’s 2023 ruling that U.S. corporations can be held liable under the Alien Tort Statute (ATS) – a law that lets noncitizens bring claims in U.S. federal court for international law violations – for taking actions in the U.S. that aided and abetted persecution and torture abroad.
“This is not a case about a company that merely provided routers or other general-purpose technologies to a foreign government. It is about a company that purposefully and actively assisted in the persecution of a religious group,” the brief says. “There is a growing set of companies—including American companies—that provide surveillance technologies that are vulnerable to, and indeed are being used to, support gross human rights abuses. Because of this, the outcome of this case will have profound implications for millions of people who rely on digital technologies in their everyday lives, including to practice their religion.”
The “Golden Shield” system that Cisco custom-built for the Chinese government was an essential component of persecution against the Falun Gong religious group—persecution that included online spying and tracking, detention, and torture. Victims reported that intercepted communications were used during torture sessions aimed at forcing them to renounce their religion. Falun Gong victims and their families sued Cisco in 2011 and a federal district judge dismissed the case in 2014. The case was delayed three times as the Supreme Court considered three prior ATS cases.
The 9th Circuit appeals court – after proceedings including an amicus brief from EFF – reversed that lower decision, holding that U.S. corporations can be held liable under the ATS for aiding and abetting human rights abuses abroad. It also held that a company does not need to have the “purpose” to facilitate human rights abuses in order to be held liable; it only needs to have “knowledge” that its assistance helped in such abuses. It then held that the plaintiffs’ allegations showed that Cisco’s actions met both standards. The court also held that the fact that a technology has legitimate uses does not shield a company from liability for other uses that led to human rights abuses when the standards of international law are met. Taken cumulatively, Cisco’s actions in the U.S. were sufficient to allow the case to proceed, the 9th Circuit ruled.
Cisco appealed to the Supreme Court, which granted review in January. The case, No. 24-856, is scheduled for argument on April 28.
Cisco Systems is just one of many U.S. companies that make surveillance systems, spyware, and other products used by governments to violate people’s human rights.
“This Court must not shut the courthouse door to victims of human rights abuses that are actively powered by American corporations,” the brief says. “In the digital age, repressive governments rarely act alone to violate human rights. They have accomplices—including technology companies that have the sophistication and technical know-how that those repressive governments lack.”
For EFF’s amicus brief to the U.S. Supreme Court: https://www.eff.org/document/2026-03-27-eff-amicus-brief-cisco-v-doe-scotus
For EFF’s Doe I v. Cisco case page: https://www.eff.org/cases/doe-i-v-cisco
For the U.S. Supreme Court docket: https://www.supremecourt.gov/docket/docketfiles/html/public/24-856.html
Contact:
Sophia Cope, Senior Staff Attorney, sophia@eff.org
Cindy Cohn, Executive Director, cindy@eff.org
Traffic Violation! License Plate Reader Mission Creep Is Already Here
A new report from 404 Media sheds light on how automated license plate readers (ALPRs) are being used in ways that go beyond the press releases and glossy marketing materials put out by law enforcement agencies and ALPR vendors. In December 2025, Georgia State Patrol ticketed a motorcyclist for holding a cell phone in his hand. According to the report, the ticket read, “CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.”
If you’re thinking that this sounds outside of the scope of what ALPRs are supposed to do, you’re right. In November 2025, Flock Safety, the maker of the ALPR in question, wrote a post about how they definitely are in compliance with the Fourth Amendment to the U.S. Constitution. In this post, which highlighted what ALPRs are and what they are not, the company writes: “What it is not: Flock ALPR does not perform facial recognition, does not store biometrics, cannot be queried to find people, and is not used to enforce traffic violations.” (emphasis added)
Well, apparently their customers never got the memo and apparently the technology’s design does not explicitly prevent behavior the company officially and publicly disavows.
Or at least this used to be the case: Flock now lists six different companies providing traffic enforcement technology on its “Partner program” site. Public records also show that speed enforcement cameras have been connected to Flock's ALPR network.
EFF and other privacy advocates have long warned about mission creep when it comes to surveillance infrastructure. Police often swear that a piece of technology will only be used in a particular set of circumstances or to fight only the most serious crimes only to utilize it to fight petty crimes or watch protests.
We continue to urge cities, states, and even companies to end their relationship with Flock Safety because of the incompatibility between the mass surveillance it enables and its inability to protect civil liberties—including preventing mission creep.
Supreme Court Agrees With EFF: ISPs Don't Have To Be Copyright Enforcers
If your ISP can be liable for huge amounts of money for not terminating your access to the internet because of accusations that you—or someone in your household or college network—has committed copyright infringement, that is dangerous. We live in a world where high speed internet access is a necessity for participation in everyday life. That’s why liability for ISPs for their customers’ actions should not be expanded.
Last fall, EFF filed an amicus brief urging the U.S. Supreme Court to reject an expansive theory of secondary copyright liability that threatened to impose massive damages on internet service providers and other technology companies simply for offering widely used services. Yesterday, the Court agreed.
In Cox v. Sony, the Court reversed a Fourth Circuit decision that had upheld a billion-dollar verdict against internet provider Cox Communications. Writing for the majority, Justice Thomas explained that contributory liability is limited to two situations: when a defendant actively induces infringement, or when it provides a product or service that it knows is tailored for infringement.
This framework closely tracks the approach EFF urged in our amicus brief. As we explained, courts should look to patent law for guidance in defining the boundaries of secondary copyright liability. Patent law recognizes liability where a defendant actively induces infringement, or distributes a product knowing that it lacks substantial non-infringing uses. The Court’s opinion adopts that same basic structure.
EFF also emphasized the broader public interest at stake in preserving these limits. Expansive theories of secondary liability do not just affect large internet providers. They can chill innovation, threaten smaller technology companies, and undermine the development of general-purpose tools that millions of people rely on for lawful speech, creativity, education, and access to information. When liability turns on generalized knowledge that some users may infringe, service providers face pressure to over-police user activity or withdraw useful services altogether.
The Court also made clear that mere knowledge that some customers use a service to infringe is not enough. Copyright holders must show that the provider intended its service to be used for infringement. That intent can be established only through active inducement or by showing that the service is specifically designed for unlawful uses—not simply because the service provider failed to take affirmative steps to prevent infringement.
Applying this standard, the Court held that Cox could not be liable. There was no evidence that Cox encouraged or promoted infringement. The record instead showed that Cox implemented warning systems, suspended service, and in some cases terminated accounts in an effort to discourage unlawful activity.
Nor was Cox’s internet access service tailored to infringement. The Court emphasized that general-purpose internet connectivity is capable of substantial lawful uses. Treating the provision of such services as contributory infringement would improperly expand secondary liability beyond the limits recognized in prior Supreme Court decisions.
The Court also rejected the Fourth Circuit’s broader rule that supplying a service with knowledge it may be used to infringe is itself sufficient for liability. That theory conflicts with decades of precedent warning against imposing copyright liability based solely on knowledge or a failure to take additional preventive steps.
EFF is pleased with yesterday’s opinion. We will continue to advocate for the public’s ability to build, use, and innovate with new technologies.
Link to our amicus brief:
https://www.eff.org/document/us-s-ct-cox-v-sony-eff-et-al-amicus-brief
Link to the opinion:
https://www.supremecourt.gov/opinions/25pdf/24-171_bq7d.pdf
EFF Sues for Answers About Medicare's AI Experiment
SAN FRANCISCO – The Electronic Frontier Foundation (EFF) today filed a Freedom of Information Act (FOIA) lawsuit against the Centers for Medicare & Medicaid Services (CMS) seeking records about a multi-state program that is using AI to evaluate requests for medical care.
"Tasking an algorithm with making determinations about treatment can create unwarranted—and even discriminatory—delays or denials of necessary medical care," said Kit Walsh, EFF’s Director of AI and Access-to-Knowledge Legal Projects. "Given these serious risks, the public requires transparency that it hasn't gotten. We're suing to get badly needed answers about how Medicare's AI experiment works."
Announced by CMS Administrator Dr. Mehmet Oz last year, the pilot program known as WISeR (Wasteful and Inappropriate Service Reduction) uses AI to assess prior authorization requests from Medicare beneficiaries. Previously rare in original Medicare, prior authorization requires medical providers to obtain advance approval from a patient’s health insurer before delivering certain treatments or services as a condition of coverage.
Unfortunately, there is little information about how the AI algorithms used in WISeR work, including what training data they rely on. It remains unclear whether WISeR has any safeguards against systemic flaws such as algorithmic bias, privacy violations, and wrongful denials of care.
Healthcare experts, care providers, and lawmakers have all raised alarms that WISeR may cause serious harm to patients by relying on AI unless it has the necessary safeguards. Despite this widespread criticism, WISeR was rolled out in six states in January, potentially affecting as many as 6.4 million Medicare beneficiaries, according to one estimate.
By design, WISeR incentivizes contracted companies to deny prior approval against the best interests of patients. Vendors are compensated, in part, on the volume of healthcare services they deny and are entitled to as much as 20 percent of the associated savings. Just weeks after WISeR's launch, hospitals and other health care providers started reporting delays in care approval, communication gaps, and administrative strain.
Earlier this year, EFF submitted a FOIA request to CMS asking for records related to WISeR. Among other records, the request sought agreements with software vendors participating in WISeR; records related to any tests for accuracy, bias, or hallucinations in vendors' technology; and records related to any audits, monitoring, or evaluation of WISeR and participating vendors. To date, CMS has not provided any of these records to EFF. EFF's FOIA lawsuit asks for their immediate processing and release.
"The public has a right to know more about the algorithms driving decisions around their healthcare," said Tori Noble, Staff Attorney at EFF. "Without greater transparency, patients, providers, and policymakers will continue to be left in the dark.”
EFF thanks Stanford Law School's Juelsgaard Intellectual Property & Innovation Clinic for their help in preparing this lawsuit.
For the complaint: https://www.eff.org/document/complaint-eff-v-cms-medicare-wiser-foia
👓 Who's Really Watching What Smartglasses See? | EFFector 38.6
After years of tech industry experiments, smartglasses with embedded cameras and microphones have finally gone mainstream. And, disturbingly, sometimes it's not just their owners who are watching what these devices record. In this week's EFFector newsletter, we're taking a closer look at the privacy implications of Meta Ray-Bans, and sharing all the latest in the fight for privacy and free speech online.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This week's issue covers EFF's new executive director; how publishers blocking the Internet Archive threaten the web's historical record; and why you should think twice before buying or using Meta’s Ray-Bans.
Prefer to listen in? EFFector is now available on all major podcast platforms. This week, we're chatting with EFF Security and Privacy Activist Thorin Klosowski about smartglasses and privacy. And don't miss the EFFector news quiz. You can find the episode and subscribe on your podcast platform of choice:
%3Ciframe%20height%3D%22200px%22%20width%3D%22100%25%22%20frameborder%3D%22no%22%20scrolling%3D%22no%22%20seamless%3D%22%22%20src%3D%22https%3A%2F%2Fplayer.simplecast.com%2Fc139744a-aad2-4d31-8b5e-84764a13bf2f%3Fdark%3Dfalse%22%20allow%3D%22autoplay%22%3E%3C%2Fiframe%3E Privacy info. This embed will serve content from simplecast.comWant to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against online surveillance when you support EFF today!
Speaking Freely: Jacob Mchangama
Interviewer: Jillian York
Jacob Mchangama is a Danish lawyer, human-rights advocate, and public commentator. He is the founder and director of Justitia, a Copenhagen-based think tank focusing on human rights, freedom of speech, and the rule of law. His new book with Jeff Kosseff, The Future of Free Speech: Reversing the Global Decline of Democracy's Most Essential Freedom, comes out on April 7th.
Jillian York: Welcome, Jacob. I'm just going to kick off with a question that I ask everyone, which is: what does free speech mean to you?
Jacob Mchangama: I like to use the definition that Spinoza, the famous Dutch renegade philosopher, used. He said something along the lines, and I'm paraphrasing here, that free speech is the right of everyone to think what they want and say what they think, or the freedom to think what they want and say what they think. I think that's a pretty neat definition, even though it may not be fully exhaustive from sort of a legal perspective, I like that.
JY: Excellent. I really like that. I'd like to know what personally shaped your views and also what brought you to doing this work for a living.
JM: I was born in Copenhagen, Denmark, which is a very liberal, progressive, secular country. And for most of my youth and sort of young adulthood, I did not think much about free speech. It was like breathing the air. It was essentially a value that had already been won. This was up until sort of the mid-naughties. I think everyone was sort of surfing the wave of optimism about freedom and democracy at that time.
And then Denmark became sort of the epicenter of a global battle of values over religion, the relationship between free speech and religion with the whole cartoon affair. And that's really what I think made me think deep and hard about that, that suddenly people were willing to respond to cartoonists using crayons with AK-47s and killings, but also that a lot of people within Denmark suddenly said, “Well, maybe free speech doesn't include the right to offend, and maybe you're punching down on a vulnerable minority,” which I found to be quite an unpersuasive argument for restricting free speech.
But what's also interesting was that you saw sort of how positions on free speech shifted. So initially, people on the left were quite apprehensive about free speech because they perceived it to be about an attack on minorities, in this case, Muslim immigrants in Denmark. Then the center right government came into power in Denmark, and then the narrative quickly became, well, we need to restrict certain rights of hate preachers and others in order to defend freedom and democracy. And then suddenly, people on the right who had been free speech absolutists during the cartoon affair were willing to compromise on it, and people on the left who had been sort of, well, “maybe free speech has been taken too far” were suddenly adamant that this was going way too far, and unfortunately, that is very much with us to this day. It's difficult to find a principled, consistent constituency for free speech.
JY: That's a great way of putting it. I feel like, with obvious differences from country to country, it feels like that kind of polarization is true everywhere, including the bit about flipping sides. I guess my next question, then, is: what do you feel like most people get wrong about free speech?
JM: I think there's a tendency—and I'm talking especially in the West, in the traditional free and open democracies—I think there's a huge tendency to take all the benefits of free speech for granted and focus myopically on the harms, real and perceived, of speech. I mean, just the fact that you and I can sit here, you know, I don't know where you are in the world, but you and I can have a direct, live, uncensored conversation…that is something that you know was unimaginable not that long ago, and we just take that for granted. We take it for granted that we can have access to all the information in the world that would previously have required someone to spend years in libraries, traveling the world, finding rare manuscripts.
We take it for granted, but this is the difference between us and say dissidents in Iran or Russia or Venezuela. We take it for granted that we can go online and vent against our governments and say things, and we can also vent things on social issues that might be deeply offensive to other people, but generally we don't face the risk of being imprisoned or tortured. But that's just not the case in many other countries.
So, I think those benefits, and also, I would say, when you look at the historical angle, every persecuted or discriminated against group that has sought and achieved a higher degree of equal dignity, equal protection under the law, has relied on speech. First they relied on speech, then they could rely on free speech at some point, but initially they didn't have free speech right? So whether it's abolitionist the civil rights movement in the United States, you know my good friend Jonathan Rauch, who was sort of at the forefront of of securing same sex marriage in the United States, knows that was a fight that very much relied on speech. And women's rights…fierce women, who would protest outside the White House and burn in effigy figures of the President, would go to prison. Women didn't have political power. They didn't have guns. They didn't have economic power, they had speech, and that's what you need, to petition the government, to shine a light on abuse, to rally other allies and so on. And I think unfortunately, we've unlearned those hugely important precedents for why we have free speech today.
JY: I’m definitely going to come back to that. But first I want to ask you about the new book you have coming out with Jeff Kosseff, The Future of Free Speech: Reversing the Global Decline of Democracy's Most Essential Freedom. I'm very excited, I’ve pre-ordered it.
So, in light of that, I’ve got a two part question: First, what are some of the trends that concern you the most about what’s going on today? And then, what do you think we need to do to ensure that there is a future for free speech?
JM: So first of all, I was thrilled to be able to write it with Jeff, because Jeff is such an authority on First Amendment section 230 issues. But from the personal perspective, you could say that this book sort of continues where my previous book on the history of free speech finishes.
And so, based on the idea that we are living through a free speech recession that has become particularly acute in this digital age, where we see what I term as various waves of elite panic that lead to attempts to impose sort of top down controls on online speech in particular—and this is not only in the countries where you'd expect it, like China and Russia and Iran, but increasingly also in open democracies that used to be the heartland of free speech—there's a tendency, I think, in democracies, to view free speech no longer as sort of a competitive advantage against authoritarian states, or a right that would undermine authoritarians, but as sort of a Trojan horse which allows the enemies of democracies, both at home and abroad, to weaponize free speech against democracy, and so that's why the overwhelming
legislative initiatives and framing of free speech is often “this is a danger.” This is something we need to do something about. We need to do something about disinformation. We need to do something about hate speech. We need to do something about extremism. We need to do something about, you know, we need to have child safety laws. We need age verification. And you know, you know the list all too well.
JY: I do, absolutely.
JM: Where I think where free speech advocates often fall short, is that we're very good at sort of talking about the slippery slope and John Stuart Mill and all these things, and that's important, but very often we don't have compelling proposals to sell to people who are not sort of civil libertarians at heart, and who are generally in favor of free speech, but who are frightened about particular developments at particular manifestations of speech that they think have become so dangerous to you know, freedom, democracy, whatever interest that they're willing to compromise free speech.
And so we try to point to some concrete examples of—giving life to the old cliché—fighting bad speech with better speech. So some of those examples are counter speech. There are some great examples. One of them is from Brazil, where there was a black weather woman who was the first black weather woman to be sort of on a prominent TV channel, and she was met with brutal racism. So, you know, what should have been a happy moment for her became quite devastating. And so there was this NGO that printed billboards of these very nasty racist comments, blurred the identity of the user who had said it, but then put them in the neighborhoods where these people lived. So that was a very powerful way to confront Brazilians with the fact that, you know, racism is alive. It's right here in your neighborhood. And you know they used the N word and everything, and nothing was censored in terms of this racism, which was put right in front of it of everyone, and it actually led to a lot of people sort of deleting their comments and someone apologizing, and led to, I think, a fruitful debate in Brazilian society.
Then you have other types of counter speech. One of them is a Swedish journalist called Mina Dennert. She started the “I am here” movement. So it's a counter speech movement, which I think spans 150,000 volunteers across 15 countries. And they use counter speech online, typically on Meta platforms, I think, where they essentially gather together and push back against hate speech, not necessarily to convince the speaker that they're wrong, but to give support to those who are the victims, but also to essentially convince what is often termed the movable middle, to show them that there are people who disagree with racist hate speech, and there's actually empirical data to suggest that these can be effective strategies. You can also use humor.
Daryl Davis is a very extreme example. He's a black jazz musician who has made it his life mission to befriend members of the KKK. And he has converted around 200 members of the KKK, to essentially leave it and he does that by just having a conversation. Because if your worldview is that blacks are inferior and should not enjoy equal rights, and you have a conversation with someone in a way where it becomes impossible for you to uphold that worldview, because the person in front of you is clearly someone who's intelligent, articulate, who can counter all your your preconceived notions, then it becomes very difficult to uphold that worldview right? And you can imagine that those members who leave the KKK then become agents of change within their former communities.
So there are various counter speech strategies that have shown a promise, and at the Future of Free Speech [think tank] that I direct, we've developed these toolkits, and we do teachings around the world, I think we've translated them into nine or ten languages. So it's not a panacea, obviously, to everything that's going on, but it's something quite practical, I think. And the good thing about it is also that it doesn't depend on an official definition of hate speech. If you're concerned about a particular type of speech, you can use counter speech to counter it. But you're not engaging in censorship, and we don't have to agree on what the definition of hate speech is. In that way, it’s hopefully an empowering tool.
And another example: we talk about how Taiwan has been quite an inspiring case for using crowd sourced fact checking, for using sort of a bottom up approach to fighting disinformation from China, but also around Covid, so zero lockdowns and no centralized censorship, and they’re doing better than a lot of Western democracies that use more illiberal methods and the crowd sourced fact checking pioneered in Taiwan is what inspired Bird Watch on Twitter prior to its being taking over by Elon Musk, and which is now community notes on X, which I actually think for all the things you might dislike about X, is a feature that is quite promising.
JY: Definitely. I absolutely agree with that, and I'm really glad you mentioned your previous book, which I loved, and the idea of a free speech recession.
You’ve done so much of this work all over the world, and have learned from people in different places and tried to understand the challenges they’re facing in terms of free speech. We actually started this project, Speaking Freely, primarily to share those different perspectives and to bring them to our readership, the majority of which comes from the U.S. What I’d like to ask you, then, is what do you feel that we in the “West” or in more open societies have to learn from free speech activists in the rest of the world?
JM: Just…the bravery of say, Iranians who now face complete—and this was even before the attacks by the US and Israel—complete internet bans. But who have also relied on social media platforms and digital creativity to circumvent official propaganda and censorship. I think those types of societies provide sort of a real time experiment, right? You know, okay, we have we have social media, and it's messy, and sometimes it's ugly, and sometimes some of these tech companies do things that we disapprove of, but you know the cure in terms of further government control, for instance, let's say, getting rid of section 230, adding age verification laws, trying to create exceptions to the First Amendment in cyberspace…we have societies where that is happening, albeit, of course, at a very extreme scale. But would you really trade the freedoms, however messy they are, for that kind of society?
And then, I also worry a lot about the state of affairs in Europe, where I'm from, where it's not unusual if you're in Germany, to have the police show up at your door if you've insulted a powerful politician. For the book, I interviewed an Israeli, Jewish woman who lives in Berlin. She's on the far left and very opposed to to Israel's policies, and she's been arrested four times for for protesting with a plaque that says, “as an Israeli Jew, stop the genocide in Gaza.” And again, you can agree or disagree whether there's a genocide, but that's just political speech. Yet the optics of a Jew—an Israeli, Jewish woman—being arrested by German police in Berlin in the name of fighting antisemitism is, I think, absurd, right?
JY: I’m laughing only because I think I’ve said that exact sentence in an interview with the German press.
JM: But this is the reality right now. And I think it's also a good example of the fact that there have been people on the left in Europe who have said, well, we need to do something about the far right. And therefore it's okay to crack down, you know, use hate speech laws and so on. And then October 7 happened, and suddenly you see a lot of minorities and people on the left who are becoming the targets of laws against hate speech or glorification of terrorism and so on and so forth. And I think that's a powerful case for why you want a pretty hard nosed principle of consistent protection of free speech, also online. And, given the priorities of the current administration in the United States, I think that if the First Amendment and section 230 were not in place in the United States, the kind of laws that you have in Europe would be very moldable for the current administration to go after. I mean, it’s already going after its enemies, real and perceived, but it often loses in court exactly because of constitutional protections, including the First Amendment. But if that protection wasn't there, they would be much more successful, I think, in going after speech that they don't like.
JY: That’s such a fantastic answer, and I’m in total agreement. I was actually living in Berlin until quite recently and saw quite a bit of that firsthand. It’s really troubling.
I want to shift course for a moment. We hopefully have some young people reading this as well, and I think right now in this moment where age verification proposals are happening everywhere—which we at EFF are really concerned about—it’s important to speak to them as well. What advice would you give to young readers who are coming of age around the topic of free speech and who are interested in doing this sort of work?
JM: I think young people are obviously immersed in the digital age, and some of them may never have opened a physical book. I don't know. Maybe it's a Boomer prejudice when I say that, but, but, I don't think it's a stretch to imagine that the vast majority of speech and expression that they're confronted with is through devices of a sort. I think it's crucial to understand that, you know, the system of free speech was developed before that, and so not to focus solely on thinking about free speech only through the lens of the digital age. What came before it is really important to give you some perspective.
So that’s one thing, but I also have two kids, aged 13 and 16, so I’ve thought a lot and fought a lot about some of these issues. I understand where some of the age verification concerns come from. I have parental controls on my children's phones and devices, and try to control it as best as possible, because I do think there can be harms if you spend too much time. But on the other hand, I would also say—and this goes back to the harms and benefits—sometimes there's this analogy that people want to make that social media is like tobacco, which I think is such a poor comparison, because, you know, no one in the world would disagree that tobacco is extremely harmful, right? It's cancerous and all kinds of other things. There are no benefits to tobacco, but social media access, I think, is very different. For instance, I moved to the United States with my family three years ago. My children had no problem speaking English, doing well in school because of YouTube. They could speak almost with the accent, they were immersed into cultural idioms, and they could learn stuff. And also in terms of connections, they have friends back home, it would be very difficult for them to stay in touch the same way that they can now and have connections, if it wasn't due to technology. And so I think that social media for minors also has benefits that make it very, very different from the tobacco analogy.
Plus, I also think, and here I'm pointing my finger at Jonathan Haidt, that some of the evidence that is being pushed for these kinds of bans seem not to reflect scientific consensus, and that there's a lot of subject matter experts who actually think that the case is much more muddled than than the message that he has pushed in his best selling book, but which is now going the rounds.
But it amazed me to look at. First of all, let me say I've admired Jonathan Haidt for a long time. I loved his previous work, but I just feel like his crusade on social media for minors and age verification is…in a certain sense, he's gone down some of the roads that he warned against in some of his previous books, in terms of motivated reasoning and confirmation bias and so on. But I saw Jonathan Haidt praise the Minister of Digital Affairs for Indonesia for their age verification bill that is supposed to come into effect now. Indonesia is a country that right now, I think, has a bill in place that will give further powers to the government to ban LGBT content, and what’s the justification? Protecting children. It is a country where someone uploaded a Tiktok video where they said an Islamic prayer before eating pork…two years in prison, right? So it's a country that is in the lower half of Freedom House's Freedom on the Net rankings. So it's amazing to me that a good liberal Democrat like Jonathan Haidt would essentially lend his legitimacy to a country like Indonesia when no one, no serious person, can be in doubt that these kinds of laws will be used and abused by a country like Indonesia to crack down on religious and political, sexual minorities and dissent in general.
JY: Absolutely. And that actually fits really well with something that I've been thinking a lot about too. I know you've written a lot about the Brussels effect and I'm trying to look at the ways in which a similar effect—not necessarily coming from Brussels, of course—is shaping internet regulation in different directions, in terms of laws influencing other laws.
Now, in terms of laws influencing other laws, age verification is, I think, one of the big ones. I mean, seeing these laws modeled after things that the UK or Australia or the U.S. has proposed, and then, just being made so much worse, and then sometimes echoing back here as well. And I think Indonesia is such a great example of that.
JM: Yeah. I mean, Australia sort of opened the Pandora’s box, and everyone is rushing in now, and I think the consequences are likely to be grave, and I think it fits into another issue which I think is even more concerning, that is this rehabilitation or of the concept of digital sovereignty. If you went back 10 years ago and talked about digital sovereignty, you would say, “Well, this is something that they do in China or Russia,” but now digital sovereignty is shouted from the rooftops in Brussels and democracies.
And you know, I could maybe understand, if digital sovereignty meant, yes, we're going to protect our critical infrastructure, or we don't want to be overly reliant on American tech platforms, given the Trump administration's hostility towards Europe. But digital sovereignty now essentially means a concept of sovereignty which asserts that governments and institutions like the European Union have powers to determine what types of information and ideas their citizens should be confronted with. Now look up Article 19 in the Universal Declaration of Human Rights, what does it say? Everyone has the right to free expression, which includes, and I'm paraphrasing here, the right to share and impart ideas across frontiers, regardless of media, right? You know this. So now we're reverting back to an idea of free expression, which says that the government can now control what type of information that…if a foreign government or information that purports to undermine democratic values in a society, then the government has a right to censor it or require that an intermediary take mitigating steps towards it. I mean, I think that is really a recipe for disaster.
JY: I’m so glad you talked about that. I don’t even think everyone talking about digital sovereignty is working with the same definition.
JM: No no, digital sovereignty can mean a lot of things. But there’s no doubt that it’s now being stretched to also include pure information and ideas rather than critical infrastructure or industrial policy where it may have a more benign role to play.
JY: Absolutely. Well, we’ve covered a lot of territory, so I’m going to ask you my favorite question, the one we ask everyone: Who is your free speech hero?
JM: I think my free speech hero would be Frederick Douglass. To me, he’s just someone who epitomizes not only being a principled defender of free speech, but someone who did free speech in practice. In his autobiography—he wrote three, I think—but in one of them there’s a foreword by the great abolitionist William Lloyd Garrison, and he describes watching and listening to Frederick Douglass give one of his first public speeches in Nantucket in 1841 and Garrison describes the impact that Douglass had on this crowd and he says something along the lines of: “I think I never hated slavery so much as in that very moment.” So you can almost feel the impact of Douglass’s speech, and that’s the gold standard, right, for what speech can do and why it should be free.
JY: Such a great answer. Thank you.
JM: Thank you.
Digital Hopes, Real Power: Reflecting on the Legacy of the Arab Spring
This is the first installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings.
A new generation of protesters, raised on social media and often fluent in the tools of digital dissent, has taken to the streets in recent months and years. In Bangladesh, Iran, Togo, France, Uganda, Nepal, and more than a dozen other countries, young people have harnessed digital tools to mobilize at scale, shape political narratives, and sustain movements that might once have been easier to ignore or suppress.
The tools at their disposal are vast, allowing them to coordinate quickly and turn local grievances into visible, transnational moments of dissent. But each new tactic is met in turn: governments now implement draconian regulations and deploy sophisticated surveillance systems, content manipulation, and automated censorship to pre-empt, predict, and punish collective action.
This cycle of digital empowerment and repression is not new. In many ways, its roots can be traced to the 2011 uprisings that rippled across the Middle East and North Africa. Often referred to as the “Arab Spring,” these movements didn’t just reshape politics…they transformed how we talk about the internet, and how governments respond in times of protest, crisis, and conflict. Fifteen years later, the legacy of that moment still defines the terms of resistance and control in the digital age.
At the time, we were sold the comforting narrative that the internet would help bring about democracy, that connectivity itself was revolutionary, and that Silicon Valley’s products—particularly social media platforms—were aligned with the people. It was a narrative that tech executives were sometimes happy to amplify and certain Western governments were happy to believe.
But the same networks that helped protesters to organize and broadcast their demands beyond their own borders laid the groundwork for new forms of repression. Over the years, the same tools that were once celebrated as tools of dissent have become instruments for tracking, harassing, and prosecuting dissenters.
This series examines the digital legacy of the 2011 uprisings that shook the region: how governments refined censorship and surveillance after 2011, how platforms alternately resisted and enabled those efforts, and how a new generation of civil society has pushed back.
"Over the years, the same tools that were once celebrated as tools of dissent have become instruments for tracking, harassing, and prosecuting dissenters."
When Tunisian fruit vendor Mohamed Bouazizi set himself on fire on December 17, 2010, after repeated harassment by local officials, he could not have known the chain reaction his act would spark. After nearly twenty-three years in power, President Zine El Abidine Ben Ali faced a public fed up with repression. Protests spread across Tunisia, ultimately forcing him to flee.
In his final speech, Ben Ali promised reforms: a freer press and fewer internet restrictions. He left before either materialized. For Tunisians, who had lived for years under normalized censorship both online and off, the promises rang hollow.
At the time, Tunisia’s internet controls were among the most restrictive in the world. Reporting by the exiled outlet Nawaat documented a sophisticated filtering regime: DNS tampering, URL blocking, IP filtering, keyword censorship. Yet despite that machinery, Tunisians built a resilient blogging culture, often relying on circumvention tools to push information beyond their borders. When protests began—and before international media caught up—they were ready.
Eleven days after Ben Ali fled, Egyptians took to the streets. International headlines rushed to label it a “Twitter revolution,” mistaking a tool for a movement. Egypt’s government drew a similar conclusion. On January 26, authorities blocked Twitter and Facebook. The next day, they shut down the internet almost entirely, a foreshadowing of what we’d see fifteen years later in Iran.
As Egyptians fought to free their country from President Hosni Mubarak’s autocratic rule, protests swept across the region to Bahrain, where demonstrators gathered at the Pearl Roundabout before facing a brutal crackdown; to Syria, where early calls for reform spiraled into one of the most devastating conflicts of the century; to Morocco, where the February 20 Movement pushed for constitutional change. Outside of the region, movements took shape in Spain, Greece, Portugal, Iceland, the United States, and beyond.
In each context, digital platforms helped circulate images, testimonies, and tactics across borders. They created visibility—and, in turn, inspired a playbook. Governments watched not only their own populations but one another, quickly learning how to disrupt networks, identify organizers, and seize back control of the narrative.
Cause and Effect
To be clear, the internet didn’t create these movements. Decades of repression, corruption, labor organizing, and grassroots activism did. Later research confirmed what many in the region already understood: digital tools helped people share information and coordinate action, but they were neither the spark nor the engine of revolt.
But regardless, the myth of the “Twitter revolution” had consequences. The breathless coverage, and rapid policy reactions that followed shaped state strategy around the world. Governments across the region and well beyond invested heavily in surveillance technologies, developed new legal mechanisms, increased their own social media presence, and found ways to influence platforms. Internet blackouts, once rare, became a normalized tool of crisis response. And companies were forced into increasingly public decisions about whether to resist state pressure or comply.
When it comes to the internet, the legacy of the 2011 uprisings that swept the region and beyond is a story about power: how states moved to consolidate control online, how platforms—often under pressure—have narrowed the space for dissent, and how civil society has been forced to evolve to defend it.
This five-part series will take a deeper look at how the internet as a space for dissent and for hope has changed over the past fifteen years throughout the region and well beyond.
Nicole Ozer Named as Electronic Frontier Foundation’s Executive Director
SAN FRANCISCO – Nicole Ozer has been appointed as executive director of the Electronic Frontier Foundation effective June 1.
Ozer is a legal expert on privacy and surveillance, artificial intelligence, and digital speech. She currently serves as the inaugural executive director of the Center for Constitutional Democracy at the University of California College of the Law in San Francisco. From 2004-2025, she was founding director of the Technology and Civil Liberties Program at the American Civil Liberties Union of Northern California. Ozer will succeed Cindy Cohn, who has been with EFF for more than 25 years and served as its executive director since 2015.
EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development, with a mission to ensure that technology supports freedom, justice, and innovation for all people of the world. The organization celebrated its 35th anniversary in 2025.
"I am honored to lead EFF forward in these critical times. EFF’s global work to defend and advance rights, justice, and democracy in the digital age is fundamental to the future of our countries, our livelihoods, and literally our lives,” Ozer said. “I am ready to hit the ground running with EFF’s exceptional staff, board, and broad base of supporters and ensure that EFF is stronger than ever. Together, we can meet this moment and build a future where technology works for the people.”
“I couldn’t be happier to pass EFF’s reins over to Nicole,” Cohn said. “She has been our stalwart partner for many years in standing up for privacy, free speech and innovation online. I’m confident that she understands both the strong heart and the future potential of EFF especially as our work is more critical than ever.”
“Nicole Ozer is the ideal person to lead EFF during this unprecedented time in our nation’s history,” said EFF Board Chair Gigi Sohn. “She possesses all of the qualities necessary to lead the organization: great vision, strong management skills and deep substantive knowledge. The fact that she has worked alongside EFF for over two decades is icing on the cake. The EFF Board is excited to welcome Nicole and begin a new chapter in our history.”
Over her more than two decades leading public interest technology work, Ozer:
- spearheaded passage of the California Electronic Communications Privacy Act – the nation’s strongest electronic surveillance law, requiring a warrant for government access to electronic information;
- modernized California law to protect reading records in the digital age by helping to craft the Reader Privacy Act requiring a “super warrant” for government access;
- created a groundbreaking model law for local democratic oversight of surveillance systems which inspired 25 laws across the country that help safeguard the rights and safety of more than 17 million people;
- litigated civil liberties cases and drafted influential amicus briefs on technology issues at all levels of state and federal court, including the U.S. Supreme Court and California Supreme Court; and
- developed multi-year campaigns to strengthen the anti-surveillance policies related to social media surveillance and face recognition of major technology companies and foster stronger privacy and free expression protection for billions of people worldwide.
Ozer is a lecturer at the University of California, Berkeley School of Law; was a 2024-2025 technology and human rights fellow with the Carr-Ryan Center for Human Rights Policy at the Harvard Kennedy School; and in 2019 was a visiting researcher at the Berkeley Center for Law and Technology and a non-residential fellow with the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society.
Ozer's work has earned accolades including the Fearless Advocate Award from the American Constitution Society Bay Area, the James Madison Freedom of Information Award from the Society of Professional Journalists of Northern California, and a 2025 California Senate Members resolution commending her “unwavering dedication to defending and promoting civil liberties in the digital world.” Her writings on privacy and constitutional law have been published widely, and she regularly provides expert testimony for government proceedings, offers commentary in the press, speaks at academic conferences, and presents at national and global forums including South by Southwest and the Centre for European Policy Studies. She holds a law degree from the University of California, Berkeley School of Law and a bachelor’s in American Studies from Amherst College.
"It is incredibly exciting to welcome Nicole Ozer as our new leader at EFF at a time when the organization's mission couldn't be more essential,” said entrepreneur, activist, writer, and EFF Board member Anil Dash. "Nicole's unique skills promise to build on the foundation that Cindy Cohn established as Executive Director, preparing EFF to serve an even more vital role in protecting privacy and innovation."
Cohn first became involved with EFF in 1993 when EFF asked her to serve as the outside lead attorney in Bernstein v. Dept. of Justice, the successful First Amendment challenge to the U.S. export restrictions on cryptography. She served as EFF’s legal director and general counsel from 2000 through 2015, and as executive director since then. She also co-hosted EFF’s award-winning “How to Fix the Internet” podcast. Her memoir, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance, was published March 10 by MIT Press, and she is now conducting a national book tour.
EFF's Board of Directors last year assembled a committee which undertook a wide search for Cohn’s successor with assistance from leadership advisory firm Russell Reynolds Associates.
Contact: press@eff.org
UK Politicians Continue to Miss the Point in Latest Social Media Ban Proposal
The UK is moving forward with its efforts to ban social media for young people. Ahead of this week’s House of Lords debate on the topic, we’re getting you situated with a primer on what’s been happening and what it all means.
On 9 March, the House of Commons discussed amendments tabled by the House of Lords in the government’s flagship legislation, the Children’s Wellbeing and Schools Bill.
The House of Lords previously tabled an amendment to “prevent children under the age of 16 from becoming or being users” of “all regulated user-to-user services,” to be implemented by “highly-effective age assurance measures,” which effectively banned under-16s from social media. When this proposal came before the House of Commons, MPs defeated it by 307 votes to 173.
Instead, the Commons proposed its own amendment: enabling the Secretary of State to introduce provisions “requiring providers of specified internet services” to prevent access by children, under age 18 rather than 16, to specified internet services or to specified features; and to restrict access by children to specified internet services which ministers provide.
Who does this give powers to?The Commons proposal redirects power from the UK Parliament and the UK’s independent telecom regulator Ofcom to the Secretary of State for Science, Innovation and Technology, currently Liz Kendall, who will be able to restrict internet access for young people and determine what content is considered harmful…just because she can. The amendment also empowers the Secretary of State to limit VPN use for under 18s, as well as restrict access to addictive features and change the age of digital consent in the country; for example, preventing under-18s from playing games online after a certain time.
Why is this a problem?This process is devoid of checks or accountability mechanisms as ministers will not be required to demonstrate specific harms to young people, which essentially unravels years-long efforts by Ofcom to assess online services according to their risks. And given the moment the UK is currently in, such as refusing to protect trans and LGBTQ+ communities and flaming hostile and racist discourses, it is not unlikely that we’ll see ministers start restricting content that they ideologically or morally feel opposed to, rather than because the content is harmful based, as established by evidence and assessed pursuant to established human rights principles.
We know from other jurisdictions like the United States that legislation seeking to protect young people typically sweeps up a slew of broadly-defined topics. Some block access to websites that contain some “sexual material harmful to minors,” which has historically meant explicit sexual content. But some states are now defining the term more broadly so that “sexual material harmful to minors” could encompass anything like sex education; others simply list a variety of vaguely-defined harms. In either instance, this bill would enable ministers to target LGBTQ+ content online by pushing this behind an under-18s age gate, and this risk is especially clear given what we already know about platform content policies.
How will this impact young people?The internet is an essential resource for young people (and adults) to access information, explore community, and find themselves. Beyond being spaces where people can share funny videos and engage with enjoyable content, social media enables young people to engage with the world in a way that transcends their in-person realm, as well as find information they may not feel safe to access offline, such as about family abuse or their sexuality. In severing this connection to people and information by banning social media, politicians are forcing millions of young people into a dark and censored world.
How did each party vote?The initial push to ban under-16s from social media came from the Conservative Party, who have since accused the UK’s Prime Minister Keir Starmer of “dither and delay” for not committing to the ban. The Liberal Democrats have also called this “not good enough.” The Labour Party itself is split, with 107 Labour Party MPs abstaining in the vote on the House of Lords amendment.
But we know that the issue of young people’s online safety is a polarizing topic that politicians have—and will continue to—weaponize for public support, regardless of their actual intentions. This is why we will continue to urge policymakers and regulators to protect people’s rights and freedoms online at all moments, and not just take the easy route for a quick boost in the polls.
How does this bill connect to the Online Safety Act?The draft Children’s Wellbeing and Schools Bill that came from the Lords provided that any regulation pertaining to the well-being of young people on social media “must be treated as an enforceable requirement” with the Online Safety Act. The Commons amendment, however, starts out by inserting a new clause that amends the Online Safety Act.
For more than six years, we’ve been calling on the UK government to pass better legislation around regulating the internet, and when the Online Safety Act passed we continued to advocate for the rights of people on the internet—including young people—as Ofcom implemented the legislation. This has been a protracted effort by civil society groups, technologists, tech companies, and others participating in Ofcom's consultation process and urging the regulator to protect internet users in the UK.
The MPs amendment essentially rips this up. Technology Secretary Liz Kendall recently said that ministers intended to go further than the existing Online Safety Act because it was “never meant to be the end point, and we know parents still have serious concerns. That is why I am prepared to take further action.” But when this further action is empowering herself to make arbitrary decisions on content and access, and banning under-18s from social media, this causes much more harm than it solves.
Is the UK alone in pushing legislation like this?Sadly, no. Calls to ban social media access for young people have gained traction since Australia became the first country in the world to enforce one back in December. On 5 March, Indonesia announced a ban on social media and other “high-risk” online platforms for users under 16. A few days later, new measures came into effect in Brazil that restricts social media access for under-16s, who must now have their accounts linked to a legal guardian. Other countries like Spain and the Philippines have this year announced plans to ban social media for under-16s, with legislation currently pending to implement this.
What are the next steps?The Children's Wellbeing and Schools Bill returns to the House of Lords on 25 March for consideration of the new Commons amendments. The bill will only become law if both Houses agree to the final draft.
We will continue to stand up against these proposals—not only to young people’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. The issue of online safety is not solved through technology alone, especially not through a ban, and young people deserve a more intentional approach to protecting their safety and privacy online, not this lazy strategy that causes more harm than it solves.
We encourage politicians in the UK to look into what is best, not what is easy, and explore less invasive approaches to protect all people from online harms.
Congress Is Dropping the Ball with a Clean Extension of FISA
Two years ago, Congress passed the “Reforming Intelligence and Securing America” Act (RISAA) that included nominal reforms to Section 702 of the Foreign Intelligence Surveillance Act (FISA). The bill unfortunately included some problematic expansions of the law—but it also included a relatively big victory for civil liberties advocates: Section 702 authorities were only extended for two years, allowing Congress to continue the important work of negotiating a warrant requirement for Americans as well as some other critical reforms.
However, Congress clearly did not continue this work. In fact, it now appears that Congress is poised to consider another extension of this program without even attempting to include necessary and common sense reforms. Most notably, Congress is not considering a requirement to obtain a warrant before looking at data on U.S. persons that was indiscriminately and warrantlessly collected. House Speaker Mike Johnson confirmed that “the plan is to move a clean extension of FISA … for at least 18 months.”
Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole, told the press he would vote for a clean extension of FISA, claiming that RISAA included enough reforms for the moment.
It’s important to note RISAA was just a reauthorization of this mass surveillance program with a long history of abuse. Prior to the 2024 reauthorization, Section 702 was already misused to run improper queries on peaceful protesters, federal and state lawmakers, Congressional staff, thousands of campaign donors, journalists, and a judge reporting civil rights violations by local police. RISAA further expanded the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. As we said when it passed, overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.
Section 702 should not be reauthorized without any additional safeguards or oversight. Fortunately, there are currently three reform bills for Congress to consider: SAFE, PLEWSA, and GSRA. While none of these bills are perfect, they are all significantly better than the status quo, and should be considered instead of a bill that attempts no reform at all.
Mass spying—accessing a massive amount of communications by and with Americans first and sorting out targets second and secretly—has always been a problem for our rights. It was a problem at first when President George W. Bush authorized it in secret without Congressional or court oversight. And it remained a problem even after the passage of Section 702 in 2008 created the possibility of some oversight. Congress was right that this surveillance is dangerous, and that's why it set Section 702 up for regular reconsideration. That reconsideration has not occurred, even as the circumstances of the NSA, Justice Department, and FBI leadership, have radically changed. Reform is long overdue, and now it's urgent.
FCC Chair Carr’s Threats to Punish Broadcasters Are Unconstitutional
EFF joined other digital rights and civil liberties organizations in calling out the unconstitutionality of Federal Communications Commission chair Brendan Carr’s recent threats to punish broadcasters for airing statements he disagrees with.
Carr’s recent threats, like his past threats, are unconstitutional efforts to coerce news coverage that favors President Donald Trump. He wrongly claims that the FCC’s “public interest” standard allows him and the commission to revoke the licenses of broadcasters who publish news that is unflattering to the government is anathema to our country’s core constitutional values.
The First Amendment constrains the FCC’s authority to force broadcasters to toe the government’s line, even though broadcast licensees are required to operate in the “public interest, convenience, and necessity.” Imposing restrictions on licensees’ speech, especially viewpoint-based limitations, are still subject to First Amendment scrutiny even if, in some circumstances, that scrutiny differs somewhat from that applied to non-broadcast media. And the “public interest” requirement, as it were, has never been interpreted to allow the type of viewpoint-based punishment that Carr has threatened here.
Everyone agrees that news reporting should strive for accuracy, but Carr’s threats have little do with that. Instead, his allegations of "falsity" are a proxy for retaliation based on (1) Carr’s subjective policy disagreements; (2) any criticism of Trump and the administration broadly; (3) treatment of anything that is not the official US government line about the Iran War as “false.”
We join the call for Carr to withdraw these threats.
- Civil Society Letter to FCC Chairman Barr
