EFF: Updates
Speaking Freely: Sami Ben Gharbia
Interviewer: Jillian York
Sami Ben Gharbia is a Tunisian human rights campaigner, blogger, writer and freedom of expression advocate. He founded Global Voices Advocacy, and is the co-founder and current publisher of the collective media organization Nawaat, which won the EFF Award in 2011.
Jillian York: So first, what is your personal definition, or how do you conceptualize freedom of expression?
Sami Ben Gharbia: So for me, freedom of expression is mainly about being human. I love the definition that Arab philosophers gave to human beings; we call it the "speaking animal." That's the definition in logic, the science of logic meditated on by the Greeks, which defines a human being as a speaking animal. Later on, Descartes, the French philosopher, described it in the Cogito: I think, therefore I am. So the act of speaking is an act of thinking, and it's what makes us human. This is the definition that I love about freedom of expression, because it's the condition, the bottom line, of our being human.
JY: I love that. Is that something that you learned about growing up?
SBG: You mean, like, reading it or living?
JY: Yeah, how did you come to this knowledge?
SBG: I read a little bit of logic, the science of logic, and this is the definition that the Arabs give to define what a human being is, to differentiate us from plants or animals or, I don't know, rocks, et cetera. So humans are speaking animals.
JY: Oh, that's beautiful.
SBG: And by speaking, it's in the Arabic definition of the word speaking, it's thinking. It's equal to thinking.
JY: At what point, growing up, did you realize…what was the turning point for you growing up in Tunisia and realizing that protecting freedom of expression was important?
SBG: Oh, I think, I was born in 1967 and I grew up under the authoritarian regime of the "father" of the Tunisian nation, Bourguiba, the first president of Tunisia, who got us independence from France. And during the 80s, it was very hard to find even books that speak about philosophy, ideology, nationalism, Islamism, Marxism, etc. To us, almost everything was forbidden. You needed to hide the books that you smuggled from France or from libraries in other cities. You always hide what you are reading because you do not want to expose your identity, that you are someone who is politically engaged or an activist. From that point, I realized how important freedom of expression is, because if you are not allowed even to read or to buy or to exchange books that are deemed controversial or politically unacceptable under an authoritarian regime, that's where the fight for freedom of expression should be at the forefront of any other fight. That's the fight that we need to engage in in order to secure other rights and freedoms.
JY: You speak a number of languages. At what point did you start reading and exploring languages other than the one that you grew up speaking?
SBG: Oh, I think, well, we learn Arabic, French and English in school, in primary school and secondary school, so these are the languages that we take from school and from our readings, et cetera, and from interaction with other people in Tunisia. But my first experience living in a country that speaks another language that I didn't know was in Iran. I spent, in total, one and a half years there in Iran, where I started to learn a fourth language that I really intended to use. It's not a Latin language. It is a special language, although it uses almost the same letters and alphabet as Arabic, with some differences in pronunciation and writing. But it was easy for a native Arabic-speaking Tunisian to learn Farsi due to the familiarity with the alphabet and with the pronunciation of most of the letters. So that's the first case where I was confronted with a foreign language. It was Iran. And then during my exile in the Netherlands, I was confronted by another family of languages, Dutch, from the family of Germanic languages, and that's the fifth language that I learned in the Netherlands.
JY: Wow. And how do you feel that language relates to expression? For you?
SBG: I mean…language, it's another world. It's another universe. Because language carries culture, carries knowledge, carries history, customs. So it's a universe that is living. And once you learn to speak a new language, you actually embrace another culture. You are more open in the way you understand and accept differences between cultures, and I think that's how it makes your openness much more elastic. You accept other cultures more, other identities, and then you are not afraid anymore. You're not scared anymore of other identities, because I think the problem of civilizational crisis or conflict starts from ignorance. We don't know the others, we don't know the language, we don't know the customs, the culture, the heritage, the history. That's why we are scared of other people. So language is the first, let's say, window to other identities and the acceptance of other people.
JY: And how many languages do you speak now?
SBG: Oh, well, I don't know. Five for sure, but since I moved into exile a second time now, to Spain, I started learning Spanish, and I've been traveling a lot in Italy, so I started learning some Italian. But it is confusing, because both are Latin languages and they share a lot of words. It is confusing, but it is funny. I'm not that young anymore, I'm 58 years old, so it's not easy for someone my age to learn a new language quickly, especially when you are confusing languages from the same Latin family.
JY: Oh, that's beautiful, though. I love that. All right, now I want to dig into the history of [2011 EFF Award winner] Nawaat. How did it start?
SBG: So Nawaat started as a forum in the early 2000s, even before the phenomenon of blogs. Blogs started later on, maybe 2003-4, when they became the main tools for expression. Before that, we had forums where people debated ideas, anything. So it started as a forum, multiple forums hosted on the same domain name, which is Nawaat.org, and little by little we adopted new technology. We migrated the database from the forum to a CMS, built a new website, and then we started building the website, or the blog, as a collective blog where people can express themselves freely, in a political context where, similar to many other countries, a lot of people express themselves through online platforms because they are not allowed to express themselves freely through television or radio or newspapers or magazines in their own country.
So it started mainly as an exiled media. It wasn't journalistically oriented or rooted in journalism. It was more of a platform to give voice to the diaspora, mainly the Tunisian diaspora living in exile in France and in England and elsewhere. We published human rights reports, released news about the situation in Tunisia, supported the opposition in Tunisia, and produced videos to counter the propaganda machine of the former President Ben Ali, etc. So that's how it started, and it evolved little by little through the changes in the tech industry, from forums to blogs and then to a CMS, and then later on to social media accounts and pages. And why we created it like that was not my decision alone. It was a friend of mine; we were living in exile, and then we said, "why not start a new platform to support the opposition and this movement in Tunisia?" And that's how we did it. At first, it was fun, it was a hobby. It wasn't our work. I was working somewhere else, and he was working on something else. It was our, let's say, hobby or pastime. And little by little, it became our only job, actually.
JY: And then, okay, so let's come to 2011. I want to hear now your perspective 14 years later. What role do you really feel that the internet played in Tunisia in 2011?
SBG: Well, it was a hybrid tool for liberation, et cetera. We know the context of the internet freedom policy from the US; we know the evolution of Western interference within the digital sphere to topple governments that are deemed not friendly, et cetera. Tunisia was a friend of the West, very friendly with France and the United States and Europe. They loved the dictatorship in Tunisia, in a way, because it secured the border; it secured the country from, by then, the Islamist movement, et cetera. So the internet did play a role as a platform to spread information, to highlight the human rights abuses that were taking place in Tunisia, and to counter the narrative that was being manipulated by the government agencies, state agencies, public broadcast channels, television, news agencies, etc.
And I think we managed the big impact of the internet and the blogs by then, and of platforms like Nawaat. We adopted English. It was the first time that the Tunisian opposition used English in its discourse, with the objective of bridging the gap between the traditional support for opposition and human rights in Tunisia, which was mainly coming from French NGOs and human rights organizations, and international support, support that is not only coming from the traditional usual suspects of Human Rights Watch, Amnesty International, Freedom House, et cetera. We wanted to broaden the spectrum of the support and to reach researchers, to reach activists, to reach people who are writing about freedom elsewhere. So we managed to break the traditional chain of support between human rights movements or organizations and human rights activists in Tunisia, and we managed to broaden that and to reach other people, other audiences that were not really in touch with what was going on in Tunisia. I think that's how the internet helped in the field of international support for the struggle in Tunisia and within Tunisia.
The impact was, I think, important in raising awareness about human rights abuses in the country. For people who were not really politically knowledgeable about the situation, due to censorship and due to the problem of access to information, which was lacking in Tunisia, the internet helped spread knowledge about the situation and helped speed up the process of the unrest, actually. So I think these are the two most important impacts within the country: to broaden the spectrum of the people reached and targeted by the discourse of political engagement and activism, and second, to speed up the process of consciousness and then the action in the street. This is how I think the internet helped. That's great, but it wasn't the main tool. The main tool was really people on the ground, and maybe people who didn't have access to the internet at all.
JY: That makes sense. So what about the other work that you were doing around that time with the Arabloggers meetings and Global Voices and the Arab Techies network? Tell us about that.
SBG: Okay, so my position was founding director of Global Voices Advocacy; I was hired to found this arm of advocacy within Global Voices. And that gave me the opportunity to understand other spheres, linguistic spheres, cultural spheres. It was beyond Tunisia, beyond the Arab world and the region. I was in touch with activists from all over the world. By activists, I mean digital activists, bloggers living in Latin America or in Asia or in Eastern Europe, et cetera, because one of the projects that I worked on was Threatened Voices, which was a map of all the people who were targeted because of their online activities. That gave me the opportunity to get in touch with a lot of activists.
And then we organized the first advocacy meeting. It was in Budapest, and we managed to invite some 40 or 50 activists from all over the world, from China, Hong Kong, Latin America, the Arab world, Eastern Europe, and Africa. That broadened my understanding of the freedom of expression movement and of how technology is being used to foster human rights online, and then of the development of blog aggregators around the world, and mainly in the Arab world, where each country had its own blog aggregator. That helped me understand those worlds, as did Global Voices. Global Voices was bridging the gap between what was being written elsewhere and the English-speaking world, through its translation effort, and vice versa, and the role played by Global Voices and Global Voices Advocacy made the space and the distance between all those blogospheres feel very diminished. We were very close to the blogosphere movement in Egypt or in Morocco or in Syria and elsewhere.
And that's how Alaa Abd El Fattah and Manal Bahey El-Din Hassan and myself started thinking about how to establish the Arab Techies collective, because of the needs that we identified. There was a gap, a lack of communication between pure techies, people who were writing code, building software, translating tools and even online language into Arabic, and the people who were using those tools: the bloggers, freedom of expression advocates, et cetera. And because there were some needs that were not really being met in terms of technology, we thought that bringing these two worlds together, techies and activists, would help us build new tools, translate tools, and make tools available to the broader community of internet activists. That's how the Arab Techies collective was born in Cairo, and then we organized the Arabloggers meetings, two times in Beirut, and then a third in Tunisia, after the revolution.
It was a momentum for us, because I think it was the first time, in Beirut, that we brought together bloggers from all Arab countries. It was like a dream that was really unimaginable at a certain point, but we made it happen. And then what they call the Arab revolution happened, and we lost contact with each other, because everybody was really busy with his or her own country's affairs. Alaa was fully engaged in Egypt; myself, I came back to Tunisia and was fully engaged in Tunisia. So we lost contact, because all of us were facing a lot of trouble in our own countries. Of those bloggers who attended the Arabloggers meetings, a few were arrested, a few were killed; Bassel was in prison, people were in exile. So we lost that connection and those conferences that brought us together, but then we've seen SMEX filling that gap and taking over the work that was started by the Arab Techies and the Arabloggers conferences.
JY: We did have the fourth one in 2014 in Amman. But it was not the same. Okay, moving forward, EFF recently published this blog post reflecting on what had just happened to Nawaat, when you and I were in Beirut together a few weeks ago. Can you tell me what happened?
SBG: What happened is that they froze the work of Nawaat for one month, although the move wasn't legal, because we were respecting the law in Tunisia. They did this according to an article of the NGO legal framework under which the government can stop the work of an NGO if it doesn't respect certain legal conditions; for them, Nawaat didn't provide documentation that was requested by the government, which is a total lie, because we always submit all documentation to the government on time. So they stopped us from doing our job as what we call in Tunisia an associative media.
It's not a company, it's not a business, it's not a startup. It is an NGO that manages the website and the media, and now it has other activities. We have the main website online, but we also have a festival, a three-day festival at our headquarters. We have offline debates, where we bring actors, civil society, activists, and politicians to discuss important issues in Tunisia. We have a quality print magazine that is distributed and sold in Tunisia. We have a media innovation incubation program where we support people building projects through journalism and technology. So we have a set of offline projects that stopped for a month, and we also stopped publishing anything on the website and all our social media accounts. And Nawaat is not the only one. They also froze the work of other NGOs, like the Tunisian Association of Democratic Women, which really gives support to women in Tunisia; the Tunisian Forum for Social and Economic Rights, a very important NGO supporting grassroots movements in Tunisia; and Aswat Nissa, another NGO supporting women in Tunisia. So they targeted impactful NGOs.
So it's not an exception, and we are very grateful for the wave of support that we got from fellow Tunisian citizens, and also from friendly NGOs like EFF and others who wrote about the case. This is the context in which we are living, and we are afraid that they will go for an outright ban of the network in the future. That is the worst-case scenario that we are preparing ourselves for; we might face the fate of seeing Nawaat close its doors and stop all offline activities taking place in Tunisia. Of course, the website will remain. We need to find a way to keep producing, although it will be really risky for our on-the-ground journalists and video reporters and newsroom team, but we need to find a solution to keep the website alive. Exiled media is a very probable scenario and approach for the future, so we might go back to our exile media model, and we will keep on fighting.
JY: Yes, of course. I'm going to ask the final question. We always ask who someone’s free speech hero is, but I’m going to frame it differently for you, because you're somebody who influenced a lot of the way that I think about these topics. And so who's someone that has inspired you or influenced your work?
SBG: Although I started before the launch of WikiLeaks, for me Julian Assange was the concretization of the radical transparency movement that we saw. For me, he is one of the heroes who really shaped a decade of transparency journalism and impacted not only the journalism industry itself but even the established and mainstream media, such as the New York Times, Washington Post, Der Spiegel, et cetera. WikiLeaks partnered with big media, but not only with big media; also with small, independent newsrooms in the Global South. So for me, Julian Assange is an icon that we shouldn't forget. And he is an inspiration in the way he uses technology to fight against big tech and state and spy agencies and war crimes.
Fair Use is a Right. Ignoring It Has Consequences.
Fair use is not just an excuse to copy—it’s a pillar of online speech protection, and disregarding it in order to lash out at a critic should have serious consequences. That’s what we told a federal court in Channel 781 News v. Waltham Community Access Corporation, our case fighting copyright abuse on behalf of citizen journalists.
Waltham Community Access Corporation (WCAC), a public access cable station in Waltham, Massachusetts, records city council meetings on video. Channel 781 News (Channel 781), a group of volunteers who report on the city council, curates clips from those recordings for its YouTube channel, along with original programming, to spark debate on issues like housing and transportation. WCAC sent a series of takedown notices under the Digital Millennium Copyright Act (DMCA), accusing Channel 781 of copyright infringement. That led to YouTube deactivating Channel 781’s channel just days before a critical municipal election. Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its takedown notices under an important but underutilized provision of the DMCA.
The DMCA gives copyright holders a powerful tool to take down other people’s content from platforms like YouTube. The “notice and takedown” process requires only an email, or filling out a web form, in order to accuse another user of copyright infringement and have their content taken down. And multiple notices typically lead to the target’s account being suspended, because doing so helps the platform avoid liability. There’s no court or referee involved, so anyone can bring an accusation and get a nearly instantaneous takedown.
Of course, that power invites abuse. Because filing a DMCA infringement notice is so easy, there’s a temptation to use it at the drop of a hat to take down speech that someone doesn’t like. To prevent that, before sending a takedown notice, a copyright holder has to consider whether the use they’re complaining about is a fair use. Specifically, the copyright holder needs to form a “good faith belief” that the use is not “authorized by the law,” such as through fair use.
WCAC didn’t do that. They didn’t like Channel 781 posting short clips from city council meetings recorded by WCAC as a way of educating Waltham voters about their elected officials. So WCAC fired off DMCA takedown notices at many of Channel 781’s clips that were posted on YouTube.
WCAC claims they considered fair use, because a staff member watched a video about it and discussed it internally. But WCAC ignored three of the four fair use factors. WCAC ignored that their videos had no creativity, being nothing more than records of public meetings. They ignored that the clips were short, generally including one or two officials' comments on a single issue. They ignored that the clips caused WCAC no monetary or other harm, beyond wounded pride. And they ignored facts they already knew, facts central to the remaining fair use factor: by excerpting and posting the clips with new titles, Channel 781 was putting its own "spin" on the material, in other words, transforming it. All of these facts support fair use.
Instead, WCAC focused only on the fact that the clips they targeted were not altered further or put into a larger program. Looking at just that one aspect of fair use isn’t enough, and changing the fair use inquiry to reach the result they wanted is hardly the way to reach a “good faith belief.”
That’s why we’re asking the court to rule that WCAC’s conduct violated the law and that they should pay damages. Copyright holders need to use the powerful DMCA takedown process with care, and when they don’t, there must be consequences.
Stand Together to Protect Democracy
What a year it’s been. We’ve seen technology unfortunately misused to supercharge the threats facing democracy: dystopian surveillance, attacks on encryption, and government censorship. These aren’t abstract dangers. They’re happening now, to real people, in real time.
EFF’s lawyers, technologists, and activists are pushing back. But we need you in this fight.
MAKE A YEAR END DONATION—HELP EFF UNLOCK CHALLENGE GRANTS!
If you donate to EFF before the end of 2025, you’ll help fuel the legal battles that defend encryption, the tools that protect privacy, and the advocacy that stops dangerous laws—and you’ll help unlock up to $26,200 in challenge grants.
📣 Stand Together: That's How We Win 📣
The past year confirmed how urgently we need technologies that protect us, not surveil us. EFF has been in the fight every step of the way, thanks to support from people like you.
Get free gear when you join EFF!
This year alone EFF:
- Launched a resource hub to help users understand and fight back against age verification laws.
- Challenged San Jose's unconstitutional license plate reader database in court.
- Sued to demand answers when ICE-spotting apps were mysteriously taken offline.
- Launched Rayhunter to detect cell site simulators.
- Pushed back hard against the EU's Chat Proposal that would break encryption for millions.
After 35 years of defending digital freedoms, we know what's at stake: we must protect your ability to speak freely, organize safely, and use technology without surveillance.
We have opportunities to win these fights, and you make each victory possible. Donate to EFF by December 31 and help us unlock additional grants this year!
Already an EFF Member? Help Us Spread the Word!
EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:
We need to stand together and ensure technology works for us, not against us. Donate any amount to EFF by Dec 31, and you'll help unlock challenge grants! https://eff.org/yec
Bluesky | Facebook | LinkedIn | Mastodon
(more at eff.org/social)
_________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.
Local Communities Are Winning Against ALPR Surveillance—Here’s How: 2025 in Review
Across ideologically diverse communities, 2025 campaigns against automated license plate reader (ALPR) surveillance kept winning. From Austin, Texas to Cambridge, Massachusetts to Eugene, Oregon, successful campaigns combined three practical elements: a motivated political champion on city council, organized grassroots pressure from affected communities, and technical assistance at critical decision moments.
The 2025 Formula for Refusal
- Institutional Authority: Council members leveraging "procurement power"—local democracy's most underutilized tool—to say no.
- Community Mobilization: A base that refuses to debate "better policy" and demands "no cameras."
- Shared Intelligence: Local coalitions utilizing shared research on contract timelines and vendor breaches.
In 2025, organizers embraced the "ugly" win: prioritizing immediate contract cancellations over the "political purity" of perfect privacy laws. Procurement fights are often messy, bureaucratic battles rather than high-minded legislative debates, but they stop surveillance where it starts—at the checkbook. In Austin, more than 30 community groups built a coalition that forced a contract cancellation, achieving via purchasing power what policy reform often delays.
In Hays County, Texas, the victory wasn't about a new law, but a contract termination. Commissioner Michelle Cohen grounded her vote in vendor accountability, explaining: "It's more about the company's practices versus the technology." These victories might lack the permanence of a statute, but every camera turned off built a culture of refusal that made the next rejection easier. This was the organizing principle: take the practical win and build on it.
Start with the Harm
Winning campaigns didn't debate technical specifications or abstract privacy principles. They started with documented harms that surveillance enabled. EFF's research showing police used Flock's network to track Romani people with discriminatory search terms, surveil women seeking abortion care, and monitor protesters exercising First Amendment rights became the evidence organizers used to build power.
In Olympia, Washington, nearly 200 community members attended a counter-information rally outside city hall on Dec. 2. The DeFlock Olympia movement countered police department claims point-by-point with detailed citations about data breaches and discriminatory policing. By Dec. 3, cameras had been covered pending removal.
In Cambridge, the city council voted unanimously in October to pause Flock cameras after residents, the ACLU of Massachusetts, and Digital Fourth raised concerns. When Flock later installed two cameras "without the city's awareness," a city spokesperson called it a "material breach of our trust" and terminated the contract entirely. The unexpected camera installation itself became an organizing moment.
The Inside-Outside Game
The winning formula worked because it aligned different actors around refusing vehicular mass surveillance systems without requiring everyone to become experts. Community members organized neighbors and testified at hearings, creating political conditions where elected officials could refuse surveillance and survive politically. Council champions used their institutional authority to exercise "procurement power": the ability to categorically refuse surveillance technology.
To fuel these fights, organizers leveraged technical assets like investigation guides and contract timeline analysis. This technical capacity allowed community members to lead effectively without needing to become policy experts. In Eugene and Springfield, Oregon, Eyes Off Eugene organized sustained opposition over months while providing city council members political cover to refuse. "This is [a] very wonderful and exciting victory," organizer Kamryn Stringfield said. "This only happened due to the organized campaign led by Eyes Off Eugene and other local groups."
Refusal Crosses Political Divides
A common misconception collapsed in 2025: that surveillance technology can only be resisted in progressive jurisdictions. San Marcos, Texas let its contract lapse after a 3-3 deadlock, with Council Member Amanda Rodriguez questioning whether the system showed "return on investment." Hays County commissioners in Texas voted to terminate. Small towns like Gig Harbor, Washington rejected proposals before deployment.
As community partners like the Rural Privacy Coalition emphasize, "privacy is a rural value." These victories came from communities with different political cultures but shared recognition that mass surveillance systems weren't worth the cost or risk regardless of zip code.
Communities Learning From Each Other
In 2025, communities no longer needed to build expertise from scratch—they could access shared investigation guides, learn from victories in neighboring jurisdictions, and connect with organizers who had won similar fights. When Austin canceled its contract, it inspired organizing across Texas. When the Illinois Secretary of State's audit revealed illegal data sharing with federal immigration enforcement, Evanston used those findings to terminate 19 cameras.
The combination of different forms of power—institutional authority, community mobilization, and shared intelligence—was a defining feature of this year's most effective campaigns. By bringing these elements together, community coalitions have secured cancellations or rejections in nearly two dozen jurisdictions since February, building the infrastructure to make the next refusal easier and the movement unstoppable.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Fighting to Keep Bad Patents in Check: 2025 in Review
A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges.
Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents.
EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.
Congress Weighed Bills That Would Undo Core Safeguards
The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.
Together, these bills would have dismantled much of the progress made over the last decade.
We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.”
EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it.
Patent Office Shifts To An “Era of No”
The push in Congress was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within by new USPTO leadership, which is working to dismantle systems and safeguards that protect the public from the worst patents.
Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.
2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office.
The October Rulemaking: Making Bad Patents Untouchable
The USPTO’s proposed rule changes would:
- Force defendants to surrender their court defenses if they use IPR—a heavy burden for anyone actually facing a lawsuit.
- Make patents effectively unchallengeable after a single prior dispute, even if that challenge was limited, incomplete, or years out of date.
- Block IPR entirely if a district court case is projected to move faster than the Patent Trial and Appeal Board (PTAB).
These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back.
We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.
In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.
The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.
GitHub emphasized the increased risk and litigation cost for open-source communities.
Twenty-two patent law professors called the proposal unlawful and harmful to innovation.
Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.
Heading Into 2026
The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.
But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
The Breachies 2025: The Worst, Weirdest, Most Impactful Data Breaches of the Year
Another year has come and gone, and with it, thousands of data breaches that affect millions of people. The question these days is less “Is my information in a data breach this year?” and more “How many data breaches had my information in them this year?”
Some data breaches are more noteworthy than others. Where one might affect a small number of people and include little useful information, like a name or email address, others might include data ranging from a potential medical diagnosis to specific location information. To catalog and talk about these incidents, we created the Breachies: a series of tongue-in-cheek awards highlighting the year’s most egregious breaches.
In most cases, if these companies practiced a privacy-first approach and focused on data minimization, collecting and storing only what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can and store it for as long as possible, and inevitably someone decides to poke in and steal that data. Once stolen, that personal data can be used against breach victims for identity theft, ransomware attacks, and unwanted spam. Breaches have become such a common occurrence that it’s easy to lose track of which ones affect you and just assume your information is out there somewhere. Still, a few steps can help protect your information.
With that, let’s get to the awards.
The Winners
- The Say Something Without Saying Anything Award: Mixpanel
- The We Still Told You So Award: Discord
- The Tea for Two Award: Tea Dating Advice and TeaOnHer
- The Just Stop Using Tracking Tech Award: Blue Shield of California
- The Hacker's Hall Pass Award: PowerSchool
- The Worst. Customer. Service. Ever. Award: TransUnion
- The Annual Microsoft Screwed Up Again Award: Microsoft
- The Silver Globe Award: Flat Earth Sun, Moon & Zodiac
- The I Didn’t Even Know You Had My Information Award: Gravy Analytics
- The Keeping Up With My Cybertruck Award: TeslaMate
- The Disorder in the Courts Award: PACER
- The Only Stalkers Allowed Award: Catwatchful
- The Why We’re Still Stuck on Unique Passwords Award: Plex
- The Uh, Yes, Actually, I Have Been Pwned Award: Troy Hunt’s Mailing List
- (Dis)honorable Mentions
The Say Something Without Saying Anything Award: Mixpanel
We’ve long warned that apps delivering your personal information to third parties, even if those parties aren’t the ad networks directly driving surveillance capitalism, present risks and a salient target for hackers. The more widespread your data, the more places attackers can go to find it. Mixpanel, a data analytics company that collects information on users of any app incorporating its SDK, suffered a major breach in November of this year. The service has been used by a wide array of companies, including the Ring Doorbell App, which we reported back in 2020 was delivering a trove of information to Mixpanel, and PornHub, which, despite not having worked with the company since 2021, had its historical record of paying subscribers breached.
There’s a lot we still don’t know about this data breach, in large part because the announcement about it is so opaque, leaving reporters with unanswered questions about how many people were affected, whether the hackers demanded a ransom, and whether Mixpanel employee accounts followed standard security best practices. One thing is clear, though: the breach was enough for OpenAI to drop Mixpanel as a provider, disclosing critical details in a blog post that Mixpanel’s own announcement conveniently omitted.
The worst part is that, as a data analytics company providing libraries which are included in a broad range of apps, we can surmise that the vast majority of people affected by this breach have no direct relationship with Mixpanel, and likely didn’t even know that their devices were delivering data to the company. These people deserve better than vague statements by companies which profit off of (and apparently insufficiently secure) their data.
The We Still Told You So Award: Discord
Last year, AU10TIX won our first We Told You So Award because, as we predicted in 2023, age verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits. Like clockwork, they did. We knew that first We Told You So Breachies award wouldn’t be the last.
Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy.
Nonetheless, this year’s winner of The We Still Told You So Breachies Award is the messaging app Discord. Once known mainly for gaming communities, it now hosts more than 200 million monthly active users and is widely used to host fandom and community channels.
In September of this year, much of Discord’s age verification data was breached — including users’ real names, selfies, ID documents, email and physical addresses, phone numbers, IP addresses, and other contact details or messages provided to customer support. In some cases, “limited billing information” was also accessed—including payment type, the last four digits of credit card numbers, and purchase histories.
Technically, though, it wasn’t Discord itself that was hacked: its third-party customer support provider, a company called Zendesk, was compromised, allowing attackers to access Discord’s user data. Either way, it’s Discord users who felt the impact.
The Tea for Two Award: Tea Dating Advice and TeaOnHer
Speaking of age verification, Tea, the dating safety app for women, had a pretty horrible year for data breaches. The app allows users to anonymously share reviews and safety information about their dates with men, helping keep others safe by noting red flags they saw during their dates.
Since Tea is aimed at women’s safety and dating advice, the app asks new users to upload a selfie or photo ID to verify their identity and gender to create an account. That’s some pretty sensitive information that the app is asking you to trust it with! Back in July, it was reported that 72,000 images had been leaked from the app, including 13,000 images of photo IDs and 59,000 selfies. These photos were found via an exposed database hosted on Google’s mobile app development platform, Firebase. And if that isn’t bad enough, just a week later a second breach exposed private messages between users, including messages with phone numbers, abortion planning, and discussions about cheating partners. This breach included more than 1.1 million messages from early 2023 all the way to mid-2025, just before the breach was reported. Tea released a statement shortly after, temporarily disabling the chat feature.
But wait, there’s more. A completely different app based on the same idea, but for men, also suffered a data breach. TeaOnHer failed to protect similar sensitive data. In August, TechCrunch discovered that user information — including emails, usernames, and yes, those photo IDs and selfies — was accessible through a publicly available web address. Even worse? TechCrunch also found the email address and password the app’s creator uses to access the admin page.
Breaches like this are one of the reasons that EFF shouts from the rooftops against laws that mandate user verification with an ID or selfie. Every company that collects this information becomes a target for data breaches — and if a breach happens, you can’t just change your face.
The Just Stop Using Tracking Tech Award: Blue Shield of California
Another year, another data breach caused by online tracking tools.
In April, Blue Shield of California revealed that it had shared 4.7 million people’s health data with Google by misconfiguring Google Analytics on its website. The data, which may have been used for targeted advertising, included: people’s names, insurance plan details, medical service providers, and patient financial responsibility. The health insurance company shared this information with Google for nearly three years before realizing its mistake.
If this data breach sounds familiar, it’s because it is: last year’s Just Stop Using Tracking Tech award also went to a healthcare company that leaked patient data through tracking code on its website. Tracking tools remain alarmingly common on healthcare websites, even after years of incidents like this one. These tools are marketed as harmless analytics or marketing solutions, but can expose people’s sensitive data to advertisers and data brokers.
EFF’s free Privacy Badger extension can block online trackers, but you shouldn’t need an extension to stop companies from harvesting and monetizing your medical data. We need a strong, federal privacy law and ban on online behavioral advertising to eliminate the incentives driving companies to keep surveilling us online.
The Hacker's Hall Pass Award: PowerSchool
In December 2024, PowerSchool, the largest provider of student information systems in the U.S., let hackers walk off with sensitive student data. The breach compromised the personal information of over 60 million students and teachers, including Social Security numbers, medical records, grades, and special education data. Hackers exploited PowerSchool’s weak security—namely, stolen credentials to its internal customer support portal—and gained unfettered access to sensitive data stored by school districts across the country.
PowerSchool failed to implement basic security measures like multi-factor authentication, and the breach affected districts nationwide. In Texas alone, over 880,000 individuals’ data was exposed, prompting the state's attorney general to file a lawsuit, accusing PowerSchool of misleading its customers about security practices. Memphis-Shelby County Schools also filed suit, seeking damages for the breach and the cost of recovery.
While PowerSchool paid hackers an undisclosed sum to prevent data from being published, the company’s failure to protect its users’ data raises serious concerns about the security of K-12 educational systems. Adding to the saga, a Massachusetts student, Matthew Lane, pleaded guilty in October to hacking and extorting PowerSchool for $2.85 million in Bitcoin. Lane faces up to 17 years in prison for cyber extortion and aggravated identity theft, a reminder that not all hackers are faceless shadowy figures — sometimes they’re just a college kid.
The Worst. Customer. Service. Ever. Award: TransUnion
Credit reporting giant TransUnion had to notify its customers this year that a hack nabbed the personal information of 4.4 million people. How'd the attackers get in? According to a letter filed with the Maine Attorney General's office and obtained by TechCrunch, the problem was a “third-party application serving our U.S. consumer support operations.” That's probably not the kind of support they were looking for.
TransUnion said in a Texas filing that attackers swept up “customers’ names, dates of birth, and Social Security numbers” in the breach, though it was quick to point out in public statements that the hackers did not access credit reports or “core credit data.” While it certainly could have been worse, this breach highlights the many ways that hackers can get their hands on information. Coming in through third-parties, companies that provide software or other services to businesses, is like using an unguarded side door, rather than checking in at the front desk. Companies, particularly those who keep sensitive personal information, should be sure to lock down customer information at all the entry points. After all, their decisions about who they do business with ultimately carry consequences for all of their customers — who have no say in the matter.
The Annual Microsoft Screwed Up Again Award: Microsoft
Microsoft is a company nobody feels neutral about, especially in the infosec world. The myriad software vulnerabilities in Windows, Office, and other Microsoft products over the decades have been a source of frustration, and also of great financial rewards, for both attackers and defenders. Yet still, as the saying goes: “nobody ever got fired for buying from Microsoft.” But perhaps the times, they are a-changing.
In July 2025, it was revealed that a zero-day security vulnerability in Microsoft’s flagship file sharing and collaboration software, SharePoint, had led to the compromise of over 400 organizations, including major corporations and sensitive government agencies such as the National Nuclear Security Administration (NNSA), the federal agency responsible for maintaining and developing the U.S. stockpile of nuclear weapons. The attack was attributed to three different Chinese government-linked hacking groups. Amazingly, days after the vulnerability was first reported, there were still thousands of vulnerable self-hosted SharePoint servers online.
Zero-days happen to tech companies, large and small. It’s nearly impossible to write even moderately complex software that is bug- and exploit-free, and Microsoft can’t exactly be blamed for having a zero-day in its code. But when one company is the source of so many zero-days, so consistently, for so many years, one must start wondering whether to put all their eggs (or data) into a basket that company made. Perhaps if Microsoft’s monopolistic practices had been reined in back in the 1990s, we wouldn’t be in a position today where SharePoint is the de facto file sharing software for so many major organizations. And maybe, just maybe, this is further evidence that tech monopolies and centralization of data aren’t just bad for consumer rights, civil liberties, and the economy—but also for cybersecurity.
The Silver Globe Award: Flat Earth Sun, Moon & Zodiac
Look, we’ll keep this one short: in October of last year, researchers found security issues in the flat earther app Flat Earth Sun, Moon & Zodiac. In March of 2025, that breach was confirmed. What’s most notable, aside from the breach including a surprising amount of information about gender, name, email addresses, and date of birth, is that it also included users’ location info, including latitude and longitude. Huh, interesting.
The I Didn’t Even Know You Had My Information Award: Gravy Analytics
In January, hackers claimed they stole millions of people’s location history from a company that never should’ve had it in the first place: location data broker Gravy Analytics. The data included timestamped location coordinates tied to advertising IDs, which can reveal exceptionally sensitive information. In fact, researchers who reviewed the leaked data found it could be used to identify military personnel and gay people in countries where homosexuality is illegal.
The breach of this sensitive data is bad, but Gravy Analytics’s business model of regularly harvesting and selling it is even worse. Despite the fact that most people have never heard of them, Gravy Analytics has managed to collect location information from a billion phones a day. The company has sold this data to other data brokers, makers of police surveillance tools, and the U.S. government.
How did Gravy Analytics get this location information from people’s phones? The data broker industry is notoriously opaque, but this breach may have revealed some of Gravy Analytics’ sources. The leaked data referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers and religious-focused apps. Many of these app developers said they had no relationship with Gravy Analytics. Instead, expert analysis of the data suggests it was harvested through the advertising ecosystem already connected to most apps. This breach provides further evidence that online behavioral advertising fuels the surveillance industry.
Whether or not they get hacked, location data brokers like Gravy Analytics threaten our privacy and security. Follow EFF’s guide to protecting your location data and help us fight for legislation to dismantle the data broker industry.
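To make concrete why timestamped coordinates tied to an advertising ID are so dangerous, here is a minimal sketch of the kind of inference anyone holding such data can make. The ad IDs, coordinates, and the `likely_home` helper below are entirely made up for illustration; they are not from the actual leak:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical leaked records: (advertising_id, timestamp, latitude, longitude).
records = [
    ("ad-123", datetime(2025, 1, 10, 2, 14, tzinfo=timezone.utc), 38.9021, -77.0412),
    ("ad-123", datetime(2025, 1, 11, 3, 40, tzinfo=timezone.utc), 38.9018, -77.0388),
    ("ad-123", datetime(2025, 1, 10, 13, 5, tzinfo=timezone.utc), 38.8899, -77.0091),
    ("ad-456", datetime(2025, 1, 10, 2, 30, tzinfo=timezone.utc), 40.7128, -74.0060),
]

def likely_home(records, ad_id, night_hours=range(0, 6)):
    """Guess a device owner's home: the spot they ping from most at night."""
    spots = Counter(
        (round(lat, 2), round(lon, 2))  # snap nearby points to a coarse grid
        for rid, ts, lat, lon in records
        if rid == ad_id and ts.hour in night_hours
    )
    return spots.most_common(1)[0][0] if spots else None

print(likely_home(records, "ad-123"))  # the two 2-3am pings cluster together
```

A handful of overnight pings is all it takes, which is why "anonymous" advertising IDs attached to location trails are not anonymous at all.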
The Keeping Up With My Cybertruck Award: TeslaMate
TeslaMate, a tool meant to track Tesla vehicle data (but which is not owned or operated by Tesla itself), has become a cautionary tale about data security. In August, a security researcher found more than 1,300 self-hosted TeslaMate dashboards were exposed online, leaking sensitive information such as vehicle location, speed, charging habits, and even trip details. In essence, your Cybertruck became the star of its own Keeping Up With My Cybertruck reality show, except the audience wasn’t made up of fans interested in your lifestyle, just random people with access to the internet.
TeslaMate describes itself as “that loyal friend who never forgets anything!” — but its lack of proper security measures makes you wish it would. This breach highlights how easily location data can become a tool for harassment or worse, and the growing need for legislation that specifically protects consumer location data. Without stronger regulations around data privacy, sensitive location details like where you live, work, and travel can easily be accessed by malicious actors, leaving consumers with no recourse.
The Disorder in the Courts Award: PACER
Confidentiality is a core principle in the practice of law. But this year a breach of confidentiality came from an unexpected source: a breach of the federal court filing system. In August, Politico reported that hackers infiltrated the Case Management/Electronic Case Files (CM/ECF) system, which uses the same database as PACER, a searchable public database for court records. Of particular concern? The possibility that the attack exposed the names of confidential informants involved in federal cases from multiple court districts. Courts across the country acted quickly to set up new processes to avoid the possibility of further compromises.
The leak followed a similar incident in 2021 and came on the heels of a warning to Congress that the file system is more than a little creaky. In fact, an IT official from the federal court system told the House Judiciary Committee that both systems are “unsustainable due to cyber risks, and require replacement.”
The Only Stalkers Allowed Award: Catwatchful
Just like last year, a stalkerware company was subject to a data breach that really should prove once and for all that these companies must be stopped. In this case, Catwatchful is an Android spyware company that sells itself as a “child monitoring app.” Like other products in this category, it’s designed to operate covertly while uploading the contents of a victim’s phone, including photos, messages, and location information.
This data breach was particularly harmful, as it included not just the email addresses and passwords of the customers who purchased the app to install on a victim’s phone, but also data from 26,000 victims’ devices, which could include the victims’ photos, messages, and real-time location data.
This was a tough award to decide on because Catwatchful wasn’t the only stalkerware company that was hit this year. Similar breaches to SpyX, Cocospy, and Spyic were all strong contenders. EFF has worked tirelessly to raise the alarm on this sort of software, and this year worked with AV Comparatives to test the stalkerware detection rate on Android of various major antivirus apps.
The Why We’re Still Stuck on Unique Passwords Award: Plex
Every year, we all get a reminder about why using unique passwords for all our accounts is crucial for protecting our online identities. This time around, the award goes to Plex, which experienced a data breach that included customer emails, usernames, and hashed passwords (a fancy way of saying the passwords were scrambled through a one-way algorithm, though weak passwords can still be cracked by guessing).
If this all sounds vaguely familiar to you for some reason, that’s because a similar issue also happened to Plex in 2022, affecting 15 million users. Whoops.
This is why it is important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing, where attackers take passwords leaked from one breach and try them on other sites, illustrates why it’s important to use two-factor authentication. Here’s how to turn that on for your Plex account.
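As an illustration of what “hashed” means here, consider the following generic sketch (this is not Plex’s actual scheme, which hasn’t been disclosed in detail): a fast, unsalted hash can be reversed with a simple wordlist, while a salted, deliberately slow function makes every guess expensive.

```python
import hashlib, os

# Fast, unsalted hashing: the same password always yields the same digest,
# so a precomputed wordlist cracks common passwords instantly.
leaked_digest = hashlib.sha256(b"hunter2").hexdigest()
wordlist = [b"letmein", b"hunter2", b"password1"]
cracked = next((w for w in wordlist
                if hashlib.sha256(w).hexdigest() == leaked_digest), None)
print(cracked)  # the weak password falls immediately

# Salted, slow key derivation (PBKDF2 here): each guess must be re-derived
# per user and per salt, making bulk cracking far more expensive.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

def verify(guess: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", guess, salt, 100_000) == stored
```

Even the slow variant only buys time against weak passwords, which is why a password exposed in one breach should be treated as burned everywhere it was reused.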
The Uh, Yes, Actually, I Have Been Pwned Award: Troy Hunt’s Mailing List
Troy Hunt, the person behind Have I Been Pwned? and someone with more experience with data breaches than just about anyone, also proved that anyone can be pwned. In a blog post, he details what happened to his mailing list:
You know when you're really jet lagged and really tired and the cogs in your head are just moving that little bit too slow? That's me right now, and the penny has just dropped that a Mailchimp phish has grabbed my credentials, logged into my account and exported the mailing list for this blog.
And he continues later:
I'm enormously frustrated with myself for having fallen for this, and I apologise to anyone on that list. Obviously, watch out for spam or further phishes and check back here or via the social channels in the nav bar above for more.
The whole blog post is worth a read as a reminder that phishing can get anyone, and we thank Troy Hunt for his feedback on this and other breaches included this year.
Tips to Protect Yourself
Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.
There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):
- Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
- Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
- Delete old accounts: Sometimes, you’ll get a data breach notification for an account you haven’t used in years. This can be a nice reminder to delete that account, but it’s better to do so before a data breach happens, when possible. Try to make it a habit to go through and delete old accounts once a year or so.
- Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too; it might sound absurd considering they can’t even open bank accounts, but that’s exactly why fraud in their names can go unnoticed for years.
- Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money.
(Dis)honorable Mentions
According to one report, 2025 had already seen 2,563 data breaches by October, which puts the year on track to be one of the worst by sheer number of breaches.
We did not investigate every one of these 2,500-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachies Award to every company that was breached this year. Still, here are some (dis)honorable mentions we wanted to highlight:
Salesforce, F5, Oracle, WorkComposer, Raw, Stiizy, Ohio Medical Alliance LLC, Hello Cake, Lovense, Kettering Health, LexisNexis, WhatsApp, Nexar, McDonalds, Congressional Budget Office, Doordash, Louis Vuitton, Adidas, Columbia University, Hertz, HCRG Care Group, Lexipol, Color Dating, Workday, Aflac, and Coinbase. And a special nod to last minute entrants Home Depot, 700Credit, and Petco.
What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.
🪪 Age Verification Is Coming for the Internet | EFFector 37.18
The final EFFector of 2025 is here! Just in time to keep you up-to-date on the latest happenings in the fight for privacy and free speech online.
In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.
Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.18 - 🪪 AGE VERIFICATION IS COMING FOR THE INTERNET
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
States Take On Tough Tech Policy Battles: 2025 in Review
State legislatures—from Olympia, WA, to Honolulu, HI, to Tallahassee, FL, and everywhere in between—kept EFF’s state legislative team busy throughout 2025.
We saw some great wins and steps forward this year. Washington became the eighth state to enshrine the right to repair. Several states stepped up to protect the privacy of location data, with bills recognizing your location data isn't just a pin on a map—it's a powerful tool that reveals far more than most people realize. Other state legislators moved to protect health privacy. And California passed a law making it easier for people to exercise their privacy rights under the state’s consumer data privacy law.
Several states also took up debates around how to legislate and regulate artificial intelligence and its many applications. We’ll continue to work with allies in states including California and Colorado on proposals that address the real harms from some uses of AI, without infringing on the rights of creators and individual users.
We’ve also fought some troubling bills in states across the country this year. In April, Florida introduced a bill that would have created a backdoor for law enforcement to have easy access to messages if minors use encrypted platforms. Thankfully, the Florida legislature did not pass the bill this year. But it should set off serious alarm bells for anyone who cares about digital rights. And it was just one of a growing set of bills from states that, even when well-intentioned, threaten to take a wrecking ball to privacy, expression, and security in the name of protecting young people online.
Take, for example, the burgeoning number of age verification, age gating, age assurance, and age estimation bills. Instead of making the internet safer for children, these laws incentivize, or intersect with, systems that collect vast amounts of data, forcing all users—regardless of age—to verify their identity just to access basic content or products. South Dakota and Wyoming, for example, are requiring any website that hosts any sexual content to implement age verification measures. But, given the way those laws are written, that definition could sweep in essentially any site that allows user-generated or published content without age-gating access. That could include everyday resources such as social media networks, online retailers, and streaming platforms.
Lawmakers, not satisfied with putting age gates on the internet, are also increasingly going after VPNs (virtual private networks) to prevent anyone from circumventing these new digital walls. VPNs are not foolproof tools—and they shouldn’t be necessary to access legally protected speech—but they should be available to people who want to use them. We will continue to stand against these types of bills, not just for the sake of free expression, but to protect the free flow of information essential to a free society.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Lawmakers Must Listen to Young People Before Regulating Their Internet Access: 2025 in Review
State and federal lawmakers have introduced multiple proposals in 2025 to curtail or outright block children and teenagers from accessing legal content on the internet. These lawmakers argue that internet and social media platforms have an obligation to censor or suppress speech that they consider “harmful” to young people. Unfortunately, in many of these legislative debates, lawmakers are not listening to kids, whose experiences online are overwhelmingly more positive than what lawmakers claim.
Fortunately, EFF has spent the past year trying to make sure that lawmakers hear young people’s voices. We have also been reminding lawmakers that minors, like everyone else, have First Amendment rights to express themselves online.
These rights extend to a young person’s ability to use social media both to speak for themselves and to access the speech of others online. Young people also have the right to control how they access this speech, including through personalized feeds and other curated, digestible formats. Preventing teenagers from accessing the same internet and social media channels that adults use is a clear violation of their right to free expression.
On top of violating minors’ First Amendment rights, these laws also actively harm minors who rely on the internet to find community, find resources to end abuse, or access information about their health. Cutting off internet access acutely harms LGBTQ+ youth and others who lack familial or community support where they live. These laws also empower the state to decide what information is acceptable for all young people, overriding parents’ choices.
Additionally, all of the laws that would attempt to create a “kid friendly” internet and an “adults-only” internet are a threat to everyone, adults included. These mandates encourage the adoption of invasive and dangerous age-verification technology. Beyond being creepy, these systems incentivize more data collection and increase the risk of data breaches and other harms. Requiring everyone online to provide their ID or other proof of their age could block legal adults from accessing lawful speech if they don’t have the right form of ID. Furthermore, this trend infringes on people’s right to be anonymous online, and creates a chilling effect that may deter people from joining certain services or speaking on certain topics.
EFF has lobbied against these bills at both the state and federal levels, and we have also filed briefs in support of several lawsuits to protect the First Amendment rights of minors. We will continue to advocate for the rights of everyone online – including minors – in the future.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Trends to Watch in the California Legislature
If you’re a Californian, there are a few new state laws that you should know will be going into effect in the new year. EFF has worked hard in Sacramento this session to advance bills that protect privacy, fight surveillance, and promote transparency.
California’s legislature runs in a two-year cycle, meaning that it’s currently halftime for legislators. As we prepare for the next year of the California legislative session in January, it’s a good time to showcase what’s happened so far—and what’s left to do.
Wins Worth Celebrating
In a win for every Californian’s privacy rights, we were happy to support A.B. 566 (Assemblymember Josh Lowenthal). This is a common-sense law that makes California’s main consumer data privacy law, the California Consumer Privacy Act, more user-friendly. It requires that browsers support people’s rights to send opt-out signals, such as the global opt-out in Privacy Badger, to businesses. Managing your privacy as an individual can be a hard job, and EFF wants stronger laws that make it easier for you to do so.
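Opt-out signals like Privacy Badger’s are sent under the Global Privacy Control specification, which attaches a `Sec-GPC: 1` header to each browser request. As a minimal sketch of how the signal travels from browser to business (the function name and sample headers here are our own illustration, not part of the law or any particular site’s code), a site could check for it server-side like this:

```python
def has_gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries the Global Privacy Control
    opt-out signal, transmitted as the header "Sec-GPC: 1"."""
    return headers.get("Sec-GPC") == "1"

# Example request headers from a browser with GPC enabled
# (e.g., via an extension like Privacy Badger).
request_headers = {
    "User-Agent": "ExampleBrowser/1.0",
    "Sec-GPC": "1",
}

if has_gpc_opt_out(request_headers):
    # Under the CCPA, a business receiving this signal must honor it
    # as a request to opt out of the sale or sharing of personal data.
    print("GPC opt-out received: do not sell or share this user's data")
```

The point of laws like A.B. 566 is that the browser sends this signal automatically, so users don’t have to hunt down an opt-out form on every site they visit.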
Additionally, we were proud to advance government transparency by supporting A.B. 1524 (Judiciary Committee), which allows members of the public to make copies of public court records using their own devices, such as cell-phone cameras and overhead document scanners, without paying fees.
We also supported two bills that will improve law enforcement accountability at a time when we desperately need it. S.B. 627 (Senator Scott Wiener) prohibits law enforcement officers from wearing masks to avoid accountability (the Trump administration has sued California over this law). We also supported S.B. 524 (Sen. Jesse Arreguín), which requires law enforcement to disclose when a police report was written using artificial intelligence.
On the To-Do List for Next Year
On the flip side, we also stopped some problematic bills from becoming law. This includes S.B. 690 (Sen. Anna Caballero), which we dubbed the Corporate Coverup Act. This bill would have gutted California’s wiretapping statute by allowing businesses to ignore those privacy rights for “any business purpose.” Working with several coalition partners, we were able to keep that bill from moving forward in 2025. We do expect to see it come back in 2026, and are ready to fight back against those corporate business interests.
And, of course, not every fight ended in victory. There are still many areas where we have work left to do. California Governor Gavin Newsom vetoed a bill we supported, S.B. 7, which would have given workers in California greater transparency into how their employers use artificial intelligence and was sponsored by the California Federation of Labor Unions. S.B. 7 was vetoed in response to concerns from companies including Uber and Lyft, but we expect to continue working with the labor community on the ways AI affects the workplace in 2026.
Trends of Note
California continued a troubling years-long trend of lawmakers pushing problematic proposals that would require every internet user to verify their age to access information—often by relying on privacy-invasive methods to do so. Earlier this year EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online. We continue to raise these concerns, and would welcome working with any lawmaker in California on a better solution.
We also continue to keep a close eye on government data sharing. On this front, there is some good news. Several of the bills we supported this year sought to place needed safeguards on the ways various government agencies in California share data. These include: A.B. 82 (Asm. Chris Ward) and S.B. 497 (Wiener), which would add privacy protections to data collected by the state about those who may be receiving gender-affirming or reproductive health care; A.B. 1303 (Asm. Avelino Valencia), which prohibits warrantless data sharing from California’s low-income broadband program to immigration and other government officials; and S.B. 635 (Sen. Maria Elena Durazo), which places similar limits on data collected from sidewalk vendors.
We are also heartened to see California correct course on broad government data sharing. Last session, we opposed A.B. 518 (Asm. Buffy Wicks), which let state agencies ignore existing state privacy law to allow broader information sharing about people eligible for CalFresh—the state’s federally funded food assistance program. As we’ve seen, the federal government has since sought data from food assistance programs to use for other purposes. We were happy to have instead supported A.B. 593 this year, also authored by Asm. Wicks—which reversed course on that data sharing.
We hope to see this attention to the harms of careless government data sharing continue. EFF’s sponsored bill this year, A.B. 1337, would update and extend vital privacy safeguards present at the state agency level to counties and cities. These local entities today collect enormous amounts of data and administer programs that weren’t contemplated when the original law was written in 1977. That information should be held to strong privacy standards.
We’ve been fortunate to work with Asm. Chris Ward, who is also the chair of the LGBTQ Caucus in the legislature, on that bill. The bill stalled in the Senate Judiciary Committee during the 2025 legislative session, but we plan to bring it back in the next session with a renewed sense of urgency.
Age Verification Threats Across the Globe: 2025 in Review
Age verification mandates won't magically keep young people safer online, but that has not stopped governments around the world from spending this year implementing or attempting to introduce legislation requiring all online users to verify their ages before accessing the digital space.
The UK’s misguided approach to protecting young people online took many headlines due to the reckless and chaotic rollout of the country’s Online Safety Act, but they were not alone: courts in France ruled that porn websites can check users’ ages; the European Commission pushed forward with plans to test its age-verification app; and Australia’s ban on under-16s accessing social media was recently implemented.
Through this wave of age verification bills, politicians are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most.
In response, we’ve spent this year urging governments to pause these legislative initiatives and instead protect everyone’s right to speak and access information online. Here are three ways we pushed back against these bills in 2025:
Social Media Bans for Young People
Banning a certain user group changes nothing about a platform’s problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And assuming that young people will always find ways to circumvent age restrictions, the ones that do will be left without any protections or age-appropriate experiences.
Yet Australia’s government recently decided to ignore these dangers by rolling out a sweeping regime built around age verification that bans users under 16 from having social media accounts. In this world-first ban, platforms are required to introduce age assurance tools to block under-16s, demonstrate that they have taken “reasonable steps” to deactivate accounts used by under-16s, and prevent any new accounts being created or face fines of up to 49.5 million Australian dollars ($32 million USD). The 10 banned platforms—Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch and X—have each said they’ll comply with the legislation, leading to young people losing access to their accounts overnight.
Similarly, the European Commission this year took a first step towards mandatory age verification through its guidelines under Article 28 of the Digital Services Act, a step that could undermine privacy, expression, and participation rights for young people, rights that are fully enshrined in international human rights law. EFF submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Mandatory age verification measures are not the right way to protect minors, and any online safety measure for young people must also safeguard their privacy and security. Unfortunately, the EU Parliament already went a step further, proposing an EU digital minimum age of 16 for access to social media, a move that aligns with EU Commission president Ursula von der Leyen’s recent public support for measures inspired by Australia’s model.
Push for Age Assurance on All Users
This year, the UK had a moment—and not a good one. In late July, new rules took effect under the Online Safety Act that now require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.
The UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. As we argued throughout this year, and during the passage of the Online Safety Act, any attempt to protect young people online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. The approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the very young people that it is trying to protect.
We’re seeing these narratives and regulatory initiatives replicated from the UK to U.S. states and other global jurisdictions, and we’ll continue urging politicians not to follow the UK’s lead in passing similar legislation—and to instead explore more holistic approaches to protecting all users online.
Rushed Age Assurance through the EU Digital Wallet
There is not yet a legal obligation to verify users’ ages at the EU level, but policymakers and regulators are already embracing harmful age verification and age assessment measures in the name of reducing online harms.
These demands steer the debate toward identity-based solutions, such as the EU Digital Identity Wallet, which will become available in 2026. This has come with its own set of privacy and security concerns, such as long-term identifiers (which could enable tracking) and over-exposure of personal information. Even more concerning, instead of waiting for the full launch of the EU Digital Identity Wallet, the Commission rushed a “mini AV” app out this year ahead of schedule, citing an urgent need to address concerns about children and the harms that may come to them online.
However, this proposed solution ties national ID directly to age verification. It also carries a risk of mission creep: while the focus of the “mini AV” app is for now on verifying age, its release to the public means that the infrastructure to expand ID checks to other purposes in EU member states is already in place, should governments mandate that expansion in the future.
Without the proper safeguards, this infrastructure could be leveraged inappropriately—all the more reason why lawmakers should explore more holistic approaches to children's safety.
Ways Forward
The internet is an essential resource for young people and adults to access information, explore community, and find themselves. The issue of online safety is not solved through technology alone, and young people deserve a more intentional approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves.
Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians to look into what is best, and not what is easy; and in the meantime, we’ll continue fighting for the rights of all users on the internet in 2026.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Defending Access to Abortion Information Online: 2025 in Review
As reproductive rights face growing attacks globally, access to content about reproductive healthcare and abortion online has never been more critical. The internet has essential information on topics like where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Reproductive rights activists use the internet to organize and build community, and healthcare providers rely on it to distribute accurate information to people in need. And for those living in one of the 20+ states where abortion is banned or heavily restricted, the internet is often the only place to find these potentially life-saving resources.
Nonetheless, both the government and private platforms are increasingly censoring abortion-related speech, at a time when we need it most. Anti-abortion legislators are actively trying to pass laws to limit online speech about abortion, making it harder to share critical resources, discuss legal options, seek safe care, and advocate for reproductive rights. At the same time, social media platforms have increasingly cracked down on abortion-related content, leading to the suppression, shadow-banning, and outright removal of posts and accounts.
As defenders of free expression and access to information online, we have a role to play in understanding where and how this is happening, shining a light on practices that endanger these rights, and taking action to ensure they’re protected. This year, we worked tirelessly to fight censorship of abortion-related information online—whether it originated from the largest social media platforms or the largest state in the U.S.
Exposing Social Media Censorship
At the start of 2025, we launched the #StopCensoringAbortion campaign to collect and spotlight the growing number of stories from users that have had abortion-related content censored by social media platforms. Our goal was to better understand how and why this is happening, raise awareness, and hold the platforms accountable.
Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and influencers around the world, we confirmed what many already suspected: this speech is being removed and restricted by platforms at an alarming rate. Across the submissions we received, we saw a pattern of over-enforcement, lack of transparency, and arbitrary moderation decisions aimed at reproductive health and reproductive justice advocates.
Notably, almost none of the submissions we reviewed actually violated the platforms’ stated policies. The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But the content being removed wasn’t selling medications. Most of the censored posts simply provided factual, educational information—content that’s expressly allowed by Meta.
In a month-long 10-part series, we broke down our findings. We examined the trends we saw, including stories of individuals and organizations who needed to rely on internal connections at Meta to get wrongfully censored posts restored, examples of account suspensions without sufficient warnings, and an exploration of Meta policies and how they are wrongly applied. We provided practical tips for users to protect their posts from being removed, and we called on platforms to adopt steps to ensure transparency, a functional appeals process, more human review of posts, and consistent and fair enforcement of rules.
Social media platforms have a First Amendment right to curate the content on their sites—they can remove whatever content they want—and we recognize that. But companies like Meta claim they care about free speech, and their policies explicitly claim to allow educational information and discussions about abortion. We think they have a duty to live up to those promises. Our #StopCensoringAbortion campaign clearly shows that this isn’t happening and underscores the urgent need for platforms to review and consistently enforce their policies fairly and transparently.
Combating Legislative Attacks on Free Speech
On top of platform censorship, lawmakers are trying to police what people can say and see about abortion online. So in 2025, we also fought against censorship of abortion information on the legislative front.
EFF opposed Texas Senate Bill (S.B.) 2880, which would not only outlaw the sale and distribution of abortion pills, but also make it illegal to “provide information” on how to obtain an abortion-inducing drug. Simply having an online conversation about mifepristone or exchanging emails about it could run afoul of the law.
On top of going after online speakers who create and post content themselves, the bill also targeted social media platforms, websites, email services, messaging apps, and any other “interactive computer service” simply for hosting or making that content available. This was a clear attempt by Texas legislators to keep people from learning about abortion drugs, or even knowing that they exist, by wiping this information from the internet altogether.
We laid out the glaring free-speech issues with S.B. 2880 and explained how dire the consequences would be if it passed. And we asked everyone who cares about free speech to urge lawmakers to oppose this bill, and others like it. Fortunately, these concerns were heard, and the bill never became law.
Our team also spent much of the year fighting dangerous age verification legislation, often touted as “child safety” bills, at both the federal and state level. We raised the alarm on how age verification laws pose significant challenges for users trying to access critical content—including vital information about sexual and reproductive health. By age-gating the internet, these laws could result in websites requiring users to submit identification before accessing information about abortion or reproductive healthcare. This undermines the ability to remain private and anonymous while searching for abortion information online.
Protecting Life-Saving Information Online
Abortion information saves lives, and the internet is a primary (and sometimes only) source where people can access it.
As attacks on abortion information intensify, EFF will continue to fight so that users can post, host, and access abortion-related content without fear of being silenced. We’ll keep pushing for greater accountability from social media platforms and fighting against harmful legislation aimed at censoring these vital resources. The fight is far from over, but we will remain steadfast in ensuring that everyone, regardless of where they live, can access life-saving information and make informed decisions about their health and rights.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
EFF, Open Rights Group, Big Brother Watch, and Index on Censorship Call on UK Government to Repeal Online Safety Act
Since the Online Safety Act took effect in late July, UK internet users have made it very clear to their politicians that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act (OSA) hit over 400,000 signatures.
In the months since, more than 550,000 people have petitioned Parliament to repeal or reform the Online Safety Act, making it one of the largest public expressions of concern about a UK digital law in recent history. The OSA has galvanized swathes of the UK population, and it’s high time for politicians to take that seriously.
Last week, EFF joined Open Rights Group, Big Brother Watch, and Index on Censorship in sending a briefing to UK politicians urging them to listen to their constituents and repeal the Online Safety Act ahead of this week’s Parliamentary petition debate on 15 December.
The legislation is a threat to user privacy, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and effectively blocks millions of people without a personal device or form of ID from accessing the internet. The briefing highlights how, in the months since the OSA came into effect, we have seen the legislation:
- Make it harder for not-for-profits and community groups to run their own websites.
- Result in the wrong types of content being taken down.
- Lead to age-assurance being applied widely to all sorts of content.
Our briefing continues:
“Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.”
The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can coexist.
If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.
Read the briefing in full here.
EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate
The UK Parliament convened earlier this week to debate a petition signed by almost 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.
The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country.
But the case for digital identification has not been made.
As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:
- Mission creep
- Infringements on privacy rights
- Serious security risks
- Reliance on inaccurate and unproven technologies
- Discrimination and exclusion
- The deepening of entrenched power imbalances between the state and the public
Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.
Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.
Read our civil society briefing in full here.
From Speakeasies to DEF CON—Celebrating With EFF Members: 2025 Year In Review
It’s been a great year to be on EFF’s membership team. There's no better feeling than hanging out with your fellow digital freedom supporters and being able to say, “Oh yeah, and we’re suing the government!” We’ve done that a lot this year—and that’s all thanks to people like you.
As a token of appreciation for supporting EFF’s mission to protect privacy and free expression online for all people, we put a lot of care into meeting the members who make our work possible. Whether it’s hosting meetups, traveling to conferences, or finding new and fun ways to explain what we’re fighting for, connecting with you is always a highlight of the job.
EFF Speakeasy Meet Ups
One of my favorite perks we offer for EFF members is exclusive invites to Speakeasy meet ups. It’s a chance for us to meet the very passionate members who fuel our work!
This year, we hosted Speakeasies across the country while making the rounds at conferences. We met supporters in Mesa, AZ during CactusCon; Pasadena, CA during SCALE; Portland, OR during BSidesPDX; New York, NY during HOPE and BSidesNYC; and Seattle, WA during our panel at the University of Washington.
Of course, we also had to host a Speakeasy in our home court—and for the first time it took place in the South Bay Area in Mountain View, CA at Hacker Dojo! There, members of EFF’s D.C. Legislative team spoke about EFF’s legislative efforts and how they’ll shape digital rights for all. We even recorded that conversation for you to watch on YouTube or the Internet Archive.
And we can’t forget about our global community! Our annual online Speakeasy brought together members around the world for a conversation and Q&A with our friends at Women in Security and Privacy (WISP) about online behavioral tracking and the data broker industry. We heard and answered great questions about pushing back on online tracking and what legislative steps we can take to strengthen privacy.
Summer Security Conferences
Say what you will about Vegas—nothing compares to the energy of seeing thousands of EFF supporters during the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. This year over one thousand people signed up to support the digital freedom movement in just that one week.
If you’ve ever seen us at a conference, you know the drill: a table full of EFF staff frantically handing out swag, answering questions, and excitedly saying hi to everyone that stops by and supports our work. This year it was especially fun to see how many people brought their Rayhunter devices.
And of course, it wouldn’t be a trip to Vegas without EFF’s annual DEF CON Poker Tournament. This year 48 supporters and friends played for money, glory, and the future of the web—all with EFF’s very own playing cards. For the first time ever, the jellybean trophy went to the same winner two years in a row!
EFFecting Change Livestream Series
We ramped up our livestream series, EFFecting Change, this year with a total of six livestreams covering topics including the future of social media with guests from Mastodon, Bluesky, and Spill; EFF’s 35th Anniversary and what’s next in the fight for privacy and free speech online; and generative AI, including how to address the risks of the technology while protecting civil liberties and human rights online.
We’ve got more in store for EFFecting Change in 2026, so be sure to stay up-to-date by signing up for updates!
EFF Awards Ceremony
EFF is at the forefront of protecting users from dystopian surveillance and unjust censorship online. But we’re not the only ones doing this work, and we couldn’t do it without other organizations in the space. So, every year we like to award those who are courageously championing the digital rights movement.
This year we gave out three awards: the EFF Award for Defending Digital Freedoms went to Software Freedom Law Center, India, the EFF Award for Protecting Americans’ Data went to Erie Meyer, and the EFF Award for Leading Immigration and Surveillance Litigation went to Just Futures Law. You can watch the EFF Awards here and see photos from the event too!
That doesn’t even cover all of it! We also got to celebrate 35 years of EFF in July with limited-edition challenge coins and all-new member swag—plus a livestream covering EFF’s history and what’s next for us.
Grab EFF's 35th Anniversary t-shirt when you become a member today!
As the new year approaches, I always like to look back on the bright spots—especially the joy of hanging out with this incredible community. The world can feel hectic, but connecting with supporters like you is a reminder of how much good we can build when we work together.
Many thanks to all of the EFF members who joined forces with us this year. If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Thousands Tell the Patent Office: Don’t Hide Bad Patents From Review
A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.
EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. EFF supporters accounted for more than one-third of the 11,442 comments submitted. The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.
The Public Doesn’t Want To Bury Patent Challenges
These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment.
Comments opposing the rulemaking include many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy.
The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.
EFF’s Comment To USPTO
In our filing, we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.
Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward.
A Broad Coalition Supports IPR
A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.
Open Source and Developer Communities
The Linux Foundation submitted comments and warned that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users that rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work.
Patent Law Scholars
A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.”
Patient Advocates
Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.”
Small Businesses
Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court.
What Happens Next
The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.
Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters.
We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve the public’s ability to challenge bad patents.
Artificial Intelligence, Copyright, and the Fight for User Rights: 2025 in Review
A tidal wave of copyright lawsuits against AI developers threatens beneficial uses of AI, like creative expression, legal research, and scientific advancement. How courts decide these cases will profoundly shape the future of this technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly. As these cases finished their trials and moved to appeals courts in 2025, EFF intervened to defend fair use, promote competition, and protect everyone’s rights to build and benefit from this technology.
At the same time, rightsholders stepped up their efforts to control fair uses through everything from state AI laws to technical standards that influence how the web functions. In 2025, EFF fought policies that threaten the open web in the California State Legislature, the Internet Engineering Task Force, and beyond.
Fair Use Still Protects Learning—Even by Machines
Copyright lawsuits against AI developers often follow a similar pattern: plaintiffs argue that use of their works to train the models was infringement and then developers counter that their training is fair use. While legal theories vary, the core issue in many of these cases is whether using copyrighted works to train AI is a fair use.
We think that it is. Courts have long recognized that copying works for analysis, indexing, or search is a classic fair use. That principle doesn’t change because a statistical model is doing the reading. AI training is a legitimate, transformative fair use, not a substitute for the original works.
More importantly, expanding copyright would do more harm than good: while creators have legitimate concerns about AI, expanding copyright won’t protect jobs from automation. But overbroad licensing requirements risk entrenching Big Tech’s dominance, shutting out small developers, and undermining fair use protections for researchers and artists. Copyright is a tool that gives the most powerful companies even more control—not a check on Big Tech. And attacking the models and their outputs by attacking training—i.e. “learning” from existing works—is a dangerous move. It risks a core principle of freedom of expression: that training and learning—by anyone—should not be endangered by restrictive rightsholders.
In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies, but in 2025, things began to speed up.
Some cases have already reached courts of appeal. We advocated for fair use rights and sensible limits on copyright in amicus briefs filed in Doe v. GitHub, Thomson Reuters v. Ross Intelligence, and Bartz v. Anthropic, three early AI copyright appeals that could shape copyright law and influence dozens of other cases. We also filed an amicus brief in Kadrey v. Meta, one of the first decisions on the merits of the fair use defense in an AI copyright case.
How the courts decide the fair use questions in these cases could profoundly shape the future of AI—and whether legacy gatekeepers will have the power to control it. As these cases move forward, EFF will continue to defend your fair use rights.
Protecting the Open Web in the IETF
Rightsholders also tried to make an end-run around fair use by changing the technical standards that shape much of the internet. The IETF, an Internet standards body, has been developing standards that pose a major threat to the open web. These proposals would let websites express “preference signals” against certain uses of scraped data—effectively giving them veto power over fair uses like AI training and web search.
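In spirit, such a preference signal works like a robots.txt directive that a well-behaved crawler parses and honors before using scraped content. As a purely hypothetical sketch (the "Content-Usage" directive name and its syntax here are illustrative inventions, not the actual IETF proposal, which is still under discussion), a crawler-side check might look like this:

```python
def parse_preference_signals(robots_txt: str) -> dict:
    """Parse hypothetical 'Content-Usage' lines from a robots.txt-style file.

    Returns a mapping of usage category -> whether the site permits it.
    Directive syntax is invented for illustration only.
    """
    prefs = {}
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line.lower().startswith("content-usage:"):
            _, _, value = line.partition(":")
            for item in value.split(","):
                key, _, allowed = item.strip().partition("=")
                prefs[key.strip()] = allowed.strip().lower() == "y"
    return prefs

# A site declaring it allows search indexing but not AI training:
example = """
User-Agent: *
Content-Usage: ai-training=n, search-indexing=y
"""
prefs = parse_preference_signals(example)
```

The policy question is exactly what this sketch glosses over: whether honoring such signals stays voluntary etiquette, like robots.txt today, or becomes a de facto veto over otherwise lawful fair uses.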
Overly restrictive preference signaling threatens a wide range of important uses—from accessibility tools for people with disabilities to research efforts aimed at holding governments accountable. Worse, the IETF is dominated by publishers and tech companies seeking to embed their business models into the infrastructure of the internet. These companies aren’t looking out for the billions of internet users who rely on the open web.
That’s where EFF comes in. We advocated for users’ interests in the IETF, and helped defeat the most dangerous aspects of these proposals—at least for now.
Looking Ahead
The AI copyright battles of 2025 were never just about compensation—they were about control. EFF will continue working in courts, legislatures, and standards bodies to protect creativity and innovation from copyright maximalists.
Why Isn’t Online Age Verification Just Like Showing Your ID In Person?
This blog also appears in our Age Verification Resource Hub: our one-stop shop for users seeking to understand what age-gating laws actually do, what’s at stake, how to protect yourself, and why EFF opposes all forms of age verification mandates. Head to EFF.org/Age to explore our resources and join us in the fight for a free, open, private, and yes—safe—internet.
One of the most common refrains we hear from age verification proponents is that online ID checks are nothing new. After all, you show your ID at bars and liquor stores all the time, right? And it’s true that many places age-restrict access in-person to various goods and services, such as tobacco, alcohol, firearms, lottery tickets, and even tattoos and body piercings.
But this comparison falls apart under scrutiny. There are fundamental differences between flashing your ID to a bartender and uploading government documents or biometric data to websites and third-party verification companies. Online age-gating is more invasive, affects far more people, and poses serious risks to privacy, security, and free speech that simply don't exist when you buy a six-pack at the corner store.
Online age verification burdens many more people.
Online age restrictions are imposed on many, many more users than in-person ID checks. Because of the sheer scale of the internet, regulations affecting online content sweep in an enormous number of adults and youth alike, forcing them to disclose sensitive personal data just to access lawful speech, information, and services.
Additionally, age restrictions in the physical world affect only a limited number of transactions: those involving a narrow set of age-restricted products or services. Typically this entails a bounded interaction about one specific purchase.
Online age verification laws, on the other hand, target a broad range of internet activities and general purpose platforms and services, including social media sites and app stores. And these laws don’t just wall off specific content deemed harmful to minors (like a bookstore would); they age-gate access to websites wholesale. This is akin to requiring ID every time a customer walks into a convenience store, regardless of whether they want to buy candy or alcohol.
There are significant privacy and security risks that don’t exist offline.
In offline, in-person scenarios, a customer typically provides their physical ID to a cashier or clerk directly. Oftentimes, customers need only flash their ID for a quick visual check, and no personal information is uploaded to the internet, transferred to a third-party vendor, or stored. Online age-gating, on the other hand, forces users to upload—not just momentarily display—sensitive personal information to a website in order to gain access to age-restricted content.
This creates a cascade of privacy and security problems that don’t exist in the physical world. Once sensitive information like a government-issued ID is uploaded to a website or third-party service, there is no guarantee it will be handled securely. You have no direct control over who receives and stores your personal data, where it is sent, or how it may be accessed, used, or leaked outside the immediate verification process.
Data submitted online rarely just stays between you and one other party. All online data is transmitted through a host of third-party intermediaries, and almost all websites and services also host a network of dozens of private, third-party trackers managed by data brokers, advertisers, and other companies that are constantly collecting data about your browsing activity. The data is shared with or sold to additional third parties and used to target behavioral advertisements. Age verification tools also often rely on third parties just to complete a transaction: a single instance of ID verification might involve two or three different third-party partners, and age estimation services often work directly with data brokers to offer a complete product. Users’ personal identifying data then circulates among these partners.
All of this increases the likelihood that your data will leak or be misused. Unfortunately, data breaches are an endemic part of modern life, and the sensitive, often immutable, personal data required for age verification is just as susceptible to being breached as any other online data. Age verification companies can be—and already have been—hacked. Once that personal data gets into the wrong hands, victims are vulnerable to targeted attacks both online and off, including fraud and identity theft.
Troublingly, many age verification laws don’t even protect user security by providing a private right of action to sue a company if personal data is breached or misused. This leaves you without a direct remedy should something bad happen.
Some proponents claim that age estimation is a privacy-preserving alternative to ID-based verification. But age estimation tools still require biometric data collection, often demanding users submit a photo or video of their face to access a site. And again, once submitted, there’s no way for you to verify how that data is processed or stored. Requiring face scans also normalizes pervasive biometric surveillance and creates infrastructure that could easily be repurposed for more invasive tracking. Once we’ve accepted that accessing lawful speech requires submitting our faces for scanning, we’ve crossed a threshold that’s difficult to walk back.
Online age verification creates even bigger barriers to access.
Online age gates create more substantial access barriers than in-person ID checks do. For those concerned about privacy and security, there is no online analog to a quick visual check of your physical ID. Users may be justifiably discouraged from accessing age-gated websites if doing so means uploading personal data and creating a potentially lasting record of their visit to that site.
Given these risks, age verification also imposes barriers to remaining anonymous that don't typically exist in-person. Anonymity can be essential for those wishing to access sensitive, personal, or stigmatized content online. And users have a right to anonymity, which is “an aspect of the freedom of speech protected by the First Amendment.” Even if a law requires data deletion, users must still be confident that every website and online service with access to their data will, in fact, delete it—something that is in no way guaranteed.
In-person ID checks are additionally less likely to wrongfully exclude people due to errors. Online systems that rely on facial scans are often incorrect, especially when applied to users near the legal age of adulthood. These tools are also less accurate for people with Black, Asian, Indigenous, and Southeast Asian backgrounds, for users with disabilities, and for transgender individuals. This leads to discriminatory outcomes and exacerbates harm to already marginalized communities. And while in-person shoppers can speak with a store clerk if issues arise, these online systems often rely on AI models, leaving users who are incorrectly flagged as minors with little recourse to challenge the decision.
In-person interactions may also be less burdensome for adults who don’t have up-to-date ID. An older adult who forgets their ID at home or lacks current identification is not likely to face the same difficulty accessing material in a physical store, since there are usually distinguishing physical differences between young adults and those older than 35. A visual check is often enough. This matters, as a significant portion of the U.S. population does not have access to up-to-date government-issued IDs. This disproportionately affects Black Americans, Hispanic Americans, immigrants, and individuals with disabilities, who are less likely to possess the necessary identification.
We’re talking about First Amendment-protected speech.
It's important not to lose sight of what’s at stake here. The good or service age-gated by these laws isn’t alcohol or cigarettes—it’s First Amendment-protected speech. Whether the target is social media platforms or any other online forum for expression, age verification blocks access to constitutionally-protected content.
Access to many of these online services is also necessary to participate in the modern economy. While those without ID may function just fine without being able to purchase luxury products like alcohol or tobacco, requiring ID to participate in basic communication technology significantly hinders people’s ability to engage in economic and social life.
This is why it’s wrong to claim online age verification is equivalent to showing ID at a bar or store. This argument handwaves away genuine harms to privacy and security, dismisses barriers to access that will lock millions out of online spaces, and ignores how these systems threaten free expression. Ignoring these threats won’t protect children, but it will compromise our rights and safety.
Age Verification Is Coming For the Internet. We Built You a Resource Hub to Fight Back.
Age verification laws are proliferating fast across the United States and around the world, creating a dangerous and confusing tangle of rules about what we’re all allowed to see and do online. Though these mandates claim to protect children, in practice they create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.
The term “age verification” is colloquially used to describe a wide range of age assurance technologies, from age verification systems that force you to upload government ID, to age estimation tools that scan your face, to systems that infer your age by making you share personal data. While different laws call for different methods, one thing remains constant: every method out there collects your sensitive, personal information and creates barriers to accessing the internet. We refer to all of these requirements as age verification, age assurance, or age-gating.
If you’re feeling overwhelmed by this onslaught of laws and the invasive technologies behind them, you’re not alone. It’s a lot. But understanding how these mandates work and who they harm is critical to keeping yourself and your loved ones safe online. Age verification is lurking around every corner these days, so we must fight back to protect the internet that we know and love.
That’s why today, we’re launching EFF’s Age Verification Resource Hub (EFF.org/Age): a one-stop shop to understand what these laws actually do, what’s at stake, why EFF opposes all forms of age verification, how to protect yourself, and how to join the fight for a free, open, private, and yes—safe—internet.
Why Age Verification Mandates Are a Problem
In the U.S., more than half of all states have now passed laws imposing age-verification requirements on online platforms. Congress is considering even more at the federal level, with a recent House hearing weighing nineteen distinct proposals relating to young people’s online safety—some sweeping, some contradictory, and each one more drastic and draconian than the last.
The rest of the world is moving in the same direction. The UK’s Online Safety Act went into effect this summer, Australia’s new law barring access to social media for anyone under 16 goes live today, and a slew of other countries are currently considering similar restrictions.
We all want young people to be safe online. However, age verification is not the silver bullet that lawmakers want you to think it is. In fact, age-gating mandates will do more harm than good—especially for the young people they claim to protect. They undermine the fundamental speech rights of adults and young people alike; create new barriers to accessing vibrant, lawful, even life-saving content; and needlessly jeopardize all internet users’ privacy, anonymity, and security.
If legislators want to meaningfully improve online safety, they should pass a strong, comprehensive federal privacy law instead of building new systems of surveillance, censorship, and exclusion.
What’s Inside the Resource Hub
Our new hub is built to answer the questions we hear from users every day, such as:
- How do age verification laws actually work?
- What’s the difference between age verification, age estimation, age assurance, and all the other confusing technical terms I’m hearing?
- What’s at stake for me, and who else is harmed by these systems?
- How can I keep myself, my family, and my community safe as these laws continue to roll out?
- What can I do to fight back?
- And if not age verification, what else can we do to protect the online safety of our young people?
Head over to EFF.org/Age to explore our explainers, user-friendly guides, technical breakdowns, and advocacy tools—all indexed in the sidebar for easy browsing. And today is just the start, so keep checking back over the next several weeks as we continue to build out the site with new resources and answers to more of your questions on all things age verification.
Join Us: Reddit AMA & EFFecting Change Livestream Events
To celebrate the launch of EFF.org/Age, and to hear directly from you how we can be most helpful in this fight, we’re hosting two exciting events:
1. Reddit AMA on r/privacy
Next week, our team of EFF activists, technologists, and lawyers will be hanging out over on Reddit’s r/privacy subreddit to directly answer your questions on all things age verification. We’re looking forward to connecting with you and hearing how we can help you navigate these changing tides, so come on over to r/privacy on Monday (12/15), Tuesday (12/16), and Wednesday (12/17), and ask us anything!
2. EFFecting Change Livestream Panel: “The Human Cost of Online Age Verification”
Then, on January 15th at 12pm PT, we’re hosting a livestream panel featuring Cynthia Conti-Cook, Director of Research and Policy at the Collaborative Research Center for Resilience; Hana Memon, Software Developer at Gen Z for Change; EFF Director of Engineering Alexis Hancock; and EFF Associate Director of State Affairs Rindala Alajaji. We’ll break down how these laws work, who they exclude, and how these mandates threaten privacy and free expression for people of all ages. Join us by RSVPing at https://livestream.eff.org/.
A Resource to Empower Users
Age-verification mandates are reshaping the internet in ways that are invasive, dangerous, and deeply unnecessary. But users are not powerless! We can challenge these laws, protect our digital rights, and build a safer digital world for all internet users, no matter their ages. Our new resource hub is here to help—so explore, share, and join us in the fight for a better internet.
The Best Big Media Merger Is No Merger at All
The state of streaming is... bad. It’s very bad. The first step in wanting to watch anything is a web search: “Where can I stream X?” Then you have to scroll past an AI summary with no answers, and then scroll past the sponsored links. After that, you find out that the thing you want to watch was made by a studio that doesn’t exist anymore or doesn’t have a streaming service. So, even though you subscribe to more streaming services than you could actually name, you will have to buy a digital copy to watch. A copy that you do not actually own, despite paying for it specifically, and that might vanish in a few years.
Then, after you’ve paid to see something multiple times in multiple ways (theater ticket, VHS tape, DVD, etc.), the mega-corporations behind this nightmare will try to get Congress to pass laws to ensure you keep paying them. In the end, this is easier than making a product that works. Or, as someone put it on social media, these companies have forgotten “that their entire existence relies on being slightly more convenient than piracy.”
It’s important to recognize this as we see more and more media mergers. These mergers are not about quality, they’re about control.
In the old days, studios made a TV show. If the show was a hit, they increased how much they charged companies to place ads during the show. And if the show was a hit for long enough, they sold syndication rights to another channel. Then people could discover the show again, and maybe come back to watch it air live. In that model, the goal was to spread access to a program as much as possible to increase viewership and the number of revenue streams.
Now, in the digital age, studios have picked up a Silicon Valley trait: putting all their eggs into the basket of “increasing the number of users.” To do that, they have to create scarcity. There has to be only one destination for the thing you’re looking for, and it has to be their own. And you shouldn’t be able to control the experience at all. They should.
They’ve also moved away from creating buzzy new exclusives to get you to pay them. That requires risk and also, you know, paying creative people to make them. Instead, they’re consolidating.
Media companies keep announcing mergers and acquisitions. They’ve been doing it for a long time, but it’s really ramped up in the last few years. And these mergers are bad for all the obvious reasons. There are the speech and censorship reasons that came to a head in, of all places, late night television. There are the labor issues. There are the concentration of power issues. There is the obvious problem that the fewer studios exist, the fewer chances good art has to escape Hollywood and make it to our eyes and ears. But when it comes specifically to digital life, two issues stand out: consumer experience and ownership.
First, the more content that comes under a single corporation’s control, the more they expect you to come to them for it. And the more they want to charge. And because there is less competition, the less they need to work to make their streaming app usable. They then enforce their hegemony by using the draconian copyright restrictions they’ve lobbied for to cripple smaller competitors, critics, and fair use.
When everything is either Disney or NBCUniversal or Warner Brothers-Discovery-Paramount-CBS and everything is totally siloed, what need will they have to spend money improving any part of their product? Making things is hard; stopping others from proving how bad you are is easy, thanks to how broken copyright law is.
Furthermore, because every company is chasing increasing subscriber numbers instead of multiple revenue streams, they have an interest in preventing you from ever again “owning” a copy of a work. This was always sort of part of the business plan, but it was on a scale of a) once every couple of years, b) at least it came, in theory, with some new features or enhanced quality and c) you actually owned the copy you paid for. Now they want you to pay them every month for access to the same copy. And, hey, the price is going to keep going up the fewer options you have. Or you will see more ads. Or start seeing ads where there weren’t any before.
On the one hand, the increasing dependence on direct subscriber numbers does give users back some power. Jimmy Kimmel’s reinstatement by ABC was partly due to the fact that the company was about to announce a price hike for Disney+ and it couldn’t handle losing users due to the new price and due to popular outrage over Kimmel’s treatment.
On the other hand, well, there's everything else.
The latest kerfuffle is over the sale of Warner Brothers-Discovery, a company that was already the subject of a sale and merger resulting in the hyphen. Netflix was competing against Paramount Skydance, another recently merged media megazord.
Warner Brothers-Discovery accepted a bid from Netflix, enraging Paramount Skydance, which has now launched a hostile takeover.
Now the optimum outcome is for neither of these takeovers to happen. There are already too few players in Hollywood. It does nothing for the health of the industry to allow either merger. A functioning antitrust regime would stop both the sale and the hostile takeover attempt, full stop. But Hollywood and the federal government are frequent collaborators, and the feds have little incentive to stop Hollywood’s behemoths from growing even further, as long as they continue to play their role as propagandists for the American empire.
The promise of the digital era was in part convenience. You never again had to look at TV listings to find out when something would be airing. Virtually unlimited digital storage meant everything would be at your fingertips. But then the corporations went to work to make sure it never happened. And with each and every merger, that promise gets further and further away.
