EFF: Updates
EFF to Court: Don’t Make Embedding Illegal
Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whoever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if that third party promotes the infringement), but isn’t on the hook for direct infringement.
The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user who links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person who controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video, or rewriting an article.
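To see why, it helps to look at what an embed actually is. The sketch below is purely illustrative (the function name and URL are ours, not from the case): an embedding page stores only a reference to the remote file, while the bytes themselves always come from the host’s server.

```python
def embed_tag(url: str) -> str:
    """Build the markup an embedding page would contain.

    Note what is stored: only the URL. The content itself never
    passes through the embedding site; the host's server decides,
    at view time, what (if anything) comes back for that address.
    """
    return f'<embed src="{url}">'  # could equally be <img> or <iframe>

# The embedding page is byte-for-byte unchanged even if the host
# later swaps the file at this address for something else entirely.
tag = embed_tag("https://host.example/photo.jpg")
```

If the host replaces the file tomorrow, every page carrying this tag displays the new content without being edited—which is exactly why the person who embeds is in a poor position to police infringement.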
But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.
The Court should decline, or risk destabilizing fundamental and useful online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.
Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If they are correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.
Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.
Related Cases: Emmerich Newspapers v. Particle Media

National Book Tour for Cindy Cohn’s Memoir, ‘Privacy’s Defender’
SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will launch her memoir, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press, March 10), with events in San Francisco and Berkeley before embarking on a national book tour.
In Privacy’s Defender, Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.
The book will be Cohn’s swan song at EFF, as she is stepping down as executive director later this year after 25 years with the organization. And there’s no timelier topic: everyone should be concerned about privacy right now, as the federal government consolidates and weaponizes data, companies track our every click, and law enforcement agencies from local police to ICE keep tabs on all of us, everywhere we go, every day.
The Privacy’s Defender tour will begin with a free event at San Francisco’s famed City Lights Bookstore (261 Columbus Ave., San Francisco, CA 94133), moderated by bestselling author and EFF Special Advisor Cory Doctorow, at 7 p.m. PT on Tuesday, March 10.
Then EFF will host a launch party at Berkeley’s Ciel Creative Space (940 Parker St., Berkeley, CA 94710) moderated by bestselling author Annalee Newitz at 7 p.m. PT on Thursday, March 12; tickets cost $12.50-$20.
The book tour will also include events in Portland, OR; Seattle; Denver; Cambridge, MA; Ann Arbor, MI; and Iowa City, IA. Later events are being planned in New York City and Washington, D.C., as well as a May 13 event at Commonwealth Club World Affairs in San Francisco.
Proceeds from sales of the book benefit EFF.
“These beautifully written stories show why the fight for privacy is worth having and reveal all that Cindy Cohn and EFF have done to establish the modern privacy doctrine as the essential core of a free society.” -- Lawrence Lessig, Harvard University; author of How to Steal a Presidential Election
“Cindy Cohn gives readers a first-person window into some of the pivotal legal disputes of the digital era and reminds us that action and activism are crucial to preserving Americans’ freedom.” -- U.S. Sen. Ron Wyden, D-OR, author of It Takes Chutzpah: How to Fight Fearlessly for Progressive Change
“Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions.” -- Edward Snowden, whistleblower; author of Permanent Record
For the San Francisco event: https://citylights.com/events/cindy-cohn-launch-party-for-privacys-defender/
For the Berkeley event: https://www.eff.org/event/privacys-defender-book-launch-party
For more on Privacy’s Defender and the book tour: https://www.eff.org/Privacys-Defender
Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org

Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data
In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.
The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.
The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.
In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against which these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.
It is rare for appellate courts to call into question any search warrants. It’s even rarer for them to deny qualified immunity defenses. The Tenth Circuit’s decision should be celebrated as a big win for protesters and anyone concerned about police immunity for violating people’s constitutional rights. The case is now remanded to the district court to proceed—and hopefully further vindicate the privacy rights we all have in our devices and digital data.
☺️ Trust Us With Your Face | EFFector 38.4
Do you remember the last time you were carded at a bar or restaurant? It was probably such a quick and normal experience that you barely remember it. But have you ever been carded to use the internet? Being required to present your ID to access content online is becoming a growing reality for many. In our EFFector newsletter, we explain the dangers of age verification laws and the latest in the fight for privacy and free speech online.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers Discord's controversial rollout of mandatory age verification, a leaked Meta memo on face-scanning smart glasses, and a Super Bowl surveillance ad that said the quiet part out loud.
Prefer to listen in? In our audio companion, EFF Associate Director of State Affairs Rin Alajaji explains how online age verification hurts free expression for all users. Find the conversation on YouTube or the Internet Archive.
EFFECTOR 38.4 - ☺️ Trust Us With Your Face
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against mandatory age verification laws when you support EFF today!
How to Pick Your Password Manager
Phishing and data breaches are a constant on the internet. The single best defense against both is to use a password manager to generate and automatically fill a unique password for every site. While 1Password recently raised its prices, and researchers have published potential flaws in some implementations, using a password manager is still a critical investment in keeping yourself safe on the internet. There are free options, and even ones built into your operating system or browser. We can help you choose.
Password managers protect you from phishing by memorizing the connection between a password and a website, and, if you use the browser integration, filling each password only on the website it belongs to. They protect you from data breaches by making it feasible to use a long, random, unique password on each site. When bad actors get their hands on a data breach that includes email addresses and password data, they will typically try to crack those passwords, and then attempt to login on dozens of different websites with the email address/password combinations from the breach. If you use the same password everywhere, this can turn one site’s data breach into a personal disaster, as many of your accounts get compromised at once.
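The first of those protections can be sketched in a few lines of Python. This is an illustrative toy, not any real manager’s matching logic; production managers also handle subdomains, ports, and registrable-domain rules:

```python
from urllib.parse import urlparse

# Toy vault: each saved credential is bound to the exact host
# where it was created.
vault = {
    "accounts.example.com": ("alice", "correct horse battery staple"),
}

def autofill(page_url: str):
    """Fill a password only if the page's host exactly matches the
    host the credential was saved for. A look-alike phishing domain
    gets nothing, no matter how convincing the page looks."""
    host = urlparse(page_url).hostname
    return vault.get(host)  # None for any unrecognized host

autofill("https://accounts.example.com/login")  # fills the credential
autofill("https://accounts.examp1e.com/login")  # phishing look-alike: None
```

A human squinting at an address bar can miss the swapped character; the exact-match lookup cannot.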
In recent years, the built-in password managers in browsers and operating systems have come a long way but still stumble on cross-platform support. Within the Apple ecosystem, you can use iCloud Keychain, with support for generating passwords, autofill in Safari, and end-to-end encrypted synchronization, so long as you don’t need access to your passwords in Google Chrome or Android (Windows is supported, though). Within the Google ecosystem, you can use Google Password Manager, which also supports password generation, autofill, and sync. Crucially, though, Google Password Manager does not end-to-end encrypt credentials unless you manually enable on-device encryption. Firefox and Microsoft also offer password managers. All of these platform-based options are free, and may already be on your devices. But they tend to lock you into a single-vendor world.
There are also a variety of third-party password managers: some paid, some free, and some open source. Most of these have the advantage of letting you sync your passwords across a wide variety of devices, operating systems, and browsers. Here are four key things to look out for. First, when synchronizing between devices, your passwords should be encrypted end-to-end using a password that only you know (a “master” or “primary” password). Second, support for autofill can reduce the chance that you’ll get phished. Third, security audits performed by third parties can increase confidence that the software really does what it is designed to do. And finally, of course, random generation of unique passwords is a must.
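The last criterion, random generation, is the simplest to illustrate. Here is a minimal sketch using Python’s standard `secrets` module (the length and alphabet are our choices for the example, not a universal recommendation):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a password from a cryptographically secure source.

    secrets.choice draws from the OS's CSPRNG, unlike random.choice,
    whose output is predictable and unsafe for credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()  # a fresh 20-character random password
```

Every real password manager does some version of this for you, which is what makes a unique password per site practical in the first place.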
Don’t let uncertainty or price increases dissuade you from using a password manager. There’s a good choice for everyone, and using one can make your online life a lot safer. Want more help choosing? Check out our Surveillance Self-Defense guide.
Tech Companies Shouldn’t Be Bullied Into Doing Surveillance
The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”
Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance.
In 2025, Anthropic reportedly became the first AI company cleared for use in classified operations and to handle classified information. The current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. That month, Anthropic CEO Dario Amodei wrote to reiterate that surveillance of U.S. persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety, as well as the constitution of their LLM, Claude.
Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.
Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons.
Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.
Tech Companies Shouldn’t Be Bullied Into Doing Surveillance
The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions for their use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk,” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be, “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”
Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance.
In 2025, reportedly Anthropic became the first AI company cleared for use in relation to classified operations and to handle classified information. This current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. In January 2026, Anthropic CEO Dario Amodei wrote to reiterate that surveillance against US persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as their LLM, Claude’s, constitution here.
Now, the U.S. government is threatening to terminate its contract with the company if it doesn’t change course and voluntarily cross those lines.
Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons.
Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.
EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects
We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.
LLMs excel at producing code that looks human-written but often harbors subtle bugs, and those bugs can be replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs also make it easy for well-intentioned people to submit code that suffers from hallucination, omission, exaggeration, or misrepresentation.
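A contrived illustration (not drawn from any actual submission) of the kind of plausible-looking bug that slips past review when code is generated faster than it is understood: Python’s shared mutable default argument.

```python
# Contrived example: this function "looks right" and passes a quick
# glance, but the default list is created once at definition time and
# shared across every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = add_tag("urgent")
second = add_tag("spam")   # silently inherits "urgent" from the first call
assert second == ["urgent", "spam"]

# The version a reviewer has to catch and request: a fresh list per call.
def add_tag_fixed(tag, tags=None):
    tags = [] if tags is None else list(tags)
    tags.append(tag)
    return tags

assert add_tag_fixed("spam") == ["spam"]
```

A human who wrote `add_tag` by hand usually knows why; a contributor pasting generated code may not, which is exactly why we ask submitters to understand what they send us.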
It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not ban LLMs outright, as their use has become so pervasive that a blanket ban would be impractical to enforce.
Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems: code reviews turn into code refactors for our maintainers when a contributor doesn’t understand the code they submitted, and AI-generated contributions can arrive at a scale that makes them only marginally useful, or outright unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.
EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth mentioning that these tools also raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of the harmful tech-industry practices that led us to this point. LLM-generated code isn’t written on a clean slate; it is born out of a climate of companies speedrunning their profits over people. We are once again in the “just trust us” territory of Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.
EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea
Update, February 25, 2026: In response to widespread pushback, Wisconsin lawmakers have removed the provision banning VPN services from S.B. 130 / A.B. 105. The bill now awaits Governor Tony Evers’ signature. While the removal of the VPN provision is a positive step, EFF continues to oppose the bill. Advocates and residents across Wisconsin are urged to maintain pressure and encourage Governor Evers to veto the bill.
Wisconsin’s S.B. 130 / A.B. 105 is a spectacularly bad idea.
It’s an age-verification bill that effectively bans VPN access to certain websites for Wisconsinites and censors lawful speech. We wrote about it last November in our blog “Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing,” but since then, the bill has passed the State Assembly and is scheduled for a vote in the State Senate tomorrow.
In light of this, EFF sent a letter to the entire Wisconsin Legislature urging lawmakers to reject this dangerous bill.
You can read the full letter here.
The short version? This bill both requires invasive age verification for websites that host content lawmakers might deem “sexual” and requires that those sites block any user that connects via a Virtual Private Network (VPN). VPNs are a basic cybersecurity tool used by businesses, universities, journalists, veterans, abuse survivors, and ordinary people who simply don’t want to broadcast their location to every website they visit.
As we lay out in the letter, Wisconsin’s mandate is technically unworkable. Websites cannot reliably determine whether a VPN user is in Wisconsin, a different state, or a different country. To avoid liability, websites are therefore left with an unfortunate choice: over-block IP addresses commonly associated with commercial VPNs, block all Wisconsin users’ access, or impose nationwide restrictions.
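A toy sketch of why this is unworkable. The lookup table and addresses below are invented (they use documentation-reserved IP ranges); real sites use commercial GeoIP databases, which have the same blind spot: they can only place the connecting IP, not the person behind it.

```python
# Hypothetical GeoIP table. A website only ever sees the connecting
# address, so a VPN user appears to be wherever the exit server is.
GEOIP_DB = {
    "203.0.113.10": "Wisconsin, US",   # a home ISP address
    "198.51.100.7": "Amsterdam, NL",   # a VPN provider's exit server
}

def apparent_location(ip: str) -> str:
    """Return the registered location of the connecting IP, if known."""
    return GEOIP_DB.get(ip, "unknown")

# Direct connection: the site sees the user's real region.
assert apparent_location("203.0.113.10") == "Wisconsin, US"

# The same Wisconsinite through a VPN: the site sees the exit server's
# region and cannot distinguish them from an actual Amsterdam resident.
assert apparent_location("198.51.100.7") == "Amsterdam, NL"
```

Because the site cannot tell a Wisconsin VPN user apart from anyone else sharing that exit address, its only compliance options are the blunt ones described above.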
The bill also creates a privacy nightmare. It pushes websites to collect sensitive personal data (e.g., government IDs, financial information, biometric identifiers) just to access lawful speech. At the same time, it broadens the definition of material deemed “harmful to minors” far beyond the narrow categories courts have historically allowed states to regulate (namely, explicit adult sexual materials), sweeping in material that merely describes sex or depicts human anatomy. That combination of mass data collection and vague, expansive speech restrictions invites over-censorship, chills lawful speech, exposes websites to unpredictable enforcement, and is a recipe for data breaches and constitutional overreach.
If you live in Wisconsin, now is the time for you to contact your State Senator and urge them to vote NO on S.B. 130 / A.B. 105. Tell them protecting young people online should not mean undermining cybersecurity, chilling lawful speech, and forcing residents to hand over their IDs just to browse the internet.
As we said last time: Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.
San Jose Can Protect Immigrants by Ending Flock Surveillance System
(This appeared as an op-ed published February 12, 2026 in the San Jose Spotlight, written by Huy Tran (SIREN), Jeffrey Wang (CAIR-SFBA), and Jennifer Pinsof.)
As ICE and other federal agencies continue their assault on civil liberties, local leaders are stepping up to protect their communities. This includes pushing back against automated license plate readers, or ALPRs, which are tools of mass surveillance that can be weaponized against immigrants, political dissidents and other targets.
In recent weeks, Mountain View, Los Altos Hills, Santa Cruz, East Palo Alto and Santa Clara County have begun reconsidering their ALPR programs. San Jose should join them. This dangerous technology poses an unacceptable risk to the safety of immigrants and other vulnerable populations.
ALPRs are marketed to promote public safety. But their utility is debatable and they come with significant drawbacks. They don’t just track “criminals.” They track everyone, all the time. Your vehicle’s movements can reveal where you work, worship and obtain medical care. ALPR vendors like Flock Safety put the location information of millions of drivers into databases, allowing anyone with access to instantly reconstruct the public’s movements.
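A minimal sketch of how scattered, timestamped plate reads become a movement history. The reads below are invented; real networks hold hundreds of millions of such records, and a single query does the rest.

```python
from datetime import datetime

# Each camera read is (plate, timestamp, camera location).
reads = [
    ("ABC123", datetime(2026, 2, 1, 8, 5), "clinic parking lot"),
    ("XYZ789", datetime(2026, 2, 1, 8, 10), "highway onramp"),
    ("ABC123", datetime(2026, 2, 1, 12, 30), "place of worship"),
    ("ABC123", datetime(2026, 2, 1, 17, 45), "workplace garage"),
]

def track(plate: str) -> list[str]:
    """Filter the shared database down to one vehicle's timeline."""
    hits = sorted((t, loc) for p, t, loc in reads if p == plate)
    return [f"{t:%H:%M} {loc}" for t, loc in hits]

# One query turns isolated reads into a timeline showing where the
# driver sought medical care, worshipped, and worked.
print(track("ABC123"))
```

Nothing in this step requires suspicion of a crime; anyone with database access can run it against any plate.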
But “anyone with access” is far broader than just local police. Some California law enforcement agencies have used ALPR networks to run searches related to immigration enforcement. In other situations, purported issues with the system’s software have enabled federal agencies to directly access California ALPR data. This is despite the promises of ALPR vendors and clear legal prohibitions.
Communities are saying enough is enough. Just last week, police in Mountain View decided to turn off all of the city’s Flock cameras, following revelations that federal and other unauthorized agencies had accessed their network. The cameras will remain inactive until the City Council provides further direction.
Other localities have shut off the cameras for good. In January, Los Altos Hills terminated its contract with Flock following concerns about ICE. Santa Cruz severed relations with Flock, citing rising tensions with ICE. Most recently, East Palo Alto and Santa Clara County are reconsidering whether to continue their relationships with Flock, given heightened concern for the safety of immigrant communities.
California law prohibits local police from disclosing ALPR data to out-of-state or federal agencies. But at least 75 California police agencies were sharing these records out-of-state as recently as 2023. Just last year, San Francisco police allowed access to out-of-state agencies and 19 searches were related to ICE.
Even without direct access, ICE can exploit local ALPR systems. One investigation found more than 4,000 cases where police had made searches on behalf of federal law enforcement, including for immigration investigations.
Compounding the risk, law enforcement routinely searches these networks without first obtaining a warrant. In San Jose, police aren’t required to have any suspicion of wrongdoing before searching ALPR databases, which contain a year’s worth of data representing hundreds of millions of records. In a little over a year, San Jose police logged more than 261,000 ALPR searches, or nearly 700 searches a day, all without a warrant.
Two nonprofit organizations, SIREN and CAIR California, represented by Electronic Frontier Foundation and the ACLU of Northern California, are currently suing to stop San Jose’s warrantless searches of ALPR data. But this is only the first step. A better solution is to simply turn these cameras off.
San Jose cannot afford delay. Each day these cameras remain active, they collect sensitive location data that can be misused to target immigrant families and violate fundamental freedoms. It is a risk materializing across California. City leaders must act now to shut down ALPR systems and make clear that public safety will not come at the expense of privacy, human dignity or community trust.
Related Cases: SIREN and CAIR-CA v. San Jose
New Report Helps Journalists Dig Deeper Into Police Surveillance Technology
SAN FRANCISCO — A new report released today offers journalists tips on cutting through the sales hype about police surveillance technology and reporting accurately on costs, benefits, privacy, and accountability as these invasive and often ineffective tools come to communities across the nation.
The “Selling Safety” report is a joint project of the Electronic Frontier Foundation (EFF), the Center for Just Journalism (CJJ), and IPVM.
Police technology is often sold as a silver bullet: a way to modernize departments, make communities safer, and eliminate human bias from policing with algorithmic objectivity. Behind the slick marketing is a sprawling, under-scrutinized industry that relies on manufacturing the appearance of effectiveness, not measuring it. The cost of blindly deferring to advertising can be high in tax dollars, privacy, and civil liberties.
“Selling Safety” helps journalists see through the spin. It breaks down how policing technology companies market their tools, and how those sales claims — which are often misleading — get recycled into media coverage. It offers tools for asking better questions, understanding incentives, and finding local accountability stories.
“The industry that provides technology to law enforcement is one of the most unregulated, unexamined, and consequential in the United States,” said EFF Senior Policy Analyst Matthew Guariglia. “Most Americans would rightfully be horrified to know how many decisions about policing are made: not by public employees, but by multi-billion-dollar surveillance tech companies who have an insatiable profit motive to market their technology as the silver bullet that will stop crime. Lawmakers often are too eager to seem ‘tough on crime’ and journalists too often see an easy story in publishing law enforcement press releases about new technology. This report offers a glimpse into how the police-tech sausage gets made so reporters and lawmakers can recognize the tactics of glossy marketing pitches, manufactured effectiveness numbers, and chumminess between companies and police.”
“Surveillance and other police technologies are spreading faster than public understanding or oversight, leaving journalists to do critical accountability work in real time. We hope this report helps make that work easier,” said Hannah Riley Fernandez, CJJ’s Director of Programming.
"The surveillance technology industry has a documented pattern of making unsubstantiated claims about technology,” said Conor Healy, IPVM's Director of Government Research. “Marketing is not a substitute for evidence. Journalists who go beyond press releases to critically examine vendor claims will often find solutions are not as magical as they may seem. In doing so, they perform essential accountability work that protects both taxpayer dollars and civil liberties."
EFF also maintains resources for understanding various police technologies and mapping those technologies in communities across the United States.
For the “Selling Safety” report: https://www.eff.org/document/selling-safety-journalists-guide-covering-police-technology
For EFF’s Street-Level Surveillance hub: https://sls.eff.org/
For EFF’s Atlas of Surveillance: https://www.atlasofsurveillance.org/
Contact: Beryl Lipton, Senior Investigative Researcher, beryl@eff.org