EFF: Updates
The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People
The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of Anthropic’s products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to grant it unrestricted use of the technology. Anthropic refused, and the DoD retaliated.
There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records when it comes to your civil liberties. It’s good when CEOs step up and do the right thing—but that’s not a sustainable or reliable foundation for our rights. Given the government’s loose interpretations of the law, its knack for finding loopholes to surveil you, and its willingness to engage in illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.
Imposing and enforcing those restrictions is properly a role for Congress and the courts, not the private sector.
The companies know this. Speaking about the specific risk that AI poses to privacy, Anthropic CEO Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data that has been produced on Americans, locations, personal information, political affiliations, to build profiles, and it’s now possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up or the laws passed by Congress have not caught up.”
The example he cites is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of people’s devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies that could do the analysis, including Palantir, which performs AI-enabled analysis of huge amounts of data, these concerns are incredibly well founded.
But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy—or at least to refuse to help the government violate it.
Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government’s use of their data, and among adults who have heard of AI, 70% have little to no trust in how companies use those products), you would think politicians would be leaping over each other to create the best legislation and companies would be promising us the most robust privacy-protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, paddling our own life rafts.
EFF has always fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest upon the whims of CEOs and backroom deals with the surveillance state.
EFF to Supreme Court: Shut Down Unconstitutional Geofence Searches
WASHINGTON, D.C. – The Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), the ACLU of Virginia, and the Center on Privacy & Technology at Georgetown Law filed a brief Monday urging the U.S. Supreme Court to rule that invasive geofence warrants are unconstitutional.
The brief argues that geofence warrants—which compel companies to provide information on every electronic device in a given area during a given time period—are the digital version of the exploratory rummaging that the drafters of the Fourth Amendment specifically intended to prevent.
Unlike typical warrants, geofence warrants do not name a suspect or even target a specific individual or device. Instead, police cast a digital dragnet, demanding location data on every device in a geographic area during a certain time period, regardless of whether the device owner has any connection to the crime under investigation. These searches simultaneously impact the privacy of millions and turn innocent bystanders into suspects, just for being in the wrong place at the wrong time.
The Supreme Court agreed earlier this year to hear Chatrie v. United States, in which a 2019 geofence warrant compelled Google to search the accounts of all its hundreds of millions of users to see if any of them were within a radius police drew around a Northern Virginia crime scene. This area amounted to several football fields in size and encompassed numerous homes, businesses, and a church. The amicus brief, filed Monday, argues that allowing this sweeping power to go unchecked is inconsistent with the basic freedoms of a democratic society.
"This is not traditional police work, but rather the leveraging of new and powerful technology to claim a novel and formidable power over the people," the brief states. "By their very nature, geofence searches turn innocent bystanders into suspects and leverage even purportedly limited searches into larger dragnets, causing intrusions at a scale far beyond those held unconstitutional in the physical world."
The brief also cautioned the Court not to authorize future geofence warrants based on the facts of the Chatrie case, which reflect how such searches were conducted in 2019. Since July 2025, mass geofence searches of Google users’ location data have not been possible. However, Google is not the only company collecting location data, nor the only way for police to access mass amounts of data on people with no connection to a crime. All suspicionless searches drag a net through vast swaths of information in hopes of identifying previously unknown suspects—ensnaring innocent bystanders along the way.
"To courts, to lawmakers, and to tech companies themselves, EFF has repeatedly argued that these high-tech efforts to pull suspects out of thin air cannot be constitutional, even with a warrant," said EFF Surveillance Litigation Director Andrew Crocker. "The Supreme Court should find once and for all that geofence searches are just the kind of impermissible general warrants that the Framers of the Constitution so reviled."
For the brief: https://www.eff.org/document/chatrie-v-united-states-eff-supreme-court-amicus-brief
Tags: geofence warrants
Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org
EFF to Court: Don’t Make Embedding Illegal
Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whoever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if that third party promotes the infringement), but isn’t on the hook under most circumstances.
The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user that links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person that controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video or rewriting an article.
But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.
The Court should decline, or risk destabilizing fundamental, and useful, online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.
Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If they are correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.
Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.
Related Cases: Emmerich Newspapers v. Particle Media
National Book Tour for Cindy Cohn’s Memoir, ‘Privacy’s Defender’
SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will launch her memoir, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press, March 10), with events in San Francisco and Berkeley before embarking on a national book tour.
In Privacy’s Defender, Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.
The book will be Cohn’s swansong at EFF as she’s stepping down as executive director later this year after 25 years with the organization. And there’s no timelier topic: Everyone should be concerned about privacy right now, as the federal government consolidates and weaponizes data, companies track our every click, and law enforcement from local police to ICE keep tabs on all of us, everywhere we go, every day.
The Privacy’s Defender tour will begin with a free event at San Francisco’s famed City Lights Bookstore (261 Columbus Ave., San Francisco, CA 94133) moderated by bestselling author and EFF Special Advisor Cory Doctorow, at 7 p.m. PT on Tuesday, March 10.
Then EFF will host a launch party at Berkeley’s Ciel Creative Space (940 Parker St., Berkeley, CA 94710) moderated by bestselling author Annalee Newitz at 7 p.m. PT on Thursday, March 12; tickets cost $12.50-$20.
The book tour will also include events in Portland, OR; Seattle; Denver; Cambridge, MA; Ann Arbor, MI; and Iowa City, IA. Later events are being planned in New York City and Washington, D.C., as well as a May 13 event at Commonwealth Club World Affairs in San Francisco.
Proceeds from sales of the book benefit EFF.
“These beautifully written stories show why the fight for privacy is worth having and reveal all that Cindy Cohn and EFF have done to establish the modern privacy doctrine as the essential core of a free society.” -- Lawrence Lessig, Harvard University; author of How to Steal a Presidential Election
“Cindy Cohn gives readers a first-person window into some of the pivotal legal disputes of the digital era and reminds us that action and activism are crucial to preserving Americans’ freedom.” -- U.S. Sen. Ron Wyden, D-OR, author of It Takes Chutzpah: How to Fight Fearlessly for Progressive Change
“Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions.” -- Edward Snowden, whistleblower; author of Permanent Record
For the San Francisco event: https://citylights.com/events/cindy-cohn-launch-party-for-privacys-defender/
For the Berkeley event: https://www.eff.org/event/privacys-defender-book-launch-party
For more on Privacy’s Defender and the book tour: https://www.eff.org/Privacys-Defender
Contact: Karen Gullo, Senior Writer for Free Speech and Privacy, karen@eff.org
Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data
In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.
The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.
The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.
In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against which these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.
It is rare for appellate courts to call into question any search warrants. It’s even rarer for them to deny qualified immunity defenses. The Tenth Circuit’s decision should be celebrated as a big win for protesters and anyone concerned about police immunity for violating people’s constitutional rights. The case is now remanded back to the district court to proceed—and hopefully further vindicate the privacy rights we all have in our devices and digital data.
Victory! Tenth Circuit Finds Fourth Amendment Doesn’t Support Broad Search of Protesters’ Devices and Digital Data
In a big win for protesters’ rights, the U.S. Court of Appeals for the Tenth Circuit overturned a lower court’s dismissal of a challenge to sweeping warrants to search a protester’s devices and digital data and a nonprofit’s social media data.
The case, Armendariz v. City of Colorado Springs, arose after a housing protest in 2021, during which Colorado Springs police arrested protesters for obstructing a roadway. After the demonstration, police also obtained warrants to seize and search through the devices and data of Jacqueline Armendariz Unzueta, who they claimed threw a bike at them during the protest. The warrants included a search through all of her photos, videos, emails, text messages, and location data over a two-month period, as well as a time-unlimited search for 26 keywords, including words as broad as “bike,” “assault,” “celebration,” and “right,” that allowed police to comb through years of Armendariz’s private and sensitive data—all supposedly to look for evidence related to the alleged simple assault. Police further obtained a warrant to search the Facebook page of the Chinook Center, the organization that spearheaded the protest, despite the Chinook Center never having been accused of a crime.
The district court dismissed the civil rights lawsuit brought by Armendariz and the Chinook Center, holding that the searches were justified and that, in any case, the officers were entitled to qualified immunity. The plaintiffs, represented by the ACLU of Colorado, appealed. EFF—joined by the Center for Democracy and Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—wrote an amicus brief in support of that appeal.
In a 2-1 opinion, the Tenth Circuit reversed the district court’s dismissal of the lawsuit’s Fourth Amendment search and seizure claims. The court painstakingly picked apart each of the three warrants and found them to be overbroad and lacking in particularity as to the scope and duration of the searches. The court further held that in furnishing such facially deficient warrants, the officers violated “clearly established” law and thus were not entitled to qualified immunity. Although the court did not explicitly address the First Amendment concerns raised by the lawsuit, it did note the backdrop against which these searches were carried out, including animus by Colorado Springs police leading up to the housing protest.
It is rare for appellate courts to call into question any search warrants. It’s even rarer for them to deny qualified immunity defenses. The Tenth Circuit’s decision should be celebrated as a big win for protesters and anyone concerned about police immunity for violating people’s constitutional rights. The case is now remanded to the district court to proceed—and hopefully further vindicate the privacy rights we all have in our devices and digital data.
☺️ Trust Us With Your Face | EFFector 38.4
Do you remember the last time you were carded at a bar or restaurant? It was probably such a quick and routine experience that you barely remember it. But have you ever been carded to use the internet? Being required to present your ID to access content online is becoming a growing reality for many. In our EFFector newsletter, we're explaining the dangers of age verification laws and the latest in the fight for privacy and free speech online.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers Discord's controversial rollout of mandatory age verification, a leaked Meta memo on face-scanning smart glasses, and a Super Bowl surveillance ad that said the quiet part out loud.
Prefer to listen in? In our audio companion, EFF Associate Director of State Affairs Rin Alajaji explains how online age verification hurts free expression for all users. Find the conversation on YouTube or the Internet Archive.
EFFECTOR 38.4 - ☺️ Trust Us With Your Face
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against mandatory age verification laws when you support EFF today!
How to Pick Your Password Manager
Phishing and data breaches are a constant on the internet. The single best defense against both is to use a password manager to generate and automatically fill a unique password for every site. While 1Password has recently raised their prices and researchers have published potential flaws in some implementations, using a password manager is still a critical investment in keeping yourself safe on the internet. There are free options, and even ones built into your operating system or browser. We can help you choose.
Password managers protect you from phishing by memorizing the connection between a password and a website, and, if you use the browser integration, filling each password only on the website it belongs to. They protect you from data breaches by making it feasible to use a long, random, unique password on each site. When bad actors get their hands on a data breach that includes email addresses and password data, they will typically try to crack those passwords, and then attempt to log in on dozens of different websites with the email address/password combinations from the breach. If you use the same password everywhere, this can turn one site’s data breach into a personal disaster, as many of your accounts get compromised at once.
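The anti-phishing protection above comes down to exact origin matching: the manager stores which site a credential belongs to and refuses to fill it anywhere else, so a lookalike domain that might fool a human gets nothing. A minimal sketch of the idea (the vault contents and domains here are purely illustrative):

```python
# Sketch: a password manager fills a credential only when the current
# site's origin exactly matches the origin stored alongside it.
vault = {
    "https://example.com": ("alice", "correct-horse-battery-staple"),
}

def autofill(current_origin: str):
    """Return the stored credential for this exact origin, or None."""
    # A human might not notice "examp1e.com"; exact matching does.
    return vault.get(current_origin)

assert autofill("https://example.com") is not None  # real site: filled
assert autofill("https://examp1e.com") is None      # lookalike: refused
```

Real managers normalize and compare registrable domains rather than raw strings, but the principle is the same: the fill decision is made by software that cannot be tricked by a convincing-looking page.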
In recent years, the built-in password managers in browsers and operating systems have come a long way but still stumble on cross-platform support. Within the Apple ecosystem, you can use iCloud Keychain, with support for generating passwords, autofill in Safari, and end-to-end encrypted synchronization, so long as you don’t need access to your passwords in Google Chrome or Android (Windows is supported, though). Within the Google ecosystem, you can use Google Password Manager, which also supports password generation, autofill, and sync. Crucially, though, Google Password Manager does not end-to-end encrypt credentials unless you manually enable on-device encryption. Firefox and Microsoft also offer password managers. All of these platform-based options are free, and may already be on your devices. But they tend to lock you into a single-vendor world.
There are also a variety of third-party password managers, some paid, and some free, and some open source. Most of these have the advantage of letting you sync your passwords across a wide variety of devices, operating systems, and browsers. Here are four key things to look out for. First, when synchronizing between devices, your passwords should be encrypted end-to-end using a password that only you know (a “master” or “primary” password). Second, support for autofill can reduce the chance that you’ll get phished. Third, security audits performed by third parties can increase confidence that the software really does what it is designed to do. And finally, of course, random generation of unique passwords is a must.
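The last of those requirements, random generation, is simple to get right with a cryptographically secure randomness source; any reputable manager does something equivalent to this sketch (the length and character set are illustrative choices, not any particular product's defaults):

```python
import secrets
import string

# Sketch: generate a long, random password from a CSPRNG, the way
# password managers do. Twenty characters drawn from a 72-symbol
# alphabet gives well over 120 bits of entropy, far beyond what
# offline cracking of a breached hash can realistically reach.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

The key detail is `secrets` rather than `random`: the latter is predictable and must never be used for credentials.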
Don’t let uncertainty or price increases dissuade you from using a password manager. There’s a good choice for everyone, and using one can make your online life a lot safer. Want more help choosing? Check out our Surveillance Self-Defense guide.
Tech Companies Shouldn’t Be Bullied Into Doing Surveillance
The Secretary of Defense has given an ultimatum to the artificial intelligence company Anthropic in an attempt to bully them into making their technology available to the U.S. military without any restrictions on its use. Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance. The Department of Defense has reportedly threatened to label Anthropic a “supply chain risk” in retribution for not lifting restrictions on how their technology is used. According to WIRED, that label would be “a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work.”
Anthropic should stick by their principles and refuse to allow their technology to be used in the two ways they have publicly stated they would not support: autonomous weapons systems and surveillance.
In 2025, Anthropic reportedly became the first AI company cleared for use in classified operations and to handle classified information. The current controversy, however, began in January 2026 when, through a partnership with defense contractor Palantir, Anthropic came to suspect their AI had been used during the January 3 attack on Venezuela. Anthropic CEO Dario Amodei then wrote to reiterate that surveillance against U.S. persons and autonomous weapons systems were two “bright red lines” not to be crossed, or at least topics that needed to be handled with “extreme care and scrutiny combined with guardrails to prevent abuses.” You can also read Anthropic’s self-proclaimed core views on AI safety here, as well as the constitution of their LLM, Claude, here.
Now, the U.S. government is threatening to terminate the government’s contract with the company if it doesn’t switch gears and voluntarily jump right across those lines.
Companies, especially technology companies, often fail to live up to their public statements and internal policies related to human rights and civil liberties for all sorts of reasons, including profit. Government pressure shouldn’t be one of those reasons.
Whatever the U.S. government does to threaten Anthropic, the AI company should know that their corporate customers, the public, and the engineers who make their products are expecting them not to cave. They, and all other technology companies, would do best to refuse to become yet another tool of surveillance.
EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects
We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.
LLMs excel at producing code that looks human-generated but often contains underlying bugs that can be replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs make it easy for well-intentioned people to submit code that suffers from hallucination, omission, exaggeration, or misrepresentation.
It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not ban LLMs outright, as their use has become so pervasive that a blanket ban would be impractical to enforce.
Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems: code reviews turn into code refactors for our maintainers when a contributor doesn’t understand the code they submitted, and AI-generated contributions can arrive at a scale that makes them only marginally useful, or potentially unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.
EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth mentioning that these tools raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of tech companies’ harmful practices that led us to this point. LLM-generated code isn’t written on a clean slate; it is born out of a climate of companies speedrunning their profits over people. We are once again in “just trust us” territory of Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.
EFF to Wisconsin Legislature: VPN Bans Are Still a Terrible Idea
Update, February 25, 2026: In response to widespread pushback, Wisconsin lawmakers have removed the provision banning VPN services from S.B. 130 / A.B. 105. The bill now awaits Governor Tony Evers’ signature. While the removal of the VPN provision is a positive step, EFF continues to oppose the bill. Advocates and residents across Wisconsin are urged to maintain pressure and encourage Governor Evers to veto the bill.
Wisconsin’s S.B. 130 / A.B. 105 is a spectacularly bad idea.
It’s an age-verification bill that effectively bans VPN access to certain websites for Wisconsinites and censors lawful speech. We wrote about it last November in our blog “Lawmakers Want to Ban VPNs—And They Have No Idea What They're Doing,” but since then, the bill has passed the State Assembly and is scheduled for a vote in the State Senate tomorrow.
In light of this, EFF sent a letter to the entire Wisconsin Legislature urging lawmakers to reject this dangerous bill.
You can read the full letter here.
The short version? This bill both requires invasive age verification for websites that host content lawmakers might deem “sexual” and requires that those sites block any user that connects via a Virtual Private Network (VPN). VPNs are a basic cybersecurity tool used by businesses, universities, journalists, veterans, abuse survivors, and ordinary people who simply don’t want to broadcast their location to every website they visit.
As we lay out in the letter, Wisconsin’s mandate is technically unworkable. Websites cannot reliably determine whether a VPN user is in Wisconsin, a different state, or a different country. So websites are faced with an unfortunate choice: over-block IP addresses commonly associated with commercial VPNs, block all Wisconsin users’ access, or impose nationwide restrictions just to avoid liability.
The bill also creates a privacy nightmare. It pushes websites to collect sensitive personal data (e.g., government IDs, financial information, biometric identifiers) just to access lawful speech. At the same time, it broadens the definition of material deemed “harmful to minors” far beyond the narrow categories courts have historically allowed states to regulate (namely, explicit adult sexual material), sweeping in material that merely describes sex or depicts human anatomy. That combination—mass data collection plus vague, expansive speech restrictions—is a recipe for over-censorship, chilled lawful speech, data breaches, and constitutional overreach.
If you live in Wisconsin, now is the time for you to contact your State Senator and urge them to vote NO on S.B. 130 / A.B. 105. Tell them protecting young people online should not mean undermining cybersecurity, chilling lawful speech, and forcing residents to hand over their IDs just to browse the internet.
As we said last time: Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.
