EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Banning New Foreign Routers Mistargets Products to Fix Real Problem

Wed, 04/08/2026 - 3:24pm

On March 23, the FCC issued an update to its Covered List, a list of equipment banned from obtaining the regulatory approval necessary for U.S. sale (and thus effectively a ban on the sale of new devices), to include all new routers produced in foreign countries unless they are specifically granted an exception by the Department of Defense (DoD) or the Department of Homeland Security (DHS). The Commission cited “security gaps in foreign-made routers” leading to widespread cyberattacks as justification for the ban, mentioning the high-profile attacks by the Chinese advanced persistent threat actors Volt, Flax, and Salt Typhoon. Although the stated intention is to stem the very real threat of domestic residential routers being commandeered to launch attacks and act as residential proxies, this sweeping move is a blunt instrument that will impact many harmless products. In addition to being far too broad, it won’t even affect many of the vulnerable devices most active in these types of attacks: IoT and connected smart home devices.

Previously, the FCC had changed the Covered List to ban hardware from specific vendors, such as telecom equipment produced by Huawei and Hytera in 2021. This new blanket ban, in contrast, affects the importation and sale of almost all new consumer routers. It does not affect consumer routers produced in the United States, such as Starlink routers made in Texas. While some of the affected routers will be vulnerable to compromises that hijack the devices and use them for cybercrime and attacks, the ban does not distinguish between companies with a track record of producing vulnerable products and those without. As a result, instead of incentivizing security-minded production, it will only limit consumers’ options to U.S.-based manufacturers not affected by the ban—even those that lack stellar security reputations themselves.

While the sale of vulnerable routers in the U.S. will not stop, the announcement quoted an Executive Branch determination that foreign-produced routers introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense.” Yet this move does nothing to address the growing number of connected devices involved in the attacks the ban aims to prevent. As we have previously pointed out, supply chain attacks have resulted in no-name Android TV boxes preloaded with malware, sold by retail giants like Amazon, fueling the massive Kimwolf and BADBOX 2 fraud and residential proxy botnets. The priority should be banning the specific models and manufacturers known to produce dangerous devices that put purchasers at risk, rather than issuing blanket bans that punish reputable brands that do better.

With the FCC’s top commissioner appointed by the President, this ban comes as other parts of the administration impose tariffs and issue dozens of trade-related executive orders aimed at foreign goods. A few larger companies with pockets deep enough to invest in U.S. manufacturing plants may see this as an opportune moment, while others less well positioned to begin U.S. operations may attempt to curry enough favor to be added to the DoD or DHS exception lists. At best, this ill-targeted policy will do little to improve the country’s cybersecurity posture. At worst, it entrenches existing players and deepens problematic quid-pro-quo arrangements.

American consumers deserve better. They deserve the assurance that the devices they use, whether routers or other connected smart home devices, are built to withstand attacks that put themselves and others at risk, no matter where they are manufactured. For this, a nuanced, careful consideration of products (such as was part of the FCC’s 2023-proposed U.S. Cyber Trust Mark) is necessary, rather than blanket bans.

Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law

Wed, 04/08/2026 - 2:13pm

Another court has ruled that copyright can’t be used to keep our laws behind a paywall. The U.S. Court of Appeals for the Third Circuit upheld a lower court’s ruling that it is fair use to copy and disseminate building codes that have been incorporated into federal and state law, even though those codes are developed by private parties who claim copyright in them. The court followed the suggestions EFF and others presented in an amicus brief, and joined a growing list of courts that have placed public access to the law over private copyright holders’ desire for control.

UpCodes created a database of building codes—like the National Electrical Code—that includes codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law, and therefore has the right to control how the public accesses and shares them. Fortunately, neither the Constitution nor the Copyright Act supports that theory. Faced with similar claims, some courts, including the Fifth Circuit Court of Appeals, have held that the codes lose copyright protection when they are incorporated into law. Others, like the D.C. Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, making them fully accessible and usable online is a lawful fair use.

In this case, the Third Circuit found that UpCodes’s copying of the codes was a fair use, in a decision closely following the D.C. Circuit’s reasoning. Fair use turns on four factors listed in the Copyright Act, and the court found that all four favored UpCodes to some degree.

On the first factor, the purpose and character of the use, the court found that UpCodes’s use was “transformative” because it had a separate and distinct purpose from ASTM—informing people about the law, rather than just best practices in the building industry. No matter that UpCodes was copying and disseminating entire safety codes verbatim—using the codes for a different purpose was enough. And UpCodes being a commercial venture didn’t change the outcome either, because UpCodes wasn’t charging for access to the codes.

On the second factor, the nature of the copyrighted work, the Third Circuit joined other appeals courts in finding that laws are facts, and stand at “the periphery of copyright’s core protection.” And this included codes that were “indirectly” incorporated—meaning that they were incorporated into other codes that were themselves incorporated into law.

The third factor looks at the amount and substantiality of the material used. The court said that UpCodes could not have accomplished its purpose—providing access to the current binding laws governing building construction—without copying entire codes, so the copying was justified. Importantly, the court noted that UpCodes was justified in copying optional parts of the codes as well as “mandatory” sections because both help people understand what the law is.

Finally, the fourth factor looks at potential harm to the market for the original work, balanced against the public interest in allowing the challenged use. The court rejected an argument frequently raised by copyright holders—that harm can be assumed any time materials are posted to the internet for all to access. Instead, the court held that when a use is transformative, a rightsholder has to bring evidence of harm, and that harm will be balanced against the public benefit. Because “enhanced public access to the law is a clear and significant public benefit,” and ASTM hadn’t shown significant evidence that UpCodes had meaningfully reduced ASTM’s revenues, the fourth factor was at least neutral. It didn’t matter to the court that ASTM offered to provide copies of legally binding standards to the public on request, because “the mere possibility of obtaining a free technical standard does not nullify the public benefits associated with enhanced access to law.”

This is a good result that will expand the public’s access to the laws that bind us—something that’s more important than ever given recent assaults on the rule of law. In the future, we hope that courts will recognize that codes and standards lose copyright when they are incorporated into law, so that people don’t have to spend years and legal fees litigating fair use just to exercise their rights.

👁 Selling Mass Surveillance | EFFector 38.7

Wed, 04/08/2026 - 12:24pm

Time and time again, we've seen police surveillance suffer from 'mission creep'—technology sold as a way to prevent heinous crimes ends up enforcing traffic violations, tracking protestors, and more. In our latest EFFector newsletter, we're diving into this troubling pattern and sharing all the latest in the fight for privacy and free speech online.


For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This week's issue covers the urgent need to reform NSA spying; a victory for internet access in the Supreme Court; and how license plate readers are normalizing mass surveillance.

Prefer to listen in? EFFector is now available on all major podcast platforms. This time, we're chatting with EFF Privacy Litigation Director Adam Schwartz about some of the recent technologies we've seen suffer from "mission creep." And don't miss the EFFector news quiz! You can find the episode and subscribe on your podcast platform of choice.

[Embedded audio player: https://player.simplecast.com/2ff7f80b-1fbe-4013-97b6-43873a6785ac (content served from simplecast.com)]

Want to help us push back against mass surveillance? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight for privacy and free speech online when you support EFF today!

Digital Hopes, Real Power: How the Arab Spring Fueled a Global Surveillance Boom

Wed, 04/08/2026 - 4:22am

This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the first post here, and the second here.

When people remember the 2011 uprisings across the Middle East and North Africa (MENA), they picture crowded squares, raised phones, and the feeling that the internet had finally shifted the balance of power toward ordinary people. But the past decade and a half is also a story about how governments, companies, and platforms turned those same tools into the backbone of a powerful state surveillance apparatus.

For activists, journalists, and everyday users, that means now living with a constant threat: the phone in your pocket, the platforms you organize on, and the systems you rely on for safety and connection can be weaponized at the flip of a switch. A global surveillance industry has treated repression by many MENA governments as a growth opportunity, and the tactics refined there now shape digital authoritarianism worldwide. This essay traces how that shift unfolded: security agencies upgraded older systems of repression with new surveillance tools and permanent monitoring infrastructure; cybercrime laws and mercenary spyware markets turned digital control into standard operating procedure; and biometrics, facial recognition, and ‘smart city’ projects laid the groundwork for AI‑driven surveillance that now shapes protests, borders, and everyday life far beyond the region. 

Remembering the Arab Spring today means seeing the events of 2011 as both a remarkable moment of movement history when people leveraged networked tools in their fight for freedom and the beginning of a long, grinding effort to turn those same tools into mechanisms of state control.

Old‑School Repression, New‑School Tools

Long before Facebook and Twitter, regimes in places like Egypt and Syria already knew how to crush dissent. They leaned on informant networks, physical surveillance, and wiretaps, backed by emergency laws that let security agencies monitor and detain critics with almost no restraint. Research on the use of surveillance technology in MENA shows that, even before the Arab Spring, states were layering early digital tools like internet monitoring, deep packet inspection, and interception centers on top of that older machinery of control.

At the same time, connectivity was racing ahead. Cheap smartphones and social media suddenly let people share information at scale, coordinate protests, and broadcast abuses in real time. In 2011, EFF described both the excitement around “Facebook revolutions” and the early signs that governments were scrambling to upgrade their capacity to watch and disorganize popular dissent.

After the uprisings, Western critics endlessly debated how much credit to give social media itself. In the background, security agencies across several MENA states reached a much simpler conclusion: if networked communication could help topple a dictator, then they needed to embed themselves deep inside those networks. Analyses of the rise of digital authoritarianism in MENA show how quickly officials pivoted from being surprised by online organizing to building systems to monitor and pre‑empt it.

In the years after 2011, governments across the region poured money into expanding internet monitoring and deep packet inspection, investing heavily in tools that let them systematically watch what people said and did on major platforms. Foreign vendors set up monitoring centers and interception systems that let security agencies block tens of thousands of sites, scrape and analyze social media at scale, monitor activist pages and online communities, and track activists in real time. They took the lesson of 2011 and built a new, pre‑emptive model of digital control, one that assumes the state should see as much as possible, as early as possible.

As we noted in 2011, exporting permanent surveillance infrastructure to already‑abusive governments doesn’t “modernize” public safety; it locks in an architecture of control that is primed to abuse dissidents, journalists, and marginalized communities.

Domestic Lawfare and Cyber-Mercenaries

The surveillance tech stack was only half the story. After the uprisings, a number of governments also rewrote the rules that govern online life. Cybercrime laws, “fake news” provisions, and overbroad public‑order and ‘morality’ offenses gave prosecutors and security agencies legal cover to act with impunity. Governments in Saudi Arabia, Tunisia, Jordan, and Egypt combined counterterrorism, cybercrime, defamation, and protest laws into a legal thicket designed to make online dissent feel dangerous and costly. Morality laws and cybercrime provisions are used to target queer and trans people based on identity and expression.

At the United Nations, a new global cybercrime convention now risks baking this logic into international law. The convention was adopted by the UN General Assembly in late 2024, despite serious human rights concerns raised by civil society. Echoing our partners, EFF warned at the time that the UN cybercrime draft convention remained too flawed to adopt and urged states to reject the draft language because it legitimized expansive surveillance powers and criminalized legitimate expression, security research, and everyday digital practices around the world. 

On paper, these instruments gesture toward “public safety” objectives, but in practice they function as pathways for state security agencies to monitor, prosecute, and silence the communities most at risk. For state-targeted communities, that makes being visible online a calculated risk, not a neutral choice.

But criminal codes are only half the story. Mercenary tech is the other. 

As governments worldwide looked for ways to outpace their critics, a parallel market emerged to help them infiltrate and take over devices. Companies like NSO Group marketed Pegasus and similar tools as off‑the‑shelf capabilities for governments that wanted to hack a target’s cellphones or other devices to read messages, turn on microphones, and monitor entire social networks while bypassing the courts. 

In 2019, UN Special Rapporteur David Kaye called for a global moratorium on the sale and transfer of private surveillance tools until real, enforceable safeguards exist. Two years later, forensic work by Amnesty and media partners showed how the same spyware used to hack the phones of Palestinian human‑rights defenders was used to surveil journalists, activists, lawyers, and political opponents across dozens of countries.

Regional groups responded by demanding an end to the sale of surveillance technology to autocratic governments and security agencies, arguing that you cannot keep selling “lawful intercept” tools into systems where law itself is an instrument of repression. Commercial spyware is at the center of digital repression, not at its margins. Surveillance vendors are not neutral suppliers. Safeguards remain weak, fragmented, or nonexistent in most of the countries buying these tools, yet vendors continue seeking new contracts and new militarized “use cases.” In other words, the companies that design, market, and maintain these systems profit from and help entrench authoritarian power precisely because their products enable this kind of control.

Biometrics, Facial Recognition, and AI‑Powered Surveillance Cities

On top of this rapidly intensifying interception and spyware stack, governments and companies began layering biometrics and face recognition into everyday systems, creating pathways for bulk data collection, automated analysis, and risk profiling. In parts of MENA, national ID schemes, border and migration controls, and centralized biometric databases have been rolled out in environments with weak or captured data‑protection laws, making it easy to link people’s movements, services, and political activity to a single, persistent identifier.

Humanitarian programs are not exempt from this pattern. In Jordan, Syrian refugees have been required to submit iris scans and biometric data to access cash assistance and food, turning “consent” into a precondition for survival. When access to aid depends on enrollment in centralized biometric systems, any breach, misuse, or repurposing of that data can have severe, life‑altering consequences for people who have no realistic way to opt out. Investigations into surveillance‑tech firms complicit in abuses in MENA show that vendors profit from supplying biometric and surveillance tools for migration management and internal security, even when those tools are used in discriminatory or abusive ways.

Mass, indiscriminate surveillance technologies were first piloted in MENA on people who are already criminalized or made vulnerable by poverty, but their use quickly expanded from narrow, security‑framed deployments at borders and checkpoints to routine use in welfare offices, aid distribution sites, and city streets. As hardware for sensors, cameras, and data storage got cheaper and “smart city” surveillance systems promised seamless security and services, it became easier and less politically contentious to keep these systems running everywhere, all the time.

Unlike targeted hacking tools, these broad, city‑wide surveillance infrastructures built on camera networks, persistent sensors, and biometric databases erase any practical line between people under investigation and the broader public, normalizing bulk, indiscriminate monitoring of public space and everyday movement. In the Gulf, facial recognition and dense sensor networks are increasingly built into high‑profile “smart city” and mega‑project plans that lean heavily on biometric and AI‑driven monitoring. These are security‑first development projects where biometric and sensor infrastructures are designed from the outset to embed policing, migration control, and commercial tracking into the urban fabric. In this vision of the Gulf’s “smart city” future—often sold as seamless services and digital opportunity—“smart” is the branding, and pervasive monitoring is the operating principle.

EFF has consistently opposed government use of face recognition and biometric surveillance, in some instances calling for outright bans. In contexts that treat peaceful dissent as a security threat, embedding biometric surveillance into everyday infrastructure locks in a balance of power that favors militarized policing and state control. That infrastructure is now the starting point for a new set of risks. Surveillance systems built over the last decade are being repackaged as the foundation for a new generation of “AI‑enabled” defense and security products. 

Companies that once focused on video management or perimeter security now advertise “defense applications” for AI‑driven situational awareness and threat detection, using computer‑vision models to scan camera feeds, compare against existing watchlists, and flag “suspicious” people or behaviors in real time. Drone and sensor platforms are being upgraded with embedded AI that tracks and classifies targets autonomously and with “drone‑based AI threat detection and intelligent situational awareness,” turning aerial surveillance into a continuous data feed for security agencies and militaries. In smart‑city and defense expos from the Gulf to Europe and North America, similar systems are marketed as neutral efficiency upgrades or tools to “protect critical infrastructure,” even where they are explicitly designed to scale up border enforcement, protest surveillance, and internal security operations.

As these systems are folded into AI‑driven defense products, the line between “civilian” infrastructure and militarized surveillance disappears, turning streets, borders, and aid sites into continuous input for security operations. That is the landscape that human rights and accountability efforts now have to confront.

Templates of Control, Networks of Resistance

The patterns established in heavily securitized MENA states after the Arab Spring now shape how states monitor and crush more recent uprisings, from Iran’s use of location data and facial recognition to track down protesters to long‑running crackdowns elsewhere in the region. This model of “digital authoritarianism,” built on spyware, data‑hungry ID systems, platform control, and emergency‑style security laws, has emerged everywhere from Latin America to Eastern Europe to here in the United States. As the new UN Cybercrime Convention moves toward implementation, its broad offenses and surveillance powers risk turning this ad hoc toolkit into a formal template for cross‑border data‑sharing, repression, and an all‑purpose global surveillance instrument.

For people on the ground, none of this is theoretical. Human‑rights defenders, journalists, and ordinary users across the region face arrest, long prison sentences, and exile based on their digital traces. In that landscape, commercial spyware is not a side issue but part of the core machinery of repression. Pegasus has been used to hack journalists’ phones through zero‑click exploits and compromise human‑rights defenders and watchdog organizations themselves, including staff at Amnesty’s Pegasus Project partners and Human Rights Watch. These deployments give practical effect to the “cybercrime” and “terrorism” frameworks described earlier: person‑by‑person campaigns against particular communities, contacts, and networks, rather than neutral, generalized security measures.

Under these conditions, everyday security becomes a second job. People describe carrying multiple phones, keeping one for relatively “clean” uses and others for riskier conversations, splitting identities across platforms, using coded language, and moving their organizing off mainstream services when possible. Pushing this burden onto users is a political choice: states, platforms, and vendors could build systems that are safe by design; instead, they externalize risk to the people they watch and punish.

Even against that backdrop, civil society organizations have refused to cede the terrain to security agencies and vendors. Regional coalitions have demanded strict export controls and outright bans on selling intrusive surveillance tech to autocratic governments.

Advocates have also pushed companies to do more than box‑ticking “due diligence.” Work with surveillance‑tech firms in the context of migration and border control has repeatedly shown that most are still far from serious human‑rights assessments, let alone willing to turn down these lucrative contracts.

Many of the same governments that have been critical of others on the issue of human rights have hosted or licensed companies that build these tools, in some cases buying similar capabilities for their own security agencies. European authorities, for instance, have investigated FinFisher’s export of spyware “made in Germany” to Turkey and other non‑EU governments. Meanwhile, the NSO Group has at least 22 Pegasus contracts with security and law‑enforcement agencies in 12 EU countries. This is a transnational industry, not a localized problem.

Against near-impossible odds, people continue finding pathways to freedom. The global surveillance sector reinforces the same hierarchies and violence that people have found ways to survive for generations. Queer activists and others at the sharpest edges of this system have had to develop their own forms of resistance, including against biometric and data‑driven targeting. Encryption, circumvention tools, and security training are not silver bullets, but they remain essential for anyone trying to organize, document abuses, or simply exist online with a bit less risk. Resources like EFF’s Surveillance Self‑Defense are one piece of that ecosystem, alongside trainers and groups who have been doing this work on the ground for years.

Remembering the Arab Spring in this context means not only tracing how surveillance expanded in its wake, but lifting up the people and coalitions who are still pushing back against that infrastructure today.

Defending the Future of Digital Dissent

The Arab Spring is often remembered through images of packed squares and hopeful tweets. But living with its aftermath means confronting the surveillance architecture built in its shadow: laws that turn online speech into a crime, spyware and biometric systems that turn phones and faces into tracking beacons, and platform practices that routinely sacrifice the people most at risk. None of that is inevitable, and none of it is confined to one part of the world.

Accountability has to reach both governments and the companies that profit from arming them with these tools. That means pushing for far stronger limits on how surveillance tech is built, sold, and deployed; demanding meaningful transparency when these systems are used; and defending the tools people rely on to communicate and organize safely, including robust encryption and secure channels. It also means taking direction from people in the region who have been navigating and resisting this landscape for years, rather than only paying attention once similar abuses show up elsewhere.

Surveillance itself is transnational: tools are exported, playbooks are copied, and data moves across borders as easily as money. And so we continue our work, documenting abuses, sharing security knowledge, and collectively organizing against these violent systems.

This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

EU Parliament Blocks Mass-Scanning of Our Chats—What's Next?

Tue, 04/07/2026 - 3:24pm

There has been some good news lately about the EU’s so-called Chat Control plan, which would mandate mass scanning and other encryption-breaking measures. EU member states have given up the most controversial idea: a forced requirement to scan encrypted messages. And now, another win for privacy: the EU Parliament has dealt a real blow to voluntary mass-scanning of chats by voting not to prolong an interim derogation from e-Privacy rules in the EU. These rules temporarily allowed service providers to scan private communications.

But no one should celebrate just yet. We said there is more to it, and voluntary scanning is a key part. Unlike in the U.S., where there is no comprehensive federal privacy law, the general and indiscriminate scanning of people’s messages is not legal in the EU without a specific legal basis. The e-Privacy derogation law, which gave (limited) cover for such activities, has now expired. Does that mean mass scanning will stop overnight?  

Not really. 

Companies have continued similar scanning practices during past gaps. Google, Meta, Microsoft, and Snap have already signaled in a joint statement that they will “continue to take voluntary action on our relevant Interpersonal Communication Services.” Whether this indicates continued scanning of our private communication is not entirely clear, but what is clear is that such activity would now risk breaching EU law. Then again, lack of compliance with EU data protection and privacy rules is nothing new for big tech in Europe.

Most importantly, the “Chat Control” proposal for mandatory detection of child sexual abuse material (CSAM) is still alive and being negotiated. It has shifted the focus toward so-called risk mitigation measures, such as problematic age verification and voluntary activities. If platforms are expected to adopt these as part of their compliance, they risk no longer being truly voluntary. While mass scanning may be gone on paper, some broader concerns remain.

So, where does this leave us? The immediate priority is to make sure the expired exception for mass scanning is not revived. At the same time, lawmakers need to pull the teeth from the currently negotiated Chat Control proposal by narrowing risk mitigation measures. This means ensuring that age verification does not become a default requirement and “voluntary activities” are not turned into an expectation to scan our communications.   

As we said before, this is a zombie proposal. It keeps coming back and must not be allowed to return through the back door. 

Triple Header for Privacy’s Defender in New York

Fri, 04/03/2026 - 7:15pm

You’re invited on a journey inside the privacy battles that shaped the internet. EFF’s Executive Director Cindy Cohn has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet.

Join Cindy at three events in New York discussing her bestselling new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, on sale now. All proceeds from the book benefit EFF. Find the full event details below, and RSVP to let us know if you can make it.

April 20 - With Women in Security and Privacy (WISP)

Join Women in Security and Privacy (WISP) and EFF for a conversation featuring American University Senior Professorial Lecturer Chelsea Horne and EFF Executive Director Cindy Cohn as they dive into data security, federal access to data, and your digital rights.


Privacy's Defender with WISP
Kennedys
22 Vanderbilt Avenue, Suite 2400, New York, NY 10017
Monday, April 20, 2026
6:00 pm to 8:00 pm
REGISTER NOW


April 21 - With Julie Samuels at Civic Hall

Join Tech:NYC President and CEO Julie Samuels, in conversation with EFF Executive Director Cindy Cohn, for a discussion about Cindy's work, her new book, and what we're all wondering: Can we have private conversations if we live our lives online?


Privacy's Defender at Civic Hall
Civic Hall
124 E 14th St, New York, NY 10003
Tuesday, April 21, 2026
6:00 pm to 9:00 pm
REGISTER NOW


April 23 - With Anil Dash at Brooklyn Public Library

Join antitech Principal & Cofounder Anil Dash, in conversation with EFF Executive Director Cindy Cohn to discuss Cindy's new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance.


Privacy's Defender at Brooklyn Public Library
Brooklyn Public Library - Central Library, Info Commons Lab
10 Grand Army Plz 1st floor, Brooklyn, NY 11238
Thursday, April 23, 2026
6:00 pm to 7:30 pm
REGISTER NOW


"Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions."
~Edward Snowden, whistleblower; author of Permanent Record

Can't make it? Look for Cindy at a city (or web connection) near you! Find the latest tour dates on the Privacy’s Defender hub or follow EFF for more.

Part memoir and part legal history for the general reader, Privacy’s Defender is a compelling testament to just how much privacy and free expression matter in our efforts to combat authoritarianism, grow democracy, and strengthen human rights. Thank you for being a part of that fight.

Want to support the cause and get a copy of the new book? New or renewing EFF members can preorder one as their annual gift!

The FAA’s “Temporary” Flight Restriction for Drones is a Blatant Attempt to Criminalize Filming ICE

Fri, 04/03/2026 - 6:25pm

Legal intern Raj Gambhir was the principal author of this post.

The Trump administration has restricted the First Amendment right to record law enforcement by issuing an unprecedented nationwide flight restriction preventing private drone operators, including professional and citizen journalists, from flying drones within half a mile of any ICE or CBP vehicle.

In January, EFF and media organizations including The New York Times and The Washington Post responded to this blatant infringement of the First Amendment by demanding that the FAA lift this flight restriction. Over two months later, we’re still waiting for the FAA to respond to our letter.

The First Amendment guarantees the right to record law enforcement. As we have seen with the extrajudicial killings of George Floyd, Renée Good, and Alex Pretti, capturing law enforcement on camera can drive accountability and raise awareness of police misconduct.

A 21-Month-Long “Temporary” Flight Restriction?

The FAA regularly issues temporary flight restrictions (TFRs) to prevent people from flying into designated airspace. TFRs are usually issued during natural disasters, or to protect major sporting events and government officials like the president, and in most cases last mere hours.

Not so with the restriction numbered FDC 6/4375, which started on January 16, 2026. This TFR lasts for 21 months—until October 29, 2027—and covers the entire nation. It prevents any person from flying any unmanned aircraft (i.e., a drone) within 3000 feet, measured horizontally, of any of the “facilities and mobile assets,” including “ground vehicle convoys and their associated escorts,” of the Departments of Defense, Energy, Justice, and Homeland Security. Violators can be subject to criminal and civil penalties, and risk having their drones seized or destroyed.

In practical terms, this TFR means that anyone flying their drone within a half mile of an ICE or CBP agent’s car (a DHS “mobile asset”) is liable to face criminal charges and have their drone shot down. The practical unfairness of this TFR is underscored by the fact that immigration agents often use unmarked rental cars, use cars without license plates, or switch the license plates of their cars to carry out their operations. Nor do they provide prior warning of those operations.

The TFR is an Unconstitutional Infringement of Free Speech

While the FAA asserts that the TFR is grounded in its lawful authority, the flight restriction violates not only multiple constitutional rights but also the agency’s own regulations.

First Amendment violation. As we highlighted in the letter, nearly every federal appeals court has recognized the First Amendment right of Americans to record law enforcement officers performing their official duties. By subjecting drone operators to criminal and civil penalties, along with the potential destruction or seizure of their drone, the TFR punishes—without the required justifications—lawful recording of law enforcement officers, including immigration agents.  

Fifth Amendment violation. The Fifth Amendment guarantees the right to due process, which includes being given fair notice before being deprived of liberty or property by the government. Under the flight restriction, advance notice isn’t even possible. As discussed above, drone operators can’t know whether they are within 3000 horizontal feet of unmarked DHS vehicles. Yet the TFR allows the government to capture or even shoot down a drone if it flies within the TFR radius, and to impose criminal and civil penalties on the operator.

Violations of FAA regulations. In issuing a TFR, the FAA’s own regulations require the agency to “specify[] the hazard or condition requiring” the restriction. Furthermore, the FAA must provide accredited news representatives with a point of contact to obtain permission to fly drones within the restricted area. The FAA has satisfied neither of these requirements in issuing its nationwide ban on drones getting near government vehicles.

EFF Demands Rescission of the TFR

We don’t believe it’s a coincidence that the TFR was put in place in January 2026, at the height of the Minneapolis anti-ICE protests, shortly after the killing of Renée Good and shortly before the shooting of Alex Pretti. After both of those tragedies, civilian recordings played a vital role in contradicting the government’s false account of the events.

By punishing civilians for recording federal law enforcement officers, the TFR helps to shield ICE and other immigration agents from scrutiny and accountability. It also discourages the exercise of a key First Amendment right. EFF has long advocated for the right to record the police, and exercising that right today is more important than ever.

Finally, while recording law enforcement is protected by the First Amendment, be aware that officers may retaliate against you for exercising this right. Please refer to our guidance on safely recording law enforcement activities.

Tech Nonprofits to Feds: Don’t Weaponize Procurement to Undermine AI Trust and Safety

Fri, 04/03/2026 - 1:37pm

While the very public fight continues between the Department of Defense and Anthropic over whether the government can punish a company for refusing to allow its technology to be used for mass surveillance, another branch of the U.S. government is quietly working to ensure that this dispute will never happen again. How? By rewriting government procurement rules.

Using procurement (that is, the processes by which governments acquire goods and services) to accomplish policy goals is a time-honored and often appropriate strategy. The government literally expresses its politics and priorities by deciding where and how it spends its money. To that end, governments can and should give our tax dollars to companies and projects that serve the public interest, such as open-source software development, interoperability, or right to repair. And they should withhold those dollars from those that don’t, like shady contractors with inadequate security systems.

New proposed rules from the General Services Administration, the principal agency in charge of acquiring goods, property, and services for the federal government, are supposed to be primarily an effort to implement one policy priority: steering government funds toward “ideologically neutral” American AI innovation. But the new guidelines do far more than that.

As explained in comments filed today with our partners at the Center for Democracy and Technology, the Protect Democracy Project, and the Electronic Privacy Information Center, the GSA’s guidelines include broad provisions that would make AI tools less safe and less useful. If finally adopted, these provisions would become standard components of every federal contract. You can read the full comments here.

The most egregious example is a requirement that contractors and government service providers must license their AI systems to the government for “all lawful purposes.” Given the government’s loose interpretations of the law, its ability to find loopholes to surveil you, and its willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.

Relatedly, the draft rules require that “AI System(s) must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.” In other words, if a company’s safety guardrails might prevent responding to a government request, the company must disable those guardrails. Given widespread public concerns about AI safety, it seems misguided, at best, to limit safeguards a company deems necessary.

There are myriad other problems with the draft rules, such as technologically incoherent “anti-Woke” requirements. But the overarching problem is clear: much of this proposal would not serve the overall public interest in using American tax dollars to promote privacy, safety, and responsible technological innovation. The GSA should start over.


Double Shot of Privacy's Defender in D.C.

Fri, 04/03/2026 - 11:58am

You’re invited on a journey inside the privacy battles that shaped the internet. EFF’s Executive Director Cindy Cohn has tangled with the feds, fought for your data security, and argued before judges to protect our access to science and knowledge on the internet.

Join Cindy at two events in Washington, D.C. on April 13 and 14 discussing her new book: Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, on sale now. All proceeds from the book benefit EFF. Find the full event details below, and RSVP to let us know if you can make it.

April 13 - With Gigi Sohn at Busboys & Poets

Join American Association of Public Broadband (AAPB) Executive Director Gigi Sohn, in conversation with EFF Executive Director Cindy Cohn, for a discussion about Cindy's work, her new book, and what we're all wondering: Can we have private conversations if we live our lives online?

Privacy's Defender at Busboys & Poets
Busboys & Poets - 14th & V
2021 14th St NW, Washington, DC 20009
Monday, April 13, 2026
6:30 pm to 8:30 pm

REGISTER NOW

April 14 - With Women in Security and Privacy (WISP)

Join Women in Security and Privacy (WISP) and EFF for a conversation featuring American University Senior Professorial Lecturer Chelsea Horne and EFF Executive Director Cindy Cohn as they dive into data security, federal access to data, and your digital rights.

Privacy's Defender with WISP
True Reformer Building - Lankford Auditorium
1200 U St NW, Washington, DC 20009
Tuesday, April 14, 2026
6:00 pm to 8:30 pm

REGISTER NOW

"Privacy’s Defender is a compelling account of a life well lived and an inspiring call to action for the next generation of civil liberties champions."

~Edward Snowden, whistleblower; author of Permanent Record

Can't make it? Look for Cindy at a city (or web connection) near you! Find the latest tour dates on the Privacy’s Defender hub or follow EFF for more.

Part memoir and part legal history for the general reader, Privacy’s Defender is a compelling testament to just how much privacy and free expression matter in our efforts to combat authoritarianism, grow democracy, and strengthen human rights. Thank you for being a part of that fight.

Want to support the cause and get a copy of the new book? New or renewing EFF members can preorder one as their annual gift!

Weakening Speech Protections Will Punish All of Us—Not Just Meta

Thu, 04/02/2026 - 6:43pm

Recently, a California Superior Court jury found that Meta and YouTube harmed a user through some of the features they offered. And a New Mexico jury concluded that Meta deceived young users into thinking its platforms were safe from predation. 

It’s clear that many people are frustrated by big tech companies and perhaps Meta in particular. We too have been highly critical of them and have pushed for years to end their harmful corporate surveillance. So it’s not surprising that a jury felt like Mark Zuckerberg and his company, along with YouTube, needed to be held accountable. 

While it would be easy to claim that these cases set a legal precedent that should make social media companies fearful, that’s not exactly true. And that’s actually a good thing for the internet and its users. 

These jury trials were just an early step in a long road through the court system. These cases will now go up on appeal, where the courts’ rulings about the First Amendment and immunity under Section 230 will likely get reconsidered. 

As we have argued many times before, the First Amendment protects both user speech and the choices platforms make on how to deliver that speech (in the same way it protects newspapers' right to curate their editorial pages as they see fit). Features on social media sites that are designed to connect users cannot be separated from the users’ speech, which is why courts have repeatedly held that these features are indeed protected. 

So while it may be tempting to celebrate these juries’ decisions as a "win" against big tech, in fact the ramifications of lowering First Amendment and immunity standards on other speakers—ones that members of the public actually like, and do not want to punish—are bad. We can’t create less protective speech rules for Meta and Google alone just because we want them held accountable for something else.

As we have often said, much of the anger against these companies arises from people rightfully feeling that these companies harvest and exploit their data, and monetize their lives for crass economic reasons. We therefore continue to urge Congress to pass a comprehensive national privacy law with a private right of action to address these core concerns.

A Baseless Copyright Claim Against a Web Host—and Why It Failed

Thu, 04/02/2026 - 5:34pm

Copyright law is supposed to encourage creativity. Too often, it’s used to extract payouts from others.

Higbee & Associates, a law firm known for sending copyright demand letters to website owners, targeted May First Movement Technology, accusing it of infringing a photograph owned by Agence France-Presse (AFP). The claim was baseless. May First didn’t post the photo. It didn’t even own the website where the photo appeared.

May First is a nonprofit membership organization that provides web hosting and technical infrastructure to social justice groups around the world. The allegedly infringing image was posted years ago by one of May First’s members, a human rights group based in Mexico. When May First learned about the copyright complaint, it ensured that the group removed the image.

That should have been the end of it. Instead, the firm demanded payment.

So EFF stepped in as May First’s counsel and explained why AFP and Higbee had no valid claim. After receiving our response, Higbee backed down.

This outcome is a reminder that targets of copyright demands often have strong defenses—especially when someone else posted the material.

Hosting Content Isn’t the Same as Publishing It

Copyright law treats those who create or control content differently from those who simply provide the tools or infrastructure for others to communicate.

In this case, May First provided hosting services but didn’t post the photo. Courts have long recognized that service providers aren’t direct infringers when they merely store material at the direction of users. In those cases, service providers lack “volitional conduct”—the intentional act of copying or distributing the work.

Copyright law also recognizes that intermediaries can’t realistically police everything users upload. That’s why legal protections like the Digital Millennium Copyright Act safe harbors exist. Even outside those safe harbors, courts still shield service providers from liability when they promptly respond to notices.

May First did exactly what the law expects: it notified its member, and the image came down.

A Claim That Should Have Been Withdrawn Much Sooner

The troubling part of this story isn’t just that a demand was sent. It’s that Higbee and AFP continued to demand money and threaten litigation after May First explained that it was merely a hosting provider and had the image removed.

In other words, the claim was built on shaky legal ground from the start. Once May First explained its role, Higbee should have withdrawn its demand. Individuals and small nonprofits shouldn’t need lawyers just to stop aggressive copyright shakedowns.

Statutory Damages Fuel Copyright Abuse

This isn’t an isolated case—it’s a predictable result of copyright law’s statutory damages regime.

Statutory damages can reach $150,000 per work, regardless of actual harm. That enormous leverage incentivizes firms like Higbee to send mass demand letters seeking quick settlements. Even meritless claims can generate revenue when recipients are too afraid, confused, or resource-constrained to fight back.

This hits community organizations, independent publishers, and small service providers that don’t have in-house legal teams especially hard. Faced with the threat of ruinous statutory damages, many just pay what is demanded.

That’s not how copyright law should work.

Know Your Rights

If you receive a copyright demand based on material someone else posted, don’t assume you’re liable.

You may have defenses based on:

  • Your role as a hosting or service provider
  • Lack of volitional conduct
  • Prompt removal of the material after notice
  • The statute of limitations
  • The copyright owner’s failure to timely register the work
  • The absence of actual damages

Every situation is different, but the key point is this: a demand letter is not the same as a valid legal claim.

Standing Up to Copyright Trolls

May First stood its ground, and Higbee abandoned its demand after we explained the law.

But the bigger problem remains. Copyright’s statutory damages framework enables aggressive enforcement tactics that target the wrong parties and chill lawful online activity.

Until lawmakers fix these structural incentives, organizations and individuals will keep facing pressure to pay up—even when they’ve done nothing wrong.

If you get one of these demand letters, remember: you may have more rights than it suggests.

Print Blocking Won't Work - Permission to Print Part 2

Thu, 04/02/2026 - 1:57pm

This is the second post in a series on 3D print blocking. For the first entry, check out: Print Blocking is Anti-Consumer - Permission to Print Part 1

Legislators across the U.S. are proposing laws to force “blueprint blockers” on 3D printers sold in their states. This mandated censorware is doomed to fail for its intended purpose, but will still manage to hurt the professional and hobbyist communities relying on these tools.

3D printers are commonly used to repair belongings, decorate homes, print figurines, and so much more. It’s not just hobbyists; 3D printers are also used professionally for parts prototyping and fixturing, small-batch manufacturing, and workspace organization. In rare cases, they’ve also been used to print parts needed for firearm assembly.

Many states have already banned the unlicensed manufacture of firearms using computer-controlled machine tools, known as Computer Numerical Control (CNC) machines, and 3D printers. Recently proposed laws seek to impose technical limitations on 3D printers (and in some cases, CNC machines) in the hope of enforcing this prohibition.

This is a terrible idea; these mandates will be onerous to implement and will lock printer users into vendor software, impose one-time and ongoing costs on both printer vendors and users, and lay the foundation for a 3D-print censorship platform to be used in other jurisdictions. We dive more into these issues in the first part of this series.

On a pragmatic level, however, these state mandates are just wishful thinking. Below, we dive into how 3D printing works, why these laws won’t deter the printing of firearms, and how regular lawful use will be caught in the proposed dragnet.

How 3D Printers Work

To understand the impact of this proposed legislation, we need to know a bit about how 3D printers work. The most common printers work similarly to a computer-controlled hot glue gun on a motion platform; they follow basic commands to maintain temperature, extrude (push) plastic through a nozzle, and move a platform. These motions together build up layers to make a final “print.” Modern 3D printers often offer more features like Wi-Fi connectivity or camera monitoring, but fundamentally they are very simple machines.

The basic instructions used by most 3D printers are called Geometric Code, or G-Code. These commands specify very basic motions such as “move from position A to position B while extruding plastic.” The list of commands that will eventually print a part is transferred to the printer as a text file thousands to millions of lines long. The printer dutifully follows these instructions with no overall idea of what it is printing.
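To make that concrete, here is a minimal sketch in Python of the kind of command list a slicer hands to a printer. The mnemonics shown (M104, G28, G1) are standard G-Code, but the coordinates, temperatures, and extrusion values are invented for illustration:

```python
# A toy illustration of the G-Code a printer consumes. The commands
# (M104, G28, G1) are standard mnemonics; the specific coordinates,
# temperatures, and extrusion amounts are made up for illustration.
# Real files run to thousands or millions of lines.

gcode = [
    "M104 S210",              # heat the nozzle to 210 C
    "G28",                    # home all axes
    "G1 Z0.2 F300",           # lower to first-layer height (0.2 mm)
    "G1 X10 Y10 F1500",       # travel move, no extrusion
    "G1 X50 Y10 E2.5 F1200",  # move while extruding plastic (E = filament)
    "G1 X50 Y50 E5.0 F1200",  # another extruding move
]

with open("part.gcode", "w") as f:
    f.write("\n".join(gcode) + "\n")
```

Nothing in this file says “bracket” or “gun part”; the printer just sees motions, which is why scanning G-Code for prohibited objects is so hard.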

While it is possible to write G-Code by hand for either a CNC machine or a 3D printer, the vast majority is generated by computer-aided manufacturing (CAM) software, often called a “slicer” in 3D printing because it divides a 3D model into many 2D slices and then generates motion instructions.
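As a rough sketch of what a slicer does, the toy Python function below “slices” a rectangular block into layers and emits perimeter moves for each one. Real slicers handle arbitrary meshes, infill, supports, and cumulative extrusion math; the dimensions and feed rates here are arbitrary examples:

```python
# A schematic "slicer": convert a rectangular solid into per-layer
# perimeter moves. Dimensions and feed rates are arbitrary examples,
# and E values are simplified (real slicers compute cumulative extrusion).

LAYER_HEIGHT = 0.2                       # mm per layer
WIDTH, DEPTH, HEIGHT = 40.0, 20.0, 10.0  # a simple rectangular part

def slice_rectangle() -> list[str]:
    lines = ["G28"]  # home all axes
    corners = [(WIDTH, 0.0), (WIDTH, DEPTH), (0.0, DEPTH), (0.0, 0.0)]
    for layer in range(1, round(HEIGHT / LAYER_HEIGHT) + 1):
        z = layer * LAYER_HEIGHT
        lines.append(f"G1 Z{z:.2f} F300")  # step up to the next layer
        lines.append("G1 X0 Y0 F1500")     # travel to the start corner
        for x, y in corners:               # trace the perimeter, extruding
            lines.append(f"G1 X{x:.1f} Y{y:.1f} E1.0 F1200")
    return lines

print("\n".join(slice_rectangle()))
```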

This same general process applies to CNC machines, which use G-Code instructions to guide a metal-removal tool. CNC machines have been included in previous prohibitions on firearm manufacturing and file distribution and are also targeted in some of these bills.

There are other types of 3D printers, such as those that print concrete, resin, metal, chocolate, and other materials using slightly different methods. All of these would be subject to the proposed requirements, however unlikely it is that anyone could do harm with a gun made of chocolate.

Simple rectangular 3D model for a test fit.

Part of a 173,490-line G-Code file produced by a slicer for the simple rectangular model.

How is Firearm Detection Supposed to Work?

Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and must implement firearm detection algorithms either on the printer itself or in slicer software. These algorithms must detect firearm files using a maintained database of existing models. Vendors must then verify that printers are on the allow-list maintained by the state before they can offer them for sale.
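The bills do not specify how detection must work, but the database language suggests something like the following sketch: fingerprint each uploaded model and look it up in a blocklist. The hash-based matching and the blocked_hashes set are our assumptions for illustration, not anything named in the legislation:

```python
# A minimal sketch of database matching as these proposals seem to
# imagine it: fingerprint the uploaded model file and look it up in a
# blocklist. The exact-match scheme is an assumption; the bills name
# no algorithm.

import hashlib

# Hypothetical SHA-256 digests of prohibited model files.
blocked_hashes: set[str] = set()

def is_blocked(model_path: str) -> bool:
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in blocked_hashes
```

Anything fuzzier than exact matching, such as geometric similarity scoring, demands far more computation, which matters for the hardware constraints discussed below.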

Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer ends support. Owners of existing noncompliant 3D printers in regulated states will be unable to resell their printers on the secondary market legally.

What Will Actually Happen?

While the proposed laws allow for scanning to happen on either the printer itself or in the slicer software, the reality is more complicated. 

The computers inside many 3D printers have very limited computational and storage ability; it will be impossible for the printer’s computer to render the G-Code into a 3D model to compare with the database of prohibited files. Thus the only way to achieve this through the machine would be to upload all printer files to a cloud comparison tool, creating new delays, errors, and unacceptable invasions of privacy.

Many vendors will instead choose to permanently link their printers to a specific slicer that implements firearm detection. This requires cryptographic signing of G-Code to ensure only authorized prints are completed, and will lock 3D printer owners into the slicer chosen by their printer vendor.
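Here is a sketch of what that signing scheme could look like, using Ed25519 signatures from the Python cryptography library. The key handling is simplified, and this is our illustration of the general mechanism rather than any vendor's actual design:

```python
# Sketch: the vendor's slicer signs sliced output with a private key,
# and printer firmware refuses any file whose signature does not verify
# against the vendor's embedded public key. Simplified for illustration.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Slicer side (vendor-controlled key; in practice kept in the slicer).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # baked into printer firmware

gcode = b"G28\nG1 Z0.2 F300\n"          # stand-in for a sliced file
signature = private_key.sign(gcode)

# Printer side: verify before running the job.
try:
    public_key.verify(signature, gcode)
    print("Signature valid: printer runs the file.")
except InvalidSignature:
    print("Unsigned or modified file: printer refuses to print.")
```

Note that the same check that rejects tampered files also rejects output from any third-party slicer, which is exactly the lock-in problem described above.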

Regardless of the specifics of their implementation, these algorithms will interfere with 3D printers’ ability to print other parts without actually stopping the manufacture of guns. It takes very little skill for a user to make slight design tweaks to either a model or its G-Code to evade detection. One can also design incomplete or heavily adorned models which can be made functional with some post-print alterations. While this would be pioneered by skilled users—like the ones who designed today’s 3D printed guns—once the design and instructions are out there, anyone able to print a gun today will be able to follow suit.
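A toy demonstration of how fragile exact-match scanning is, continuing the hypothetical hash-based scheme sketched earlier: shifting a single coordinate by a thousandth of a millimeter yields a completely different fingerprint.

```python
# Why exact matching fails: any trivial edit produces a new digest.
import hashlib

original = b"G1 X50 Y10 E2.5 F1200\n"
tweaked = b"G1 X50.001 Y10 E2.5 F1200\n"  # imperceptible design change

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())  # entirely different fingerprint
```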

Firearm part identification features also impose costs on 3D printer manufacturers, and hence on their end consumers. Manufacturers must develop or license these costly algorithms and continuously maintain and update both the algorithm and the database of firearm models. Older printers that cannot comply will not be able to be legally resold in states that ban them, creating additional e-waste.

While those wishing to create guns will still be able to do so, people printing other functional parts will likely be caught up in these algorithms, particularly things like film props, kids’ toys, and decorative models, which often closely resemble real firearms or firearm components.

What Are The Impacts of These Changes?

Technological restrictions on what manufacturing tools can do are harmful for many reasons. EFF is particularly concerned with this regulation locking 3D printers to proprietary vendor software. Vendors will be able to use the mandate to support only in-house materials, locking users into future purchases. Vendor slicer software is often based on out-of-date open source software, and forcing users onto it deprives them of new features, or of the use of their printer altogether if the vendor goes out of business. At worst, some of these bills would make it a misdemeanor to fix those problems and gain full control of your printer.

File-scanning frameworks required by this regulation will lay the foundation for future intrusions on privacy and freedom. The requirement could be co-opted to scan prints for copyright violations, abused in the same way as DMCA takedowns are, or used to suppress models considered obscene under a patchwork of definitions. What if you were unable to print a repair part because the vendor asserted the model violated its trademark? What if your print was deemed obscene?

Regardless of your position on current firearm prohibitions, we should all fight back against this effort to force technological restrictions on 3D printers, and legislators must abandon the idea. These laws impose real costs and potential harms on lawful users, lay the groundwork for future censorship, and simply won’t deter firearm printing.

Print Blocking is Anti-Consumer - Permission to Print Part 1

Thu, 04/02/2026 - 1:56pm

This is the first post in a series on 3D print blocking; for the next entry, check out Print Blocking Won't Work - Permission to Print Part 2.

When legislators give companies an excuse to write untouchable code, it’s a disaster for everyone. This time, 3D printers are in the crosshairs across a growing number of states. Even if you’ve never used one, you’ve benefited from the open commons these devices have created—which is now under threat.

This isn’t the first time we’ve gone to bat for 3D printing. These devices come in many forms and can construct nearly any shape from a variety of materials. This has made them crucial for everything from life-saving medical equipment, to little Iron Man helmets for cats, to everyday repairs. For decades these devices have been a proven engine for innovation, democratizing a sliver of manufacturing for hobbyists, artists, and researchers around the world.

For us all to continue benefiting from this grassroots creativity, we need to guard against the type of corporate centralization that has undermined so much of the promise of the digital era. Unfortunately, some state legislators are looking to repeat old mistakes by demanding printer vendors install an enshittification switch.

In the U.S., three states have recently proposed that commercial 3D-printer manufacturers must ensure their printers work only with their software, and must check each print for forbidden shapes—for now, any shape vendors consider too gun-like. The 2D equivalent of these “print-blocking” algorithms would be demanding that HP prevent you from printing any harmful messages or recipes. Worse still, some bills would introduce criminal penalties for anyone who bypasses this censorware, or for anyone simply reselling their old printer without these restrictions.

If this sounds like Digital Rights Management (DRM) to you, you’ve been paying attention. This is exactly the sort of regulation that creates a headache and privacy risk for law-abiding users, is a gift for would-be monopolists, and can be totally bypassed by the lawbreakers actually being targeted by the proposals.

Ghosting Innovation

“Print blocking” is currently coming for an unpopular target: ghost guns. These are privately made firearms (PMFs) that are typically harder to trace and can bypass other gun regulations. Contrary to what the proposed regulations suggest, these guns are often not printed at home, but purchased online as mass-produced build-it-yourself kits and accessories.

Scaling production with consumer 3D printers is expensive, error-prone, and relatively slow. Successfully making a working firearm with just a printer still requires some technical know-how, even as 3D printers improve beyond some of these limitations. That said, many people have concerns about unlicensed firearm production and sales, which is exactly why these practices are already illegal in many states, including all of the states proposing print blocking.

Mandating algorithmic print-blocking software on 3D printers and CNC machines is just wishful thinking. People illegally printing ghost guns and accessories today will have no qualms about undetectably breaking another law to bypass censoring algorithms. That’s if they even need to—the cat-and-mouse game of detecting gun-like prints might be doomed from the start, as we dive into in this companion post.

Meanwhile, the overwhelming majority of 3D-printer users do not print guns. Punishing innovators, researchers, and hobbyists because of a handful of outlaws is bad enough, but this proposal does it by also subjecting everyone to the anticompetitive and anticonsumer whims of device manufacturers.

Can’t make the DRM thing work

We’ve been railing against Digital Rights Management (DRM) since the DMCA made it a federal crime to bypass code restricting your use of copyrighted content. DRM has since been weaponized by manufacturers to gain greater leverage over their customers and enforce anti-competitive practices.

The same enshittification playbook applies to algorithmic print blockers. 

Restricting devices to manufacturer-provided software is an old tactic from the DRM playbook, one that puts you in a precarious spot where you have to bend to the whims of the manufacturer. Only Windows 11 supported? You need a new PC. Tools are cloud-based? You need a solid connection. The company shutters? You now own an expensive paperweight—which used to make paperweights.

It also means useful open source alternatives that fit your needs better than the main vendor’s tools are off the table. The 3D-printer community got a taste of this recently, when manufacturer Bambu Lab pushed out restrictive firmware updates complicating the use of open source software like OrcaSlicer. Community blowback forced some accommodations so these alternatives could remain viable. Under the worst of these laws, such accommodations and other workarounds would be outlawed, with criminal penalties.

People are right to be worried about vendor lock-in, beyond needing the right tool for the job. Making you reliant on their service allows companies to gradually sour the deal. Sometimes this happens visibly, with rising subscription fees, new paywalls, or planned obsolescence. It can also be more covert, like collecting and selling more of your data, or cutting costs by neglecting security and bug fixes.

With expensive hardware on the line, they can get away with anything short of what would make you pay through the nose to switch brands.

Indirectly, this sort of print-blocking mandate is a gift to the incumbent businesses making these printers. It raises the upfront and ongoing costs for smaller companies selling 3D printers, including those producing new or specialized machines. The result is fewer and more generic options from a shrinking number of major incumbents for any customer not interested in building their own 3D printer.

Reaching the Melting Point

It’s already clear these bills will be bad for anyone who currently uses a 3D printer, and having alternative software criminalized is particularly devastating for open source contributors. These impacts on manufacturers and consumers culminate in a major blow to the entire ecosystem of innovation we have benefited from for decades.

But this is just the beginning. 

Once the infrastructure for print blocking is in place, it can be broadened. This isn’t a block on a single, static design, the way some copiers block reproductions of currency. Banning a category of design based on its function is a moving target, requiring a constantly expanding blacklist. Nothing in this legislation restricts those updates to firearm-related designs. If we let proposals like this pass, we open the door for other powerful interests to expand the database of forbidden shapes.

Intellectual property is a clear expansion risk. This could look like Nintendo blocking a Pikachu toy, John Deere blocking a replacement part, or even patent trolls forcing the hand of hardware companies. Repressive regimes, here or abroad, could likewise block the printing of “extreme” or “obscene” symbols, or tools of resistance like popular anti-ICE community whistles.

Finally, even algorithmic censorship aimed at the most sympathetic targets will produce false positives—blocking 3D-printer users’ lawful expression. This is something proven again and again in online moderation. Whether by mistake or by design, a platform that has you locked in has little incentive to offer remedies for this censorship. And these new incentives for companies to surveil each print can impose a substantial chilling effect on what users choose to create.

While 3D printers aren’t in most households, this form of regulation would set a dangerous precedent. Government-mandated on-device censors maintained by corporate algorithms are bad policy. They won’t work. They consolidate corporate power. They criminalize and block the grassroots innovation and empowerment that has defined the 3D-printer community. We need to roundly reject these onerous restraints on creation.

Google and Amazon: Acknowledged Risks, And Ignored Responsibilities

Thu, 04/02/2026 - 11:12am

In late 2024, we urged Google and Amazon to honor their human rights commitments, to be more transparent with the public, and to take meaningful action to address the risks posed by Project Nimbus, their cloud computing contract that includes Israel’s Ministry of Defense and the Israeli Security Agency. Since then, a stream of additional reporting has reinforced that our concerns were well-founded. Yet despite mounting evidence of serious risk, both companies have refused to take action. 

Amazon has completely ignored our original and follow-up letters. Google, meanwhile, has repeatedly promised to respond to our questions. Yet more than a year and a half later, we have seen no meaningful action by either company. Neither approach is acceptable given the human rights commitments these companies have made.

Additionally, Microsoft required a public leak before it felt compelled to investigate and confirm that its client, the Israeli government, was indeed misusing its services in ways that violated Microsoft’s public commitments to human rights. This should have given both Google and Amazon an additional reason to take a close look and let the public know what they find, but nothing of the sort materialized.

Google: Known Risks, No Meaningful Action

Google’s own internal assessments warned of the risks associated with Project Nimbus even before the contract was signed. Major news outlets have reported that Google provides the Israeli government with advanced cloud and AI services under Project Nimbus, including large-scale data storage, image and video analysis, and AI model development tools. These capabilities are exceptionally powerful, highly adaptable, and well suited for surveillance and military applications.

Despite those warnings, and the multiple reports since then about human rights abuses by the very parts of the Israeli government that use Google’s and Amazon’s services, the companies continue business as usual. They seem to have taken the position that they need not change course, or even publicly explain themselves, unless the media or other external organizations present definitive proof that their tools have been used in specific violations of international human rights or humanitarian law. While that conclusive public evidence has not yet emerged for all the companies, the risks are obvious, and the companies are aware of them. Instead of conducting robust, transparent human rights due diligence, Amazon and Google continually choose to look the other way.

Google’s own internal assessments undermine its public posture. According to reporting, Google’s lawyers and policy staff warned that Google Cloud services could be linked to the facilitation of human rights abuses. In the same report, Google employees also raised concerns that the company’s cloud and AI tools could be used for surveillance or other militarized purposes, which seems very likely given the Israeli government’s long-standing reliance on advanced data-driven systems to control and monitor Palestinians.

Google has publicly claimed that Project Nimbus is “not directed at highly sensitive, classified, or military workloads” and is governed by its standard Acceptable Use Policies. Yet reporting has revealed conflicting representations about the contract’s terms, including indications that the Israeli government may be permitted to use any services offered in Google’s cloud catalog for any purpose. Google has declined to publicly resolve these contradictions, and its lack of transparency is problematic. The gap between what Google says publicly and what it knows internally should alarm anyone who hopes to take the company’s human rights commitments seriously.

Google’s and Amazon’s AI Principles Require Proactive Action

Even after being revised last year, Google’s AI Principles continue to commit the company to responsible development and deployment of its technologies, including implementing appropriate human oversight, due diligence, and safeguards to mitigate harmful outcomes and align with widely accepted principles of international law and human rights. While the updated principles no longer explicitly commit Google to avoiding entire categories of harmful use, they still require the company to assess foreseeable risks, employ rigorous monitoring and mitigation measures, and act responsibly throughout the full lifecycle of AI development and deployment.

Amazon has similarly committed to responsible AI practices through its Responsible AI framework for AWS services. The company states that it aims to integrate responsible AI considerations across the full lifecycle of AI design, development, and operation, emphasizing safeguards such as fairness, explainability, privacy and security, safety, transparency, and governance. Amazon also says its AI services are designed with mechanisms for monitoring and risk mitigation to help prevent harmful outputs or misuse, and to enable responsible deployment across a range of use cases.

Here, the risks are neither speculative nor remote. They are foreseeable, well-documented, and exacerbated by the context in which Project Nimbus operates: an ongoing military campaign marked by widespread civilian harm and credible allegations of grave human rights violations, including genocide. In such circumstances, waiting for definitive proof is not responsible risk management; it is willful blindness.

Modern cloud and AI systems are designed to be flexible, customizable, and deployable at scale, often beyond the vendor’s direct visibility. That reality is precisely why human rights due diligence must be proactive. Waiting for a leaked document or whistleblower account demonstrating direct misuse, as occurred in Microsoft’s case, means waiting until harm has already been done.

Microsoft’s Experience Should Have Been Warning Enough

As noted above, the recent revelations about Microsoft’s technologies being misused in violation of Microsoft’s commitments by the Israeli military illustrate the dangers of this wait-and-see approach. Google and Amazon should not need a similar incident to recognize what is at stake. The demonstrated misuse of comparable technologies, combined with Google’s and Amazon’s own knowledge of the risks associated with Project Nimbus, should already be sufficient to trigger action.

The appropriate response is to act responsibly and proactively.

Google and Amazon should immediately:

  • Conduct and publish an independent human rights impact assessment of Project Nimbus.
  • Disclose how they evaluate, monitor, and enforce compliance with their AI Principles in high-risk government contracts, including and especially in Project Nimbus.
  • Commit to suspending or restricting services where there is a credible risk of serious human rights harm, even if definitive proof of misuse has not yet emerged.

Waiting Is a Choice, and Not One That Protects Human Rights

Google and Amazon publicly emphasize their commitment to responsible AI and respect for human rights. Those commitments are meaningless if they apply only once harm is undeniable and irreversible. In conflict settings, especially where secrecy and information asymmetry are the norm, companies must act on credible risk, not perfect evidence.

Google and Amazon have the knowledge, the leverage, and the responsibility to act now. Choosing not to is still a choice, and one that carries real consequences for people whose lives are already at risk.

EFF’s Submission to the UN OHCHR on Protection of Human Rights Defenders in the Digital Age

Thu, 04/02/2026 - 7:29am

Governments around the world are adopting new laws and policies aimed at addressing online harms, including laws intended to curb cybercrime and disinformation, and ostensibly protect user safety. While these efforts are often framed as necessary responses to legitimate concerns, they are increasingly being used in ways that restrict fundamental rights.

In a recent submission to the United Nations Office of the High Commissioner for Human Rights, we highlighted how these evolving regulatory approaches are affecting human rights defenders (HRDs) and the broader digital environment in which they operate.

Threats to Human Rights Defenders

Across multiple regions, cybercrime and national security laws are being applied to prosecute lawful expression, restrict access to information, and expand state surveillance. In some cases, these measures are implemented without adequate judicial oversight or clear safeguards, raising concerns about their compatibility with international human rights standards.

Regulatory developments in one jurisdiction are also influencing approaches elsewhere. The UK’s Online Safety Act, for example, has contributed to the global diffusion of “duty of care” frameworks. In other contexts, similar models have been adopted with fewer protections, including provisions that criminalize broadly defined categories of speech or require user identification, increasing risks for those engaged in the defense of human rights.

At the same time, disruptions to internet access—including shutdowns, throttling, and geo-blocking—continue to affect the ability of HRDs to communicate, document abuses, and access support networks. These measures can have significant implications not only for freedom of expression, but also for personal safety, particularly in situations of conflict or political unrest.

The expanded use of digital surveillance technologies further compounds these risks. Spyware and biometric monitoring systems have been deployed against activists and journalists, in some cases across national borders. These practices result in intimidation, detention, and other forms of retaliation.

The practices of social media platforms can also put human rights defenders—and their speech—at risk. Content moderation systems that rely on broadly defined policies, automated enforcement, and limited transparency can result in the removal or suppression of speech, including documentation of human rights violations. Inconsistent enforcement across languages and regions, as well as insufficient avenues for redress, disproportionately affects HRDs and marginalized communities.

Putting Human Rights First

These trends underscore the importance of ensuring that regulatory and corporate responses to online harms are grounded in human rights principles. This includes adopting clear and narrowly tailored legal frameworks, ensuring independent oversight, and providing effective safeguards for privacy, expression, and association.

It also requires meaningful engagement with civil society. Human rights defenders bring essential expertise on the local and contextual impacts of digital policies, and their participation is critical to developing effective and rights-respecting approaches.

As digital technologies continue to shape civic space, protecting the individuals and communities who rely on them to advance human rights remains an urgent priority.

You can read our full submission here.

Digital Hopes, Real Power: From Revolution to Regulation

Wed, 04/01/2026 - 9:20am

This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings.

From Russia—where wartime censorship and more stringent platform controls have choked dissenting voices—to Nigeria, with its aggressive takedown orders turning social media into political battlegrounds, and to Turkey, where sweeping “disinformation” laws have made platforms heavily policed spaces, freedom of expression online is under attack. Per Freedom House’s 2023 Freedom on the Net report, 66% of internet users live in countries where political or social sites are blocked, and 78% live in countries where people have been arrested for online posts. New social media regulations have emerged in dozens of countries in the past year alone.

The online landscape looks markedly different than it did fifteen years ago. Back then, social media was still new and largely free from legal restrictions: platforms moderated content in response to user reports, governments rarely targeted them directly, and blocks (when they happened) were temporary, with censorship mostly focused on whole websites that VPNs or proxies could easily bypass. The internet was far from free, but governments’ crude tactics left space for circumvention.

Those early restrictions, as crude as they were, marked the start of a rapid evolution in online censorship. Governments like Thailand, which blocked thousands of YouTube videos in 2007 over critical content, and Turkey, which demanded takedowns from YouTube before blocking the site entirely, tested legal and technical pressures to mute dissent and force platforms’ compliance. By 2011, governments weren't just reacting—they had learned to pressure platforms into becoming instruments of state censorship, shifting their playbooks from blunt blocks to sophisticated systems of control that simple VPNs could no longer reliably bypass. Governments across the region were watching closely, and by the time the 2011 uprisings began, they were prepared to respond.

Looking Back

After learning that a Facebook page—We Are All Khaled Said, honoring a young man killed by police brutality—sparked Egypt’s street protests, Western media hailed online platforms as engines of democracy. Revolution co-creator Wael Ghonim told a journalist: “This revolution started on Facebook.” That claim was debated and contested for years; critically, Facebook had suspended the page two months earlier over pseudonyms violating its real-name policy, restoring it only after advocates intervened. 

Once the protests moved to the streets, Egypt’s government—alert to social media’s power—quickly blocked Facebook and Twitter, then enacted a near-total shutdown (more on that in part 4 of this series). As history shows, the measures didn’t stop the revolution, and Egyptian president Hosni Mubarak stepped down. For a brief moment, freedom appeared to be on the horizon. Unfortunately, that moment was short-lived.

Egypt’s Digital Dystopia

Just as the Egyptian military government quashed revolution in the streets, it also shut down online civic space. Today, Egypt’s internet ranks low on markers of internet freedom. The military government that has ruled Egypt since 2013 has imprisoned human rights defenders and enacted laws—including 2015’s Counter-terrorism Law and 2018’s Cybercrime Law—that grant the state broad authority to suppress speech and prosecute offenders.

The 2018 law demonstrates the ease with which cybercrime laws can be abused. Article 7 of the law allows the blocking of websites that constitute “a threat to national security” or to the “national economy.” The Association of Freedom of Thought and Expression (AFTE) has criticized the law’s loose definition of “national security” as “everything related to the independence, stability, security, unity and territorial integrity of the homeland.” Notably, individuals can also be penalized—and sentenced to up to six months’ imprisonment—for accessing banned websites.

Article 25, which prohibits the use of technology to “infringe on any family principles or values in Egyptian society,” and Article 26, which prohibits the dissemination of material that “violates public morals,” have been used in recent years to prosecute young people who use social media in ways the government disapproves of. Many of those prosecuted have been young women; for instance, belly dancer Sama Al Masry was sentenced to three years in prison and fined 300,000 Egyptian pounds under Article 26.

Beyond Egypt: Regional Trends

Egypt’s trajectory reflects a wider regional and global pattern. In the years following the uprisings, governments moved quickly to formalize legal authority over digital space, often under the banner of combating cybercrime, terrorism, or “false information.” These laws often contain vaguely worded provisions criminalizing “misuse of social media” or “harming national unity,” giving authorities wide discretion to prosecute speech.

In Qatar and Bahrain, a social media post can result in up to five years in jail. In 2018, prominent Bahraini human rights defender Nabeel Rajab was convicted of “spreading false rumours in time of war,” “insulting public authorities,” and “insulting a foreign country” for tweets he posted about the killing of civilians in Yemen, and was sentenced to five years’ imprisonment.

Two years later, Qatar amended its penal code to set criminal penalties for spreading “fake news.” Article 136 (bis) criminalizes broadcasting, publishing, or republishing “rumors or statements or false or malicious news or sensational propaganda, inside or outside the state, whenever it is intended to harm national interests or incite public opinion or disturb the social or public order of the state,” with a punishment of up to five years in prison and/or a fine of 100,000 Qatari riyals. The penalty is doubled if the crime is committed in wartime.

Now, as war has once again reached the region, these laws are being put to the test. Bahraini authorities have arrested at least 100 people in relation to protests or expression related to the war, while Qatar has arrested more than 300 people on charges of spreading “misleading information.”

And in the UAE, at least 35 people—most or all of whom are foreign nationals—have been arrested and “accused of spreading misleading and fabricated content online that could harm national defence efforts and fuel public panic,” according to the Times of India. The arrests fall under the UAE’s 2022 Federal Decree Law No. 34 on Combating Rumours and Cybercrimes which—says Human Rights Watch—is, along with the country’s Penal Code, “used to silence dissidents, journalists, activists, and anyone the authorities perceived to be critical of the government, its policies, or its representatives.”

From Regional Practice to Global Pattern

Today roughly four out of five countries worldwide have enacted cybercrime legislation, a dramatic expansion over the past decade, with many governments adopting or revising such laws in the years following the Arab uprisings. 

Outside the region, other nations have repurposed these laws to police speech. In Nigeria, journalists have been detained under the Cybercrime Act, with dozens of prosecutions documented since 2015. Bangladesh’s Digital Security Act has been used in thousands of cases—including hundreds against journalists—while in Uganda, authorities have prosecuted political critics under computer misuse laws for social media posts. 

Cybercrime laws are only one piece of a broader toolkit that governments now deploy to control digital spaces. Over the past decade, authorities have introduced sweeping “disinformation” laws, platform liability rules, age verification laws, and data localization requirements that force companies to store data domestically or appoint legal representatives within national jurisdictions. These measures give governments leverage over global technology firms, enabling them to demand faster content removals, obtain user data, or threaten steep fines and throttling if platforms fail to comply. Rather than relying solely on blunt instruments like blocking entire websites, states increasingly govern speech through layered regulatory systems that pressure platforms to police users on the state’s behalf.

The platforms too have changed. The same social media companies that were once championed as tools of democratic mobilization now operate in more constrained environments—and often act as willing participants in repressing speech. Facing financial penalties and the prospect of being blocked entirely, many companies expanded compliance with takedown requests after 2011, as can be seen in the companies’ own transparency reports. They later invested heavily in automated technologies that remove vast quantities of content before it is ever publicly available.

Rights groups around the world, including EFF, have warned that these dynamics disproportionately impact historically marginalized and vulnerable groups, as well as journalists and other human rights defenders. Research by the Palestinian digital rights organization 7amleh and reporting by Human Rights Watch have documented how content moderation policies, government pressure, and opaque enforcement mechanisms increasingly converge—leaving activists, journalists, and human rights defenders caught between state censorship and platform governance.

The New Architecture of Repression

Looking back now, it’s clear that, fifteen years ago, governments were caught off guard. They crudely blocked platforms, shut down networks, and scrambled to contain movements they did not fully understand. But in the years since, states have systematically adapted, transforming what were once reactive measures into durable systems of control.

Today’s controls are embedded in law, outsourced to platforms, and justified through the language of security, safety, and order. Cybercrime statutes, disinformation frameworks, and platform regulations form a layered architecture that allows states to shape online expression at scale while maintaining a veneer of legality. In this system, repression is often procedural, bureaucratic, and continuous.

The question is no longer whether the internet can enable dissent, but whether it can still sustain it under these conditions.

This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

Welcome, Daily Show Viewers! Learn More About EFF and Privacy's Defender

Mon, 03/30/2026 - 11:17pm
About EFF

The Electronic Frontier Foundation is the leading nonprofit defending civil liberties in the digital world. EFF’s work to protect your rights on the internet is supported by the more than 30,000 members who have joined our mission by donating this year alone.

For over 35 years, our lawyers, activists, and technologists have been thinking about the next big thing in tech before anyone else—whether that’s age verification, AI, or Palantir. Whatever causes you fight for, you rely on the internet to do so. And EFF protects the infrastructure of rebellion. 

JOIN EFF TODAY

To learn more about our work, follow EFF on social media and subscribe to the EFFector newsletter, which covers the ways the internet and online rights are changing and what that means for you. And join EFF to support our fight—because if you use technology, this fight is yours.

Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance, by Cindy Cohn

In Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press), EFF Executive Director Cindy Cohn weaves her own personal story with her role as a leading legal voice representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders.

"Let's Sue the Government" T-Shirt

Sometimes our supporters call EFF a merch store with a law firm attached because our stickers, hoodies and shirts are so well known. Our "Let's Sue the Government" shirt tells people: When your rights are at risk, you don’t stay quiet.

EFF's History

In early 1990, the U.S. Secret Service conducted raids tracking the distribution of a document illegally copied from a telecom company’s computer; one of those targeted was an Austin, TX publisher named Steve Jackson, whose computers were seized but later returned without any charges filed. Jackson’s business had suffered, and he discovered that the government had read and deleted his customers’ emails. He sought a civil liberties organization to represent him for this violation of his rights, but no existing organization understood the technology well enough to grasp the free speech and privacy issues at hand.

But a few well-informed technologists did understand. Mitch Kapor, former president of Lotus Development Corp.; John Perry Barlow, a Wyoming cattle rancher and lyricist for the Grateful Dead; and John Gilmore, an early employee of Sun Microsystems, with help from Apple co-founder Steve Wozniak, decided to do something about it – and so the Electronic Frontier Foundation was born in July 1990. The Steve Jackson Games case turned out to be an extremely important one for the early internet: For the first time, a court held that electronic mail deserves at least as much protection as telephone calls.

EFF's original logo, in use from 1990-2018

EFF continued to take on cases that set important precedents for the treatment of rights in cyberspace. In our second big case, Bernstein v. U.S. Department of Justice, the United States government prohibited a University of California mathematics Ph.D. student from publishing online an encryption program he had created. Years earlier, the government had placed encryption on the United States Munitions List, alongside bombs and flamethrowers, as a weapon to be regulated for national security purposes. Our lawsuit established that written software code is speech protected by the First Amendment, and the court further ruled that the export control laws on encryption violated Bernstein’s rights by prohibiting his constitutionally protected speech. Now everyone has the right to “export” encryption software—by publishing it on the Internet—without prior permission from the U.S. government.

Since then we’ve fought against government and corporate abuses of our Constitutional rights, on issues including warrantless wiretapping by intelligence agencies, the panopticon of street-level surveillance that seeks to track everything we do, and the corporate surveillance that turns our clicks into their commodity, as well as issues of antitrust, intellectual property, artificial intelligence, cybersecurity, and much more. We are lawyers, technologists, activists, and lobbyists who work every day for the privacy, security, and dignity of all who use technology. If you use technology, this fight is yours, too.

EFF's Greatest Hits

While many early battles over the right to communicate freely and privately stemmed from government censorship, today EFF is fighting for users on many other fronts as well.

Today, certain powerful corporations are attempting to shut down online speech, prevent new innovation from reaching consumers, and facilitate government surveillance. We challenge corporate overreach just as we challenge government abuses of power.

JOIN EFF TODAY

We also develop technologies that help individuals protect their privacy and security online, which our technologists build and release freely for anyone to use.

In addition, EFF is engaged in major legislative fights, beating back digital censorship bills disguised as intellectual property proposals, opposing attempts to force companies to spy on users, championing reform bills that rein in government surveillance, documenting police technology and where it's used, helping users protect themselves from surveillance, and much more.

Learn more about some of EFF’s most impactful work—download a PDF of our new catalog, “Now That’s What I Call Digital Rights!”

EFF's Cindy Cohn on The Daily Show! Tonight Monday, March 30

Mon, 03/30/2026 - 11:12am

EFF Executive Director Cindy Cohn will be on The Daily Show tonight, Monday March 30, at 11 pm ET and PT, speaking with host Jon Stewart. Cindy will discuss her long history of fighting for privacy online and her new book, Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance (MIT Press). The book details her own personal story alongside her role representing the rights and interests of technology users, innovators, whistleblowers, and researchers during the Crypto Wars of the 1990s, battles over NSA’s dragnet internet spying revealed in the 2000s, and the fight against FBI gag orders. 

You can watch the interview on Comedy Central; extended episodes are released shortly thereafter on Paramount Plus, and segments appear on YouTube. We will also share the interview once it is uploaded and available online.

About The Daily Show

The Daily Show is a long-running comedy news show that covers the biggest headlines of the day. It has won 26 Primetime Emmy Awards and has introduced the world to now well-known actors and comedians such as Steve Carell, Samantha Bee, Ed Helms, and Trevor Noah, as well as Stephen Colbert and John Oliver, who now host shows of their own.

US Tech Companies Must be Accountable in US Courts for Facilitating Persecution and Torture Abroad, EFF Urges US Supreme Court

Fri, 03/27/2026 - 6:07pm
Cisco Systems Case Has Major Implications for Global Human Rights

SAN FRANCISCO – U.S. technology companies should be legally accountable in U.S. courts for building tools that purposefully and actively facilitate human rights abuses by foreign governments, the Electronic Frontier Foundation argued in a brief filed Friday with the U.S. Supreme Court.

The brief filed in the case of Cisco Systems, Inc., et al., v. Doe I, et al. urges the high court to uphold the U.S. Court of Appeals for the 9th Circuit’s 2023 ruling that U.S. corporations can be held liable under the Alien Tort Statute (ATS) – a law that lets noncitizens bring claims in U.S. federal court for international law violations – for taking actions in the U.S. that aided and abetted persecution and torture abroad. 

“This is not a case about a company that merely provided routers or other general-purpose technologies to a foreign government. It is about a company that purposefully and actively assisted in the persecution of a religious group,” the brief says. “There is a growing set of companies—including American companies—that provide surveillance technologies that are vulnerable to, and indeed are being used to, support gross human rights abuses. Because of this, the outcome of this case will have profound implications for millions of people who rely on digital technologies in their everyday lives, including to practice their religion.” 

The “Golden Shield” system that Cisco custom-built for the Chinese government was an essential component of persecution against the Falun Gong religious group—persecution that included online spying and tracking, detention, and torture. Victims reported that intercepted communications were used during torture sessions aimed at forcing them to renounce their religion. Falun Gong victims and their families sued Cisco in 2011 and a federal district judge dismissed the case in 2014. The case was delayed three times as the Supreme Court considered three prior ATS cases.   

The 9th Circuit appeals court – after proceedings including an amicus brief from EFF – reversed that lower decision, holding that U.S. corporations can be held liable under the ATS for aiding and abetting human rights abuses abroad. It also held that a company does not need to have the “purpose” to facilitate human rights abuses in order to be held liable; it only needs to have “knowledge” that its assistance helped in such abuses. It then held that the plaintiffs’ allegations showed that Cisco’s actions met both standards. The court also held that the fact that a technology has legitimate uses does not shield a company from liability for other uses that led to human rights abuses when the standards of international law are met. Taken cumulatively, Cisco’s actions in the U.S. were sufficient to allow the case to proceed, the 9th Circuit ruled.  

Cisco appealed to the Supreme Court, which granted review in January. The case, No. 24-856, is scheduled for argument on April 28. 

Cisco Systems is just one of many U.S. companies that make surveillance systems, spyware, and other products used by governments to violate people’s human rights. 

“This Court must not shut the courthouse door to victims of human rights abuses that are actively powered by American corporations,” the brief says. “In the digital age, repressive governments rarely act alone to violate human rights. They have accomplices—including technology companies that have the sophistication and technical know-how that those repressive governments lack.” 

For EFF’s amicus brief to the U.S. Supreme Court:  https://www.eff.org/document/2026-03-27-eff-amicus-brief-cisco-v-doe-scotus

For EFF’s Doe I v. Cisco case page: https://www.eff.org/cases/doe-i-v-cisco  

For the U.S. Supreme Court docket: https://www.supremecourt.gov/docket/docketfiles/html/public/24-856.html  

 

Contact: Sophia Cope, Senior Staff Attorney, sophia@eff.org; Cindy Cohn, Executive Director, cindy@eff.org

Traffic Violation! License Plate Reader Mission Creep Is Already Here

Thu, 03/26/2026 - 4:19pm

A new report from 404 Media sheds light on how automated license plate readers (ALPRs) are being used in ways that go beyond the press releases and glossy marketing materials put out by law enforcement agencies and ALPR vendors. In December 2025, Georgia State Patrol ticketed a motorcyclist for holding a cell phone in his hand. According to the report, the ticket read, “CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.”

If you’re thinking that this sounds outside of the scope of what ALPRs are supposed to do, you’re right. In November 2025, Flock Safety, the maker of the ALPR in question, wrote a post about how they definitely are in compliance with the Fourth Amendment to the U.S. Constitution. In this post, which highlighted what ALPRs are and what they are not, the company writes: “What it is not: Flock ALPR does not perform facial recognition, does not store biometrics, cannot be queried to find people, and is not used to enforce traffic violations.” (emphasis added)

Well, apparently their customers never got the memo, and the technology’s design does not explicitly prevent behavior the company officially and publicly disavows.

Or at least that claim used to hold: Flock now lists six different companies providing traffic enforcement technology on its “Partner program” site. Public records also show that speed enforcement cameras have been connected to Flock’s ALPR network.

EFF and other privacy advocates have long warned about mission creep when it comes to surveillance infrastructure. Police often swear that a piece of technology will be used only in a particular set of circumstances or to fight only the most serious crimes, only to deploy it against petty offenses or to watch protests.

We continue to urge cities, states, and even companies to end their relationships with Flock Safety, because the mass surveillance it enables is incompatible with civil liberties and the company has proven unable to prevent mission creep.
