EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

The Department of Defense Wants Less Proof its Software Works

Fri, 10/31/2025 - 11:29am

When Congress eventually reopens, the 2026 National Defense Authorization Act (NDAA) will be moving toward a vote. This gives us a chance to see the priorities of the Secretary of Defense and his Congressional allies when it comes to the military—and one of those priorities is buying technology, especially AI, with less of an obligation to prove it’s effective and worth the money the government will be paying for it. 

As reported by Lawfare, “This year’s defense policy bill—the National Defense Authorization Act (NDAA)—would roll back data disclosures that help the department understand the real costs of what they are buying, and testing requirements that establish whether what contractors promise is technically feasible or even suited to its needs.” This change comes amid a push from the Secretary of Defense to “Maximize Lethality” by acquiring modern software “at a speed and scale for our Warfighter.” The Senate Armed Services Committee has also expressed interest in making “significant reforms to modernize the Pentagon's budgeting and acquisition operations...to improve efficiency, unleash innovation, and modernize the budget process.”

The 2026 NDAA itself says that the “Secretary of Defense shall prioritize alternative acquisition mechanisms to accelerate development and production” of technology, including an expedited “software acquisition pathway”—a special part of the U.S. code that, if this version of the NDAA passes, will give the Secretary of Defense the power to streamline the buying process and get new technology, or updates to existing technology, operational “in a period of not more than one year from the time the process is initiated…” It also ensures the new technology “shall not be subjected to” some of the traditional levers of oversight.

All of this signals one thing: speed over due diligence. In a commercial technology landscape where companies are repeatedly found to be overselling or even deceiving people about their product’s technical capabilities—or where police departments are constantly grappling with the reality that expensive technology may not be effective at providing the solutions they’re after—it’s important that the government agency with the most expansive budget has time to test the efficacy and cost-efficiency of new technology. It’s easy for the military or police departments to listen to a tech company’s marketing department and believe their well-rehearsed sales pitch, but Congress should make sure that public money is being used wisely and in a way that is consistent with both civil liberties and human rights. 

The military and those who support its preferred budget should think twice about cutting corners before buying and deploying new technology. The Department of Defense’s posturing does not inspire confidence that the technologically focused military of tomorrow will be equipped in a way that is effective, efficient, or transparent. 

Age Verification, Estimation, Assurance, Oh My! A Guide to the Terminology

Thu, 10/30/2025 - 6:37pm

If you've been following the wave of age-gating laws sweeping across the country and the globe, you've probably noticed that lawmakers, tech companies, and advocates all seem to be using different terms for what sounds like the same thing. Age verification, age assurance, age estimation, age gating—they get thrown around interchangeably, but they technically mean different things. And those differences matter a lot when we're talking about your rights, your privacy, your data, and who gets to access information online.

So let's clear up the confusion. Here's your guide to the terminology that's shaping these laws, and why you should care about the distinctions.

Age Gating: “No Kids Allowed”

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. It simply refers to the fact that a restriction exists. Think of it as the concept of “you must be this old to enter” without getting into the details of how they’re checking. 

Age Assurance: The Umbrella Term

Think of age assurance as the catch-all category. It covers any method an online service uses to figure out how old you are with some level of confidence. That's intentionally vague, because age assurance includes everything from the most basic check-the-box systems to full-blown government ID scanning.

Age assurance is the big tent that contains all the other terms we're about to discuss below. When a company or lawmaker talks about "age assurance," they're not being specific about how they're determining your age—just that they're trying to. For decades, the internet operated on a “self-attestation” system where you checked a box saying you were 18, and that was it. These new age-verification laws are specifically designed to replace that system. When lawmakers say they want "robust age assurance," what they really mean is "we don't trust self-attestation anymore, so now you need to prove your age beyond just swearing to it."

Age Estimation: Letting the Algorithm Decide

Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you.

This might include:

  • Analyzing your face through a video selfie or photo
  • Examining your voice
  • Looking at your online behavior—what you watch, what you like, what you post
  • Checking your existing profile data

Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, and an algorithm analyzes your face and spits out an estimated age range. Sounds convenient, right?

Here's the problem: “estimation” is exactly that, a guess, and it is inherently imprecise. Age estimation is notoriously unreliable, especially for teenagers—the exact group these laws claim to protect. An algorithm might tell a website you're somewhere between 15 and 19 years old. That's not helpful when the cutoff is 18, and what's at stake is a young person's constitutional rights.
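
To make the threshold problem concrete, here is a minimal sketch in Kotlin, with invented names and numbers, of the only decision logic an estimated range can support:

    // Hypothetical sketch: the choice a site faces when an estimator
    // returns a range instead of an exact age. Names are invented.
    enum class GateDecision { ALLOW, DENY, ESCALATE_TO_ID_CHECK }

    data class AgeEstimate(val low: Int, val high: Int) // estimator's range

    fun decide(estimate: AgeEstimate, cutoff: Int = 18): GateDecision = when {
        estimate.low >= cutoff -> GateDecision.ALLOW // whole range above the cutoff
        estimate.high < cutoff -> GateDecision.DENY  // whole range below the cutoff
        else -> GateDecision.ESCALATE_TO_ID_CHECK    // range straddles the cutoff
    }

    fun main() {
        // A 15-19 estimate straddles an 18+ cutoff: the "privacy-friendly"
        // guess resolves nothing, and the user gets pushed to ID checks.
        println(decide(AgeEstimate(15, 19))) // prints ESCALATE_TO_ID_CHECK
    }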

And it gets worse. These systems consistently fail for certain groups, including people of color, trans individuals, and people with disabilities.

When estimation fails (and it often does), users get kicked to the next level: actual verification. Which brings us to…

Age Verification: “Show Me Your Papers”

Age verification is the most invasive option. This is where you have to prove your exact age, tied to a specific birth date, rather than, for example, prove that you have crossed some age threshold (like 18 or 21 or 65). EFF generally refers to most age gates and mandates on young people’s access to online information as “age verification,” as most of them typically require you to submit hard identifiers like:

  • Government-issued ID (driver's license, passport, state ID)
  • Credit card information
  • Utility bills or other documents
  • Biometric data

This is what a lot of new state laws are actually requiring, even when they use softer language like "age assurance." Age verification doesn't just confirm you're over 18: it reveals your full identity. Your name, address, date of birth, photo—everything.

Here's the critical thing to understand: age verification is really identity verification. You're not just proving you're old enough—you're proving exactly who you are. And that data has to be stored, transmitted, and protected by every website that collects it.

We already know how that story ends. Data breaches are inevitable. And when a database containing your government ID tied to your adult content browsing history gets hacked—and it will—the consequences can be devastating.

Why This Confusion Matters

Politicians and tech companies love using these terms interchangeably because it obscures what they're actually proposing. A law that requires "age assurance" sounds reasonable and moderate. But if that law defines age assurance as requiring government ID verification, it's not moderate at all—it's mass surveillance. Similarly, when Instagram says it's using "age estimation" to protect teens, that sounds privacy-friendly. But when their estimation fails and forces you to upload your driver's license instead, the privacy promise evaporates.

Here's the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don't know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don't know that verification systems have error rates. They don't even seem to understand that the terms they're using mean different things. The fact that their terminology is all over the place—using "age assurance," "age verification," and "age estimation" interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.

Language matters because it shapes how we think about these systems. "Assurance" sounds gentle. "Verification" sounds official. "Estimation" sounds technical and impersonal, and also admits its inherent imprecision. But they all involve collecting your data and create a metaphysical age gate to the internet. The terminology is deliberately confusing, but the stakes are clear: it's your privacy, your data, and your ability to access the internet without constant identity checks. Don't let fuzzy language disguise what these systems really do.

❤️ Let's Sue the Government! | EFFector 37.15

Wed, 10/29/2025 - 1:06pm

There are no tricks in EFF's EFFector newsletter, just treats to keep you up-to-date on the latest in the fight for digital privacy and free expression. 

In our latest issue, we're explaining a new lawsuit to stop the U.S. government's viewpoint-based surveillance of online speech; sharing even more tips to protect your privacy; and celebrating a victory for transparency around AI police reports.

Prefer to listen in? Check out our audio companion, where EFF Staff Attorney Lisa Femia explains why EFF is suing to stop the Trump administration's ideological social media surveillance program. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.15 - ❤️ LET'S SUE THE GOVERNMENT!

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Science Must Decentralize

Fri, 10/24/2025 - 4:55pm

Knowledge production doesn’t happen in a vacuum. Every great scientific breakthrough is built on prior work, and an ongoing exchange with peers in the field. That’s why we need to address the threat of major publishers and platforms having an improper influence on how scientific knowledge is accessed—or whether it is outright suppressed.

In the digital age, the collaborative and often community-governed effort of scholarly research has gone global and unlocked unprecedented potential to improve our understanding and quality of life. That is, if we let it. Publishers continue to monopolize access to life-saving research and increase the burden on researchers through article processing charges and a pyramid of volunteer labor. This exploitation makes a mockery of open inquiry and makes the denial of access a serious human rights issue.

While alternatives like Diamond Open Access are promising, crashing through publishing gatekeepers isn’t enough. Large intermediary platforms are capturing other aspects of the research process—inserting themselves between researchers, and between researchers and these published works—through platformization.

Funneling scholars into a few major platforms isn’t just annoying, it’s corrosive to privacy and intellectual freedom. Enshittification has come for research infrastructure, turning everyday tools into avenues for surveillance. Most professors now worry that their research is being scrutinized by academic bossware, forcing them to chase arbitrary metrics that don’t always reflect research quality. As scholars play this numbers game, a growing threat of surveillance in scholarly publishing gives these measures a menacing tilt, chilling the publication of, and access to, research in targeted areas. These risks spike in the midst of governmental campaigns to muzzle scientific knowledge, buttressed by a scourge of platform censorship on corporate social media.

The only antidote to this ‘platformization’ is Open Science and decentralization. The infrastructure we rely on must be built in the open, on interoperable standards, and be resistant to corporate (or governmental) takeovers. Universities and the science community are well situated to lead this fight. As we’ve seen in EFF’s Tor University Challenge, promoting access to knowledge and public interest infrastructure is aligned with the core values of higher education. 

Using social media as an example, universities have a strong interest in promoting the work being done at their campuses far and wide. This is where traditional platforms fall short: algorithms typically prioritize paid content, downrank off-site links, and promote sensational claims to drive engagement. When users are free from enshittification and can themselves control the platform’s algorithms, as they can on platforms like Bluesky, scientists get more engagement and find interactions more useful.

Institutions play a pivotal role in encouraging the adoption of these alternatives, ranging from leveraging existing IT support to assist with account use and verification, all the way to shouldering some of the hosting with Mastodon instances and/or Bluesky PDS for official accounts. This support is good for the research, good for the university, and makes our systems of science more resilient to attacks on science and the instability of digital monocultures.

This subtle influence of intermediaries can also appear in other tools relied on by researchers, but there are a number of open alternatives and interoperable tools for everything from citation management and data hosting to online chat among collaborators. Individual scholars and research teams can implement these tools today, but real change depends on institutions investing in tech that puts community before shareholders.

When infrastructure is too centralized, gatekeepers gain new powers to capture, enshittify, and censor. The result is a system that becomes less useful, less stable, and with more costs put on access. Science thrives on sharing and access equity, and its future depends on a global and democratic revolt against predatory centralized platforms.

EFF is proud to celebrate Open Access Week.

Joint Statement on the UN Cybercrime Convention: EFF and Global Partners Urge Governments Not to Sign

Fri, 10/24/2025 - 4:14pm

Today, EFF joined a coalition of civil society organizations in urging UN Member States not to sign the UN Convention Against Cybercrime. For those that move forward despite these warnings, we urge them to take immediate and concrete steps to limit the human rights harms this Convention will unleash. These harms are likely to be severe and will be extremely difficult to prevent in practice.

The Convention obligates states to establish broad electronic surveillance powers to investigate and cooperate on a wide range of crimes—including those unrelated to information and communication systems—without adequate human rights safeguards. It requires governments to collect, obtain, preserve, and share electronic evidence with foreign authorities for any “serious crime”—defined as an offense punishable under domestic law by at least four years’ imprisonment (or a higher penalty).

In many countries, merely speaking freely; expressing a nonconforming sexual orientation or gender identity; or protesting peacefully can constitute a serious criminal offense under the Convention’s definition. People have faced lengthy prison terms, or even more severe treatment such as torture, for criticizing their governments on social media, raising a rainbow flag, or criticizing a monarch. 

In today’s digital era, nearly every message or call generates granular metadata—revealing who communicates with whom, when, and from where—that routinely traverses national borders through global networks. The UN cybercrime convention, as currently written, risks enabling states to leverage its expansive cross-border data-access and cooperation mechanisms to obtain such information for political surveillance—abusing the Convention’s mechanisms to monitor critics, pressure their families, and target marginalized communities abroad.

As abusive governments increasingly rely on questionable tactics to extend their reach beyond their borders—targeting dissidents, activists, and journalists worldwide—the UN Cybercrime Convention risks becoming a vehicle for globalizing repression, enabling an unprecedented multilateral infrastructure for digital surveillance that allows states to access and exchange data across borders in ways that make political monitoring and targeting difficult to detect or challenge.

EFF has long sounded the alarm over the UN Cybercrime Treaty’s sweeping powers of cross-border cooperation and its alarming lack of human-rights safeguards. As the Convention opens for signature on October 25–26, 2025 in Hanoi, Vietnam—a country repeatedly condemned by international rights groups for jailing critics and suppressing online speech—the stakes for global digital freedom have never been higher.

The Convention’s many flaws cannot easily be mitigated because it fundamentally lacks a mechanism for suspending states that systematically fail to respect human rights or the rule of law. States must refuse to sign or ratify the Convention. 

Read our full letter here.

When AI and Secure Chat Meet, Users Deserve Strong Controls Over How They Interact

Thu, 10/23/2025 - 1:23pm

Both Google and Apple are cramming new AI features into their phones and other devices, and neither company has offered clear ways to control which apps those AI systems can access. Recent issues around WhatsApp on both Android and iPhone demonstrate how these interactions can go sideways, risking revealing chat conversations beyond what you intend. Users deserve better controls and clearer documentation around what these AI features can access.

After diving into how Google Gemini and Apple Intelligence (and in some cases Siri) currently work, we didn’t always find clear answers to questions about how data is stored, who has access, and what it can be used for.

At a high level, when you compose a message with these tools, the companies can usually see the contents of those messages and receive at least a temporary copy of the text on their servers.

When receiving messages, things get trickier. When you use an AI like Gemini or a feature like Apple Intelligence to summarize or read notifications, we believe companies should be doing that content processing on-device. But poor documentation and weak guardrails have led us deep into documentation rabbit holes that still fail to clarify the privacy practices as clearly as we’d like.

We’ll dig into the specifics below as well as potential solutions we’d like to see Apple, Google, and other device-makers implement, but first things first, here’s what you can do right now to control access:

Control AI Access to Secure Chat on Android and iOS

Here are some steps you can take to control access if you want nothing to do with device-level AI integration and don’t want to risk accidentally sharing the text of a message outside of the app you’re using.

How to Check and Limit What Gemini Can Access

If you’re using Gemini on your Android phone, it’s a good time to review your settings to ensure things are set up how you want. Here’s how to check each of the relevant settings:

  • Disable Gemini App Activity: Gemini App Activity is a history Google stores of all your interactions with Gemini. It’s enabled by default. To disable it, open Gemini (depending on your phone model, you may or may not even have the Google Gemini app installed. If you don’t have it installed, you don’t really need to worry about any of this). Tap your profile picture > Gemini Apps Activity, then change the toggle to either “Turn off,” or “Turn off and delete activity” if you want to delete previous conversations. If the option reads “Turn on,” then Gemini Apps Activity is already turned off. 
  • Control app and notification access: You can control which apps Gemini can access by tapping your profile picture > Apps, then scrolling down and disabling the toggle next to any apps you do not want Gemini to access. If you do not want Gemini to potentially access the content that appears in notifications, open the Settings app and revoke notification access from the Google app.
  • Delete the Gemini app: Depending on your phone model, you might be able to delete the Gemini app and revert to using Google Assistant instead. You can do so by long-pressing the Gemini app and selecting the option to delete. 

How to Check and Limit What Apple Intelligence and Siri Can Access

Similarly, there are a few things you can do to clamp down on what Apple Intelligence and Siri can do: 

  • Disable the “Use with Siri Requests” option: If you want to continue using Siri, but don’t want to accidentally use it to send messages through secure messaging apps, like WhatsApp, then you can disable that feature by opening Settings > Apps > [app name], and disabling “Use with Siri Requests,” which turns off the ability to compose messages with Siri and send them through that app.
  • Disable Apple Intelligence entirely: Apple Intelligence is an all-or-nothing setting on iPhones, so if you want to avoid any potential issues your only option is to turn it off completely. To do so, open Settings > Apple Intelligence & Siri, and disable “Apple Intelligence” (you will only see this option if your device supports Apple Intelligence; if it doesn’t, the menu will only be for “Siri”). You can also disable certain features, like “writing tools,” using Screen Time restrictions. Siri can’t be universally turned off in the same way, though you can turn off the options under “Talk to Siri” to make it so you can’t speak to it. 

For more information about cutting off AI access at different levels in other apps, this Consumer Reports article covers other platforms and services.

Why It Matters: Sending Messages Has Different Privacy Concerns than Receiving Them

Let’s start with a look at how Google and Apple integrate their AI systems into message composition, using WhatsApp as an example.

Google Gemini and WhatsApp

On Android, you can optionally link WhatsApp and Gemini together so you can then initiate various actions for sending messages from the Gemini app, like “Call Mom on WhatsApp” or “Text Jason on WhatsApp that we need to cancel our secret meeting, but make it a haiku.” This feature raised red flags for users concerned about privacy.

By default, everything you do in Gemini is stored in the “Gemini Apps Activity,” where messages are stored forever, subject to human review, and are used to train Google’s products. So, unless you change it, when you use Gemini to compose and send a message in WhatsApp then the message you composed is visible to Google.

If you turn the activity off, interactions are still stored for 72 hours. Google’s documentation claims that even though messages are stored, those conversations aren't reviewed or used to improve Google machine learning technologies, though that appears to be an internal policy choice with no technical limits preventing Google from accessing those messages.

The simplicity of invoking Gemini to compose and send a message may lead to a false sense of privacy. Notably, other secure messaging apps, like Signal, do not offer this Gemini integration.

For comparison’s sake, let’s see how this works with Apple devices.

Siri and WhatsApp

The closest comparison to this process on iOS is to use Siri, which, it is claimed, will eventually be part of Apple Intelligence. Currently, Apple’s AI message composition tools are not available for third-party apps like Signal and WhatsApp.

According to its privacy policy, when you dictate a message through Siri to send to WhatsApp (or anywhere else), the message, including metadata like the recipient phone number and other identifiers, is sent to Apple’s servers. This was confirmed by researchers to include the text of messages sent to WhatsApp. When you use Siri to compose a WhatsApp message, the message gets routed to both Apple and WhatsApp. Apple claims it does not store this transcript unless you’ve opted into “Improve Siri and Dictation.” WhatsApp defers to Apple’s support for data handling concerns. This is similar to how Google handles speech-to-text prompts.

In response to that research, Apple said this was expected behavior with an app that uses SiriKit—the extension that allows third-party apps to integrate with Siri—like WhatsApp does.

Both Siri and Apple Intelligence can sometimes run locally on-device, and other times need to rely on Apple-managed cloud servers to complete requests. Apple Intelligence can use the company’s Private Cloud Compute, but Siri doesn’t have a similar feature.

The ambiguity around where data goes makes it overly difficult to decide whether you are comfortable with the sort of privacy trade-off that using features like Siri or Apple Intelligence might entail.

How Receiving Messages Works

Sending encrypted messages is just one half of the privacy puzzle. What happens on the receiving end matters too. 

Google Gemini

By default, the Gemini app doesn’t have access to the text inside secure messaging apps or to notifications. But you can grant access to notifications using the Utilities app. Utilities can read, summarize, and reply to notifications, including in WhatsApp and Signal (it can also read notifications aloud on headphones).

We could not find anything in Google’s Utilities documentation that clarifies what information is collected, stored, or sent to Google from these notifications. When we reached out to Google, the company responded that it “builds technical data protections that safeguard user data, uses data responsibly, and provides users with tools to control their Gemini experience.” That answer suggests Google has no technical limitation around accessing the text from notifications if you’ve enabled the feature in the Utilities app, which could open up any notifications routed through Utilities to the Gemini app to be accessed internally or by third parties. Google needs to make its data handling explicit in its public documentation.
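
To see why notification access is so sensitive, consider how Android’s standard notification-listener API works. The sketch below is not Google’s Utilities code, which is not public; it only illustrates, using the documented NotificationListenerService API, what any app granted notification access can read (the class name is ours):

    import android.app.Notification
    import android.service.notification.NotificationListenerService
    import android.service.notification.StatusBarNotification

    // Runs once the user grants the app "notification access." The service
    // must be declared in the manifest with the
    // BIND_NOTIFICATION_LISTENER_SERVICE permission.
    class NotificationReader : NotificationListenerService() {
        override fun onNotificationPosted(sbn: StatusBarNotification) {
            val extras = sbn.notification.extras
            val sender = extras.getCharSequence(Notification.EXTRA_TITLE) // often the contact name
            val body = extras.getCharSequence(Notification.EXTRA_TEXT)    // often the message text
            // For a Signal or WhatsApp notification, `body` is the decrypted
            // message. Once it is here, only policy, not technology, stops
            // the app from storing it or sending it to a server.
        }
    }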

If you use encrypted communications apps and have granted access to notifications, then it is worth considering disabling that feature or controlling what’s visible in your notifications on an app-level.

Apple Intelligence

Apple is more clear about how it handles this sort of notification access.

Siri can read and reply to messages with the “Announce Notifications” feature. With this enabled, Siri can read notifications out loud on select headphones or via CarPlay. In a press release, Apple states, “When a user talks or types to Siri, their request is processed on device whenever possible. For example, when a user asks Siri to read unread messages… the processing is done on the user’s device. The contents of the messages aren’t transmitted to Apple servers, because that isn’t necessary to fulfill the request.”

Apple Intelligence can summarize notifications from any app that you’ve enabled notifications on. Apple is clear that these summaries are generated on your device, “when Apple Intelligence provides you with preview summaries of your emails, messages, and notifications, these summaries are generated by on-device models.” This means there should be no risk that the text of notifications from apps like WhatsApp or Signal get sent to Apple’s servers just to summarize them.

New AI Features Must Come With Strong User Controls

As more device-makers cram AI features into their devices, the more necessary it is for us to have clear and simple controls over what personal data these features can access on our devices. If users do not have control over when a text leaves a device for any sort of AI processing—whether that’s to a “private” cloud or not—it erodes our privacy and potentially threatens the foundations of end-to-end encrypted communications.

Per-app AI Permissions

Google, Apple, and other device makers should add a device-level AI permission to their phones, just like they do for other potentially invasive privacy features, like location sharing. You should be able to tell the operating system’s AI not to access an app, even if that comes at the “cost” of missing out on some features. The setting should be straightforward and easy to understand in ways the Gemini and Apple Intelligence controls currently are not.
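
As a thought experiment, the sketch below shows roughly what such a control could look like if it followed Android’s existing per-app permission pattern. Everything here is hypothetical: no OS-level AI permission exists today, and the mail-app package name is invented (Signal’s package name is real):

    // Hypothetical per-app allowlist for system AI, modeled on how Android
    // already gates runtime permissions. No such API exists today.
    class AiAccessPolicy(private val allowedPackages: Set<String>) {
        // The OS would consult this before letting its assistant read any
        // content (messages, notifications, screen text) from an app.
        fun canAssistantRead(packageName: String): Boolean =
            packageName in allowedPackages
    }

    fun main() {
        // The user lets the assistant into a mail app but not Signal.
        val policy = AiAccessPolicy(setOf("com.example.mailapp"))
        println(policy.canAssistantRead("org.thoughtcrime.securesms")) // false
        println(policy.canAssistantRead("com.example.mailapp"))        // true
    }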

Offer On-Device-Only Modes

Device-makers should offer an “on-device only” mode for those interested in using some features without having to try to figure out what happens on device or on the cloud. Samsung offers this, and both Google and Apple would benefit from a similar option.

Improve Documentation

Both Google and Apple should improve their documentation about how these features interact with various apps. Apple doesn’t seem to clarify notification processing privacy anywhere outside of a press release, and we couldn’t find anything about Google’s Utilities privacy at all. We appreciate tools like Gemini Apps Activity as a way to audit what the company collects, but vague information like “Prompted a Communications query” is only useful if there’s an explanation somewhere about what that means.

The current user options are not enough. It’s clear that the AI features device-makers add come with significant confusion about their privacy implications, and it’s time to push back and demand better controls. The privacy problems introduced alongside new AI features should be taken seriously, and remedies should be offered to both users and developers who want real, transparent safeguards over how a company accesses their private data and communications.

Civil Disobedience of Copyright Keeps Science Going

Thu, 10/23/2025 - 12:17pm

Creating and sharing knowledge are defining traits of humankind, yet copyright law has grown so restrictive that it can require acts of civil disobedience to ensure that students and scholars have the books they need and to preserve swaths of culture from being lost forever.

Reputable research generally follows a familiar pattern: Scientific articles are written by scholars based on their research—often with public funding. Those articles are then peer-reviewed by other scholars in their fields and revisions are made according to those comments. Afterwards, most large publishers expect to be given the copyright on the article as a condition of packaging it up and selling it back to the institutions that employ the academics who did the research and to the public at large. Because research is valuable and because copyright is a monopoly on disseminating the articles in question, these publishers can charge exorbitant fees that place a strain even on wealthy universities and are simply out of reach for the general public or universities with limited budgets, such as those in the global south. The result is a global human rights problem.

This model is broken, yet science goes on thanks to widespread civil disobedience of the copyright regime that locks up the knowledge created by researchers. Some turn to social media to ask that a colleague with access share articles they need (despite copyright’s prohibitions on sharing). Certainly, at least some such sharing is protected fair use, but scholars should not have to seek a legal opinion or risk legal threats from publishers to share the collective knowledge they generate.

Even more useful, though on shakier legal ground, are so-called “shadow archives” and aggregators such as SciHub, Library Genesis (LibGen), Z-Library, or Anna’s Archive. These are the culmination of efforts from volunteers dedicated to defending science.

SciHub alone handles tens of millions of requests for scientific articles each year and remains operational despite adverse court rulings thanks both to being based in Russia, and to the community of academics who see it as an ethical response to the high access barriers that publishers impose and provide it their log-on credentials so it can retrieve requested articles. SciHub and LibGen are continuations of samizdat, the Soviet-era practice of disobeying state censorship in the interests of learning and free speech.

Unless publishing gatekeepers adopt drastically more equitable practices and become partners in disseminating knowledge, they will continue to lose ground to open access alternatives, legal or otherwise.

EFF is proud to celebrate Open Access Week.

EFF Backs Constitutional Challenge to Ecuador’s Intelligence Law That Undermines Human Rights

Thu, 10/23/2025 - 11:11am

In early September, EFF submitted an amicus brief to Ecuador’s Constitutional Court supporting a constitutional challenge filed by Ecuadorian NGOs, including INREDH and LaLibre. The case challenges the constitutionality of the Ley Orgánica de Inteligencia (LOI) and its implementing regulation, the General Regulation of the LOI.

EFF’s amicus brief argues that the LOI enables disproportionate surveillance and secrecy that undermine constitutional and Inter-American human rights standards. EFF urges the Constitutional Court to declare the LOI and its regulation unconstitutional in their entirety.

More specifically, our submission notes that:

“The LOI presents a structural flaw that undermines compliance with the principles of legality, legitimate purpose, suitability, necessity, and proportionality; it inverts the rule and the exception, with serious harm to rights enshrined constitutionally and under the Convention; and it prioritizes indeterminate state interests, in contravention of the ultimate aim of intelligence activities and state action, namely the protection of individuals, their rights, and freedoms.”

Core Legal Problems Identified

Vague and Overbroad Definitions

The LOI contains key terms like “national security,” “integral security of the State,” “threats,” and “risks” that are left either undefined or so broadly framed that they could mean almost anything. This vagueness grants intelligence agencies wide, unchecked discretion, and falls short of the standard of legal certainty required under the American Convention on Human Rights (CADH).

Secrecy and Lack of Transparency

The LOI makes secrecy the rule rather than the exception, reversing the Inter-American principle of maximum disclosure, which holds that access to information should be the norm and secrecy a narrowly justified exception. The law establishes a classification system—“restricted,” “secret,” and “top secret”—for intelligence and counterintelligence information, but without clear, verifiable parameters to guide its application on a case-by-case basis. As a result, all information produced by the governing body (ente rector) of the National Intelligence System is classified as secret by default. Moreover, intelligence budgets and spending are insulated from meaningful public oversight, concentrated under a single authority, and ultimately destroyed, leaving no mechanism for accountability.

Weak or Nonexistent Oversight Mechanisms

The LOI leaves intelligence agencies to regulate themselves, with almost no external scrutiny. Civilian oversight is minimal, limited to occasional, closed-door briefings before a parliamentary commission that lacks real access to information or decision-making power. This structure offers no guarantee of independent or judicial supervision and instead fosters an environment where intelligence operations can proceed without transparency or accountability.

Intrusive Powers Without Judicial Authorization

The LOI allows access to communications, databases, and personal data without prior judicial order, which enables the mass surveillance of electronic communications, metadata, and databases across public and private entities—including telecommunication operators. This directly contradicts rulings of the Inter-American Court of Human Rights, which establish that any restriction of the right to privacy must be necessary, proportionate, and subject to independent oversight. It also runs counter to CAJAR vs. Colombia, which affirms that intrusive surveillance requires prior judicial authorization.

International Human Rights Standards Applied

Our amicus curiae draws on the CAJAR vs. Colombia judgment, which set strict standards for intelligence activities. Crucially, Ecuador’s LOI falls short of all these tests: it doesn’t constitute an adequate legal basis for limiting rights; contravenes the necessary and proportionate principles; fails to ensure robust controls and safeguards, like prior judicial authorization and solid civilian oversight; and completely disregards related data protection guarantees and data subjects’ rights.

At its core, the LOI structurally prioritizes vague notions of “state interest” over the protection of human rights and fundamental freedoms. It legalizes secrecy, unchecked surveillance, and the impunity of intelligence agencies. For these reasons, we urge Ecuador’s Constitutional Court to declare the LOI and its regulations unconstitutional, as they violate both the Ecuadorian Constitution and the American Convention on Human Rights (CADH).

Read our full amicus brief here to learn more about how Ecuador’s intelligence framework undermines privacy, transparency, and the human rights protected under Inter-American human rights law.

It’s Time to Take Back CTRL

Tue, 10/21/2025 - 1:02pm

Technology is supercharging the attack on democracy by making it easier to spy on people, block free speech, and control what we do. The Electronic Frontier Foundation’s activists, lawyers, and technologists are fighting back. Join the movement to Take Back CTRL.

DONATE TODAY

Join EFF and Fight Back

Take Back CTRL is EFF's new website to give you insight into the ways that technology has become the veins and arteries of rising global authoritarianism. It’s not just because of technology’s massive power to surveil, categorize, censor, and make decisions for governments—but also because the money made by selling your data props up companies and CEOs with clear authoritarian agendas. As the preeminent digital rights organization, EFF has a clear role to play.

If You Use Technology, This Fight Is Yours.

EFF was created for scary moments like the one we’re facing now. For 35 years, EFF has fought to ensure your rights follow you online and wherever you use technology. We’ve sued, we’ve analyzed, we’ve hacked, we’ve argued, and we’ve helped people be heard in halls of power.

But we're still missing something. You.

Because it's your rights we're fighting for:

  • Your right to speak and learn freely online, free of government censorship
  • Your right to move through the world without being surveilled everywhere you go
  • Your right to use your device without it tracking your every click, purchase, and IRL movement
  • Your right to control your data, including data about your body, and to know that data given to one government agency won’t be weaponized against you by another
  • Your right to do what you please with the products and content you pay for

Consider Take Back CTRL our "help wanted" notice, because we need your help to win this fight today.

Join EFF

The future is being decided today. Join the movement to Take Back CTRL.

The Take Back CTRL campaign highlights the work that EFF is doing to fight for our democracy, defend vulnerable members of our community, and stand up against the use of tech in this authoritarian takeover. It also features actions everyone can take to support EFF’s work, use our tools in their everyday lives, and fight back.

Help us spread the word:

Stop tech from dismantling democracy. Join the movement to Take Back CTRL of our rights. https://eff.org/tbc

No Tricks, Just Treats 🎃 EFF’s Halloween Signal Stickers Are Here!

Mon, 10/20/2025 - 4:37pm

EFF usually warns of new horrors threatening your rights online, but this Halloween we’ve summoned a few of our own we’d like to share.  Our new Signal Sticker Pack highlights some creatures—both mythical and terrifying—conjured up by our designers for you to share this spooky season.

If you’re new to Signal, it's a free and secure messaging app built by the nonprofit Signal Foundation at the forefront of defending user privacy. While chatting privately, you can add some seasonal flair with Signal Stickers, and rest assured: friends receiving them get the full sticker pack fully encrypted, safe from prying eyes and lurking spirits.

How To Get and Share Signal Stickers

On any mobile device or desktop with the Signal app installed, you can simply click the button below.

Download EFF's Signal Stickers

To Share Frights and Rights

You can also paste the sticker link directly into a Signal chat, and then tap it to download the pack directly to the app.

Once the stickers are installed, they are even easier to share—simply open a chat, tap the sticker menu on your keyboard, and send one of EFF’s spooky stickers. Your friends will then be asked if they’d like to add the sticker pack too.

All of this works without any third parties knowing what sticker packs you have or whom you shared them with. Our little ghosts and ghouls are just between us.

Meet The Encryptids

These familiar champions of digital rights—The Encryptids—are back! Don’t let their monstrous looks fool you; each one advocates for privacy, security, and a dash of weirdness in their own way. Whether they’re shouting about online anonymity or the importance of interoperability, they’re ready to help you share your love for digital rights. Learn more about their stories here, and you can even grab a bigfoot pin to let everyone know that privacy is a “human” right.

Street-Level Surveillance Monsters

On a cool autumn night, you might be on the lookout for ghosts and ghouls from your favorite horror flicks—but in the real world, there are far scarier monsters lurking in the dark: police surveillance technologies. Often hidden in plain sight, these tools quietly watch from the shadows and are hard to spot. That’s why we’ve given these tools the hideous faces they deserve in our Street-Level Surveillance Monsters series, ready to scare (and inform) your loved ones.

Copyright Creatures

Ask any online creator and they’ll tell you: few things are scarier than a copyright takedown. From unfair DMCA claims and demonetization to frivolous lawsuits designed to intimidate people into a hefty payment, the creeping expansion of copyright can inspire as much dread as any monster on the big screen. That’s why this pack includes a few trolls and creeps straight from a broken copyright system—where profit haunts innovation. 

To that end, all of EFF’s work (including these stickers) is released under an open CC-BY license, free for you to use and remix as you see fit.

Happy Haunting Everybody!

These frights may disappear with your message, but the fights persist. That’s why we’re so grateful to EFF supporters for helping us make the digital world a little more weird and a little less scary. You can become a member today and grab some gear to show your support. Happy Halloween!

DONATE TODAY

No One Should Be Forced to Conform to the Views of the State

Thu, 10/16/2025 - 3:05pm

Should you have to think twice before posting a protest flyer to your Instagram story? Or feel pressure to delete that bald JD Vance meme that you shared? Now imagine that you could get kicked out of the country—potentially losing your job or education—based on the Trump administration’s dislike of your views on social media. 

That threat to free expression and dissent is happening now, but we won’t let it stand. 

"...they're not just targeting individuals—they're targeting the very idea of freedom itself."

The Electronic Frontier Foundation and co-counsel are representing the United Automobile Workers (UAW), Communications Workers of America (CWA), and American Federation of Teachers (AFT) in a lawsuit against the U.S. State Department and Department of Homeland Security for their viewpoint-based surveillance and suppression of noncitizens’ First Amendment-protected speech online.  The lawsuit asks a federal court to stop the government’s unconstitutional surveillance program, which has silenced citizens and noncitizens alike. It has even hindered unions’ ability to associate with their members. 

"When they spy on, silence, and fire union members for speaking out, they're not just targeting individuals—they're targeting the very idea of freedom itself,” said UAW President Shawn Fain. 

The Trump administration has built this mass surveillance program to monitor the constitutionally protected online speech of noncitizens who are lawfully present in the U.S. The program uses AI and automated technologies to scour social media and other online platforms to identify and punish individuals who express viewpoints the government considers "hostile" to "our culture" and "our civilization".  But make no mistake: no one should be forced to conform to the views of the state. 

The Foundation of Democracy 

Your free expression and privacy are fundamental human rights, and democracy crumbles without them. We have an opportunity to fight back, but we need you.  EFF’s team of lawyers, activists, researchers, and technologists have been on a mission to protect your freedom online since 1990, and we’re just getting started.

Donate and become a member of EFF today. Your support helps protect crucial rights, online and off, for everyone.

Give Today

Labor Unions, EFF Sue Trump Administration to Stop Ideological Surveillance of Free Speech Online

Thu, 10/16/2025 - 2:54pm
Viewpoint-based Online Surveillance of Permanent Residents and Visa Holders Violates First Amendment, Lawsuit Argues

NEW YORK—The United Automobile Workers (UAW), Communications Workers of America (CWA), and American Federation of Teachers (AFT) filed a lawsuit today against the Departments of State and Homeland Security for their viewpoint-based surveillance and suppression of protected expression online. The complaint asks a federal court to stop this unconstitutional surveillance program, which has silenced and frightened both citizens and noncitizens, and hampered the ability of the unions to associate with their members and potential members. The case is titled UAW v. State Department.

Since taking power, the Trump administration has created a mass surveillance program to monitor constitutionally protected speech by noncitizens lawfully present in the U.S. Using AI and other automated technologies, the program surveils the social media accounts of visa holders with the goal of identifying and punishing those who express viewpoints the government doesn't like. This has been paired with a public intimidation campaign, silencing not just noncitizens with immigration status, but also the families, coworkers, and friends with whom their lives are integrated.

As detailed in the complaint, when asked in a survey if they had changed their social media activity as a result of the Trump administration's ideological online surveillance program, over 60 percent of responding UAW members and over 30 percent of responding CWA members who were aware of the program said they had. Among noncitizens, these numbers were even higher. Of respondents aware of the program, over 80 percent of UAW members who were not U.S. citizens and over 40 percent of CWA members who were not U.S. citizens said they had changed their activity online.

Individual union members reported refraining from posting, refraining from sharing union content, deleting posts, and deleting entire accounts in response to the ideological online surveillance program. Criticism of the Trump administration or its policies was the most common type of content respondents reported changing their social media activity around. Many members also reported altering their offline union activity in response to the program, including avoiding being publicly identified as part of the unions and reducing their participation in rallies and protests. One member even said they declined to report a wage theft claim due to fears arising from the surveillance program.

Represented by the Electronic Frontier Foundation (EFF), Muslim Advocates (MA), and the Media Freedom & Information Access Clinic (MFIA), the UAW, CWA, and AFT seek to halt the program that affects thousands of their members individually and has harmed the ability of the unions to organize, represent, and recruit members. The lawsuit argues that the viewpoint-based online surveillance program violates the First Amendment and the Administrative Procedure Act.

"The Trump administration's use of surveillance to track and intimidate UAW members is a direct assault on the First Amendment—and an attack on every working person in this country," said UAW President Shawn Fain. "When they spy on, silence, and fire union members for speaking out, they're not just targeting individuals—they're targeting the very idea of freedom itself. The right to protest, to organize, to speak without fear—that's the foundation of American democracy. If they can come for UAW members at our worksites, they can come for any one of us tomorrow. And we will not stand by and let that happen."

"Every worker should be alarmed by the Trump administration’s online surveillance program," said CWA President Claude Cummings Jr. "The labor movement is built on our freedoms under the First Amendment to speak and assemble without fear retaliation by the government. The unconstitutional Challenged Surveillance Program threatens those freedoms and explicitly targets those who are critical of the administration and its policies. This policy interferes with CWA members’ ability to express their points of view online and organize to improve their working conditions."

"Free speech is the foundation of democracy in America," said AFT President Randi Weingarten. "The Trump administration has rejected that core constitutional right and now says only speech it agrees with is permitted—and that it will silence those who disagree. This suit exposes the online surveillance tools and other cyber tactics never envisioned by the founders to enforce compliance with the administration’s views. It details the direct harms on both the target of these attacks and the chilling effect on all those we represent and teach."

"Using a variety of AI and automated tools, the government can now conduct viewpoint-based surveillance and analysis on a scale that was never possible with human review alone," said EFF Staff Attorney Lisa Femia. "The scale of this spying is matched by an equally massive chilling effect on free speech."

"The administration is hunting online for an ever-growing list of disfavored viewpoints," said Golnaz Fakhimi, Legal Director of Muslim Advocates. "Its goal is clear: consolidate authoritarian power by crushing dissent, starting with noncitizens, but certainly not ending there. This urgent lawsuit aims to put a stop to this power grab and defend First Amendment freedoms crucial to a pluralistic and democratic society."

"This case goes to the heart of the First Amendment," said Anthony Cosentino, a student in the Media Freedom & Information Access Clinic. "The government can’t go after people for saying things it doesn’t like. The current administration has ignored that principle, developing a vast surveillance apparatus to find and punish people for their constitutionally protected speech. It is an extraordinary abuse of power, creating a climate of fear not seen in this country since the McCarthy era, especially on college campuses. Our laws and Constitution will not allow it."

For the complaint: https://www.eff.org/document/uaw-v-dos-complaint

For more about the litigation: https://eff.org/cases/united-auto-workers-v-us-department-state

Contacts:
Electronic Frontier Foundation: press@eff.org
Muslim Advocates: golnaz@muslimadvocates.org

🎃 A Full Month of Privacy Tips from EFF | EFFector 37.14

Wed, 10/15/2025 - 2:59pm

Instead of catching you off-guard with a jump scare this Halloween season, EFF is here to catch you up on the latest digital rights news with our EFFector newsletter!

In this issue, we’re helping you take control of your online privacy with Opt Out October; explaining the UK’s attack on encryption and why it’s bad for all users; and covering shocking new details about an abortion surveillance case in Texas.

Prefer to listen in? Check out our audio companion, where EFF Security and Privacy Activist Thorin Klosowski explains how small steps to protect your privacy can add up to big changes.  Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.14 - 🎃 A FULL MONTH OF PRIVACY TIPS FROM EFF

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Victory! California Requires Transparency for AI Police Reports

Tue, 10/14/2025 - 1:44pm

California Governor Newsom has signed S.B. 524, a bill that begins the long process of regulating and imposing transparency on the growing problem of AI-written police reports. EFF supported this bill and has spent the last year vocally criticizing the companies pushing AI-generated police reports as a service. 

S.B. 524 requires police to disclose, on the report itself, whether AI was used to author the report in full or in part. Further, it bans vendors from selling or sharing the information a police agency provided to the AI. 

The bill is also significant because it requires departments to retain all the various drafts of the report so that judges, defense attorneys, or auditors can readily see which portions of the final report were written by the officer and which portions were written by the computer. This creates major problems for police who use the most popular product in this space: Axon’s Draft One. By design, Draft One does not retain an edit log of who wrote what. Now, to stay in compliance with the law, police departments will either need Axon to change its product, or officers will have to take it upon themselves to retain evidence of what each subsequent edit and draft of their report looked like. Or, police can drop Axon’s Draft One altogether. 

EFF will continue to monitor whether departments are complying with this state law.

After Utah, California has become the second state to pass legislation that begins to address this problem. Because of the lack of transparency surrounding how police departments buy and deploy technology, it’s often hard to know if police departments are using AI to write reports, how the generative AI chooses to translate audio to a narrative, and which portions of reports are written by AI and which parts are written by the officers. EFF has written a guide to help you file public records requests that might shed light on your police department’s use of AI to write police reports. 

It’s still unclear if products like Draft One run afoul of record retention laws, and how AI-written police reports will impact the criminal justice system. We will need to consider more comprehensive regulation and perhaps even prohibition of this use of generative AI. But S.B. 524 is a good first step. We hope that more states will follow California and Utah’s lead and pass even stronger bills.

EFF and Five Human Rights Organizations Urge Action Around Microsoft’s Role in Israel’s War on Gaza

Mon, 10/13/2025 - 5:53pm

In a letter sent to Microsoft at the end of last month, EFF and five other civil society organizations—Access Now, Amnesty International, Human Rights Watch, Fight for the Future, and 7amleh—called on the company to cease any further involvement in providing AI and cloud computing technologies for use in Israel’s ongoing genocide against Palestinians in the Gaza Strip.

EFF also sent updated letters to Google and Amazon renewing our calls for each company to respond to the serious concerns we raised with each of them last year about how they are fulfilling their respective human rights promises to the public. Neither Google nor Amazon has responded substantively. Amazon failed to even acknowledge our request, much less provide any transparency to the public. 

Microsoft Takes a Positive Step Against Surveillance

On September 25, Microsoft’s Vice Chair & President reported that the company had “ceased and disabled a set of services” provided to a unit within the Israel Ministry of Defense. The announcement followed an internal review at the company, prompted by an August 6 report in The Guardian that the IDF is using Azure to store files of phone calls obtained through broad or mass surveillance of civilians in Gaza and the West Bank.

This investigation by The Guardian, +972 Magazine, and Local Call also revealed the extent to which Israel’s military intelligence unit in question, Unit 8200, has used Microsoft’s Azure cloud infrastructure and AI technologies to process intercepted communications and power AI-driven targeting systems against Palestinians in Gaza and the West Bank—potentially facilitating war crimes and acts of genocide.

Microsoft’s actions are a positive step, and we urge its competitors Google and Amazon to, at the very least, do the same, rather than continuing to support and facilitate mass surveillance of Palestinians in Gaza and the West Bank.  

The Next Steps

But this must be the starting point, and not the end. Our joint letter therefore calls on Microsoft to provide clarity around:

  1. What further steps Microsoft will take to suspend its business with the Israeli military and other government bodies where there is evidence indicating that business is contributing to grave human rights abuses and international crimes.
  2. Whether Microsoft will commit to publishing the review findings in full, including the scope of the investigation, the specific entities and services under review, and measures Microsoft will take to address adverse human rights impacts related to its business with the Israeli military and other government bodies.
  3. What steps Microsoft has taken to ensure that its current formal review thoroughly investigates the use of its technologies by the Israeli authorities, in light of the fact that the same law firm carried out the previous review and concluded that there was no evidence of use of Microsoft’s Azure and AI technologies to target or harm people in Gaza.
  4. Whether Microsoft will conduct an additional human rights review, or incorporate a human rights lens to the current review.
  5. Whether Microsoft has applied any limited access restrictions to its AI technologies used by the IDF and Israeli government to commit genocide and other international crimes. 
  6. Whether Microsoft will evaluate the “high-impact and higher-risk uses” of its evolving AI technology deployed in conflict zones.
  7. How Microsoft is planning to provide effective remedy, including reparations, to Palestinians affected by any contributions by the company to violations of human rights by Israel.

Microsoft’s announcement of an internal review and the suspension of some of its services is long overdue and much needed in addressing its potential complicity in human rights abuses. But it must not end here, and Microsoft should not be the only major technology company taking such action.  

EFF, Access Now, Amnesty International, Human Rights Watch, Fight for the Future, and 7amleh provided a deadline of October 10 for Microsoft to respond to the questions outlined in the letter. However, Microsoft is expected to send its written response by the end of the month, and we will publish the response once received.

Read the full letter to Microsoft here.

Watch Now: Navigating Surveillance with EFF Members

Fri, 10/10/2025 - 5:32pm

Online surveillance is everywhere—and understanding how you’re being tracked, and how to fight back, is more important than ever. That’s why EFF partnered with Women In Security and Privacy (WISP) for our annual Global Members’ Speakeasy, where we tackled online behavioral tracking and the massive data broker industry that profits from your personal information. 

Our live panel featured Rory Mir (EFF Associate Director of Community Organizing), Lena Cohen (EFF Staff Technologist), Mitch Stoltz (EFF IP Litigation Director), and Yael Grauer (Program Manager at Consumer Reports). Together, they unpacked how we arrived at a point where a handful of major tech companies dictate so much of our digital rights, how these monopolies erode privacy, and what real-world consequences come from constant data collection—and most importantly, what you can do to fight back. 

Members also joined in for a lively Q&A, exploring practical steps to opt out of some of this data collection, discussing the efficacy of privacy laws like the California Consumer Privacy Act (CCPA), and sharing tools and tactics to reclaim control over their data. 

We're always excited to find new ways to connect with our supporters and spotlight the critical work that their donations make possible. And because we want everyone to learn from these conversations, you can now watch the full conversation on YouTube or the Internet Archive.

WATCH THE FULL DISCUSSION

EFF’s Global Member Speakeasy: You Are the Product 

Events like the annual Global Members’ Speakeasy are just one way we like to thank our members for powering EFF’s mission. When you become a member, you’re not only supporting our legal battles, research, and advocacy for digital freedom—you’re joining a global community of people who care deeply about defending privacy and free expression for everyone. 

Join EFF today, and you’ll receive invitations for future member events, quarterly insider updates on our most important work, and some conversation-starting EFF gear to help you spread the word about online freedom. 

A huge thank you to everyone who joined us and our partners at WISP for helping make this event happen. We’re already planning upcoming in-person and virtual events, and we can’t wait to see you there. 

EFF Austin: Organizing and Making a Difference in Central Texas

Fri, 10/10/2025 - 4:33pm

Austin, Texas is a major tech hub with a population that’s engaged in advocacy and paying attention. Since 1991, EFF-Austin, an independent nonprofit civil liberties organization, has been the proverbial beacon alerting those in central Texas to the possibilities and implications of modern technology. It is also an active member of the Electronic Frontier Alliance (EFA). On a recent visit to Texas, I got the chance to speak with Kevin Welch, President of EFF-Austin, about the organization, its work, and what lies ahead for them:

How did EFF-Austin get started, and can you share how it got its name?

EFF-Austin is concerned with emerging frontiers where technology meets society. We are a group of visionary technologists, legal professionals, academics, political activists, and concerned citizens who work to protect digital rights and educate the public about emerging technologies and their implications. Similar to our namesake, the national Electronic Frontier Foundation (EFF), “the dominion we defend is the vast wealth of digital information, innovation, and technology that resides online.” EFF-Austin was originally formed in 1991 with the intention that it would become the first chapter of the national Electronic Frontier Foundation. However, EFF decided not to become a chapters organization, and EFF-Austin became a separately incorporated, independent nonprofit organization focusing on cyber liberties, digital rights, and emerging technologies.

What's the mission of EFF-Austin and what do you promote?

EFF-Austin advocates for the establishment and protection of digital rights and the defense of the wealth of digital information, innovation, and technology. We promote the right of all citizens to communicate and share information without unreasonable constraint. We also advocate for the fundamental right to explore, tinker, create, and innovate along the frontier of emerging technologies.

EFF-Austin has been involved in a number of initiatives and causes over the past several years, including legislative advocacy. Can you share a few of them?

We were one of the earliest local organizations to call out the Austin City Council over its use of automated license plate readers (ALPRs). After several years of fighting, EFF-Austin was proud to join the No ALPRs coalition as a founding member, alongside over thirty local and state activist groups. Through our efforts, Austin decided not to renew its ALPR pilot project, becoming one of the only cities in America to reject ALPRs. Building on this success, the coalition is broadening its scope to call out other uses of surveillance in Austin, like proposed contracts for park surveillance from Liveview Technologies, as well as data privacy abuses more generally, such as a potential partnership with Valkyrie AI that would provide citizen data for model training and research purposes without consent or sufficient oversight and guardrails. In support of these initiatives, EFF-Austin also partnered with the Austin Technology Commission to propose much stricter oversight and transparency rules around how the city of Austin engages in contracts with third-party technology vendors.

EFF-Austin has also provided expert testimony on a number of major technology bills at the Texas Legislature that have since become law, including the Texas Data Privacy And Security Act (TDPSA) and the Texas Responsible AI Governance Act (TRAIGA).

How can someone local to central Texas get involved?

We conduct monthly meetups with a variety of speakers, usually on the second Tuesday of each month at 7:00pm at Capital Factory (701 Brazos St, Austin, TX 78701) in downtown Austin. These meetups range from technology and legal explainers to digital security trainings, and from digital arts profiles to shining a spotlight on surveillance. In addition, we have various one-off events, often in partnership with other local nonprofits and civic institutions, including our fellow EFA member Open Austin. We also have annual holiday parties and SXSW gatherings that are free and open to the public. We don't currently have memberships, so any and all are welcome.

While EFF-Austin events are popular and well-attended, and our impact on local technology policy is quite impressive for such a small nonprofit, we have no significant sustained funding beyond occasional outreach to our community. Any local nonprofits, activist organizations, academic initiatives, or technology companies who find themselves aligned with our cause and would like to fund our efforts are encouraged to reach out. We also always welcome the assistance of those who wish to volunteer their technical, organizational, or legal skills to our cause. In addition to emailing us at info@effaustin.org, follow us on Mastodon, Bluesky, Twitter, Facebook, Instagram, or Meetup, and visit us at our website at https://effaustin.org.

PERA Remains a Serious Threat to Efforts Against Bad Patents

Thu, 10/09/2025 - 4:03pm

Everything old is new again: a bill that would make bad patents easier to obtain and harder to challenge is being considered in the Senate Judiciary Committee. The Patent Eligibility Restoration Act (PERA) would reverse over a decade of progress in fighting patent trolls and making the patent system more balanced.

PERA would overturn long-standing court decisions that have helped keep some of the most problematic patents in check. This includes the Supreme Court’s Alice v. CLS Bank decision, which bars patents on abstract ideas. While Alice has not completely solved the problems of the patent system or patent trolling, it has led to the rejection of hundreds of low-quality software patents and, as a result, has allowed innovation and small businesses to grow.

Thanks to the Alice decision, courts have invalidated a rogue’s gallery of terrible software patents—such as patents on online photo contests, online bingo, upselling, matchmaking, and scavenger hunts. These patents didn’t describe real inventions—they merely applied old ideas to general-purpose computers. But PERA would wipe out the Alice framework and replace it with vague, hollow exceptions, taking us back to an era where patent trolls and large corporate patent-holders aggressively harassed software developers and small companies.

This bill, combined with recent changes that have restricted access to the Patent Trial and Appeal Board (PTAB), would create a perfect storm—giving patent trolls and major corporations with large patent portfolios free rein to squeeze out independent inventors and small businesses.

EFF is proud to join a letter, along with Engine, the Public Interest Patent Law Institute, Public Knowledge, and R Street, to the Senate Judiciary Committee opposing this poorly-timed and concerning bill. We urge the committee to instead focus on restoring the PTAB as the accessible, efficient check on patent quality that Congress intended.

EFF and Other Organizations: Keep Key Intelligence Positions Senate Confirmed

Wed, 10/08/2025 - 3:19pm

In a joint letter to the ranking members of the House and Senate intelligence committees, EFF has joined with 20 other organizations, including the ACLU, Brennan Center, CDT, Asian Americans Advancing Justice, and Demand Progress, to express opposition to a rule change that would seriously weaken accountability in the intelligence community. Specifically, under the proposed Senate Intelligence Authorization Act, S. 2342, the general counsels of the Central Intelligence Agency (CIA) and the Office of the Director of National Intelligence (ODNI) would no longer be subject to Senate confirmation.

You can read the entire letter here.

In theory, having the most important legal thinkers at these secretive agencies—the ones who presumably tell an agency whether something is legal or not—approved or rejected by the Senate gives elected officials the chance to vet candidates and their beliefs. If, for instance, a confirmation hearing uncovered that a proposed general counsel for the CIA thinks it is not only legal but morally justifiable for the agency to spy on US persons on US soil because of their political or religious beliefs, then the Senate would have the chance to reject that person. 

As the letter says, “The general counsels of the CIA and ODNI wield extraordinary influence, and they do so entirely in secret, shaping policies on surveillance, detention, interrogation, and other highly consequential national security matters. Moreover, they are the ones primarily responsible for determining the boundaries of what these agencies may lawfully do. The scope of this power and the fact that it occurs outside of public view is why Senate confirmation is so important.” 

It is for this reason that EFF and our ally organizations urge Congress to remove this provision from the Senate Intelligence Authorization Act.

How to File a Privacy Complaint in California

Tue, 10/07/2025 - 6:09pm

Privacy laws are only as strong as their enforcement. In California, the state’s privacy agency recently issued its largest-ever fine for violation of the state’s privacy law—and all because of a consumer complaint.

The state’s privacy law, the California Consumer Privacy Act, or CCPA, requires many companies to respect California customers' and job applicants' rights to know, delete, and correct the information that businesses collect about them, and to opt out of some types of sharing and use. It also requires companies to give notice of these rights, along with other information, to customers, job applicants, and others. (Bonus tip: Have a complaint about something else, such as a data breach? Go to the CA Attorney General.)

If you’re a Californian and think a business isn’t obeying the law, then the best thing to do is tell someone who can do something about it. How? It’s easy. In fewer than a dozen questions, you can share enough information to get the agency started.

Start With the Basics

First, head to the California Privacy Protection Agency’s website at cppa.ca.gov. On the front page, you’ll see an option to “File a Complaint.” Click on that option.

That button takes you to the online complaint form. You can also print out the agency’s paper complaint form here.

The complaint form starts, fittingly, by explaining the agency’s own privacy practices. Then it gets down to business by asking for information about your situation.

The first question offers a list of rights people have under the CCPA, such as the right to delete or the right to correct inaccurate personal information. So, for example, if you’ve asked ABC Company to delete your information, but they have refused, you’d select “Right to Delete.” This helps the agency categorize your complaint and tie it directly to the requirements in the law. The form then asks for the names of businesses, contractors, or people you want to report.

It also asks whether you’re a California resident. If you’re unsure, because you split residency or for other reasons, there is an “Unsure” option.

Adding the Details

From there, the form asks for more detailed information about what’s happened. There is a character limit on this question, so you’ll have to choose your words carefully. If you can, check out the agency’s FAQ on how to write a successful complaint before you submit the form. This will help you be specific and tell the agency what they need to hear to act on your complaint.

In the next question, include information about any proof you have supporting your complaint. So, for example, you could tell the agency you have your email asking ABC Company to delete your information, and also a screenshot of proof that they haven’t erased it. Or, say “I spoke to a person on the phone on this date.” This should just be a list of information you have, rather than a place to paste in emails or attach images.

The form will also ask if you’ve directly contacted the business about your complaint. You can just answer yes or no to this question. If it’s an issue such as a company not posting a privacy notice, or something similar, it may not have made sense to contact them directly. But if you made a deletion request, you probably have contacted them about it.

Anonymous or Not?

Finally, the complaint form will ask you to make either an “unsworn complaint” or a “sworn complaint.” This choice affects how you’ll be involved in the process going forward. You can file an anonymous unsworn complaint. But that will mean the agency can’t contact you about the issue in the future, since they don’t have any of your information.

For a sworn complaint, you have to provide some contact information and confirm that what you’re saying is true and that you’d swear to it in court.

Just because you submit contact information, that doesn’t mean the agency will contact you. Investigations are usually confidential, until there’s something like a settlement to announce. But we’ve seen that consumer complaints can be the spark for an investigation. It’s important for all of us to speak up, because it really does make a difference.
