EFF: Updates
👎 California's Terrible, No Good, Very Bad Social Media Ban | EFFector 38.9
We'd all like the internet to be a better place—for kids and adults alike. But in the name of online safety, governments around the world are racing to impose a dangerous new system of control. Are age gates the silver bullet to the internet's problems they're being promoted as? Or are we being sold a bill of goods? We're answering this question and more in our latest EFFector newsletter.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue covers an attack on VPNs in Utah, a livestream on how to disenshittify the internet, and California's proposed social media ban that could set a dangerous new precedent for online censorship.
Prefer to listen in? EFFector is now available on all major podcast platforms. This time, we're having a conversation with EFF Legislative Analyst Molly Buckley on why social media bans can't sidestep the U.S. Constitution. You can find the episode and subscribe on your podcast platform of choice.
Want to help push back on these misguided regulations? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight for privacy and free speech online when you support EFF today!
The SECURE Data Act is Not a Serious Piece of Privacy Legislation
The federal SECURE Data Act is not a serious consumer privacy bill, and its provisions—if enacted—would be a retreat from already insufficient state protections.
Republicans on the House Energy and Commerce Committee released a draft of the bill late last month without bipartisan support. The bill is weaker than congressional proposals in prior years, as well as most of the 21 state consumer privacy laws already on the books.
The bill could wipe out hundreds of state privacy protections.
Most troubling for EFF: the bill would preempt dozens, if not hundreds, of state laws that regulate related topics, and it would not allow consumers to sue to protect their own rights (commonly called a private right of action). And it comes nowhere close to banning online behavioral advertising—a practice that fuels technology companies’ ever-increasing hunt for personal data.
The bill also suffers from many other flaws including weak opt-out defaults, inadequate data minimization requirements, and large definitional loopholes for companies.
Key Provisions
The bill would give consumers some rights to take action to control their personal data—like access, correction, deletion, and limited portability. These rights have become standard in all data privacy proposals in recent years.
The bill would also require companies to obtain your consent before processing your sensitive data, or using any of your personal data for a previously undisclosed purpose. Absent your consent, a company couldn’t do these things.
Further, the bill would allow you to opt out of (1) targeted third-party advertising, (2) the sale of your personal data, and (3) profiling of you that has a legal, healthcare, housing, or employment effect. Unfortunately, a company could keep doing these invasive things to you, unless you opted out.
The bill would also require data brokers that make at least 50 percent of their profits from the sale of personal data to register in a public database maintained by the Federal Trade Commission (FTC).
Preemption of Too Many State Laws
Federal privacy laws should allow states to build ever stronger rights on top of the federal floor. Many federal privacy laws allow this, including the Health Insurance Portability and Accountability Act, the Video Privacy Protection Act, and the Electronic Communications Privacy Act.
The SECURE Data Act would not do that. Instead, it would wipe out dozens, if not hundreds, of existing state privacy protections. Section 15 of the bill would preempt any “law, rule, regulation, requirement, standard, or other provision [that] relates to the provisions of this Act.” This would kill the 21 state consumer privacy laws passed in the past few years. These state laws aren’t strong enough, but they are still better than this federal proposal. For example, California maintains a data broker deletion tool and requires companies to comply with automatic opt-out signals—including one that is built into EFF’s Privacy Badger.
Because the SECURE Data Act has provisions that relate to data privacy and security, it could preempt all 50 state data breach laws and many others. It could also preempt state laws related to specific pieces of sensitive data, like bans on the sale of biometric or location information. Some states like California have constitutional provisions that protect an individual’s right to privacy, which can be enforced against companies. That constitutional provision, as well as state privacy torts, could also be in danger if this bill passed.
No Private Enforcement, A New Cure Period, and Vague Security Powers
Strong consumer privacy laws should allow consumers to take companies to court to defend their own rights. This is essential because regulators do not have the resources to catch every violation, and federal consumer enforcement agencies have been gutted during the current administration.
The SECURE Data Act does not have a private right of action. The FTC, along with state attorneys general, has primary enforcement authority. The law also gives companies 45 days after they are caught to “cure” any violation without penalty.
Moreover, Section 8 of the bill creates a vaguely defined self-regulatory scheme in which companies can apply to be audited by an “independent organization” that will apply a “code of conduct.” Following this code of conduct would give companies a presumption that they are complying with the law. This provision is an implicit acknowledgement that the bill does not provide regulators with any new resources to enforce new protections.
Section 9 of the bill would give the Secretary of Commerce broad power to “take any action necessary and appropriate to support the international flow of personal data,” including assessing “security interests of the United States.” The scope of this amorphous provision is unclear, but it likely does not belong in a consumer protection bill.
Weak Privacy Defaults
Your online privacy should not depend on whether you have the time, patience, and knowledge to navigate a website and turn off invasive tracking. Good privacy laws build in data minimization requirements—meaning there should be a default standard that prevents companies from processing your data for purposes that are not needed to provide you with the service you asked for.
The SECURE Data Act puts the burden on you to opt out of invasive company practices, like targeted third-party advertising, the sale of your personal data, and profiling. The bill at least requires companies to obtain your consent before processing your sensitive data (like selling your precise location). These consent requirements, however, are often an invitation for companies to trick you into clicking a button to give away your rights in hard-to-read policies. Indeed, few people would knowingly agree to let a company sell their personal data to a broker who turns around and sells it to the government.
Section 3 of the bill uses the term “data minimization,” but it is done in name only. The provision does not limit a company’s processing of data to only what is necessary to provide the customer with the good or service they asked for. Instead, the provision limits processing of data to only what a company “disclosed to the customer”—meaning if it is in the confusing privacy policy that nobody reads, it is okay.
And the bill would not even allow you to restrict certain uses of your data. As companies seek more data for AI systems, many internet users do not want their private personal data to be used to train those models. However, the bill makes clear that “nothing in this Act may be construed to restrict” a company from collecting, using, or retaining your data to “develop” or “improve” a new technology.
Other Flawed Definitions and Loopholes
The bill has numerous loopholes that technology companies would exploit if the bill were to become law. Below is just a sampling:
- Government contractors: Under Section 13(b)(2), government contractors are exempt from the bill, which could be wrongly interpreted to exempt certain data brokers from sale restrictions when those sales are made to the government. This type of exemption could benefit surveillance companies like Clearview AI, which previously argued it was exempt from Illinois’ strict biometric law using a similar contractor exception. This is likely not the authors’ intention, since the definition of sale includes those made “to a government entity.”
- Sale definition: The definition in Section 16(28) is too narrow. A sale should mean any exchange for monetary “or other valuable” consideration, as in some other privacy laws.
- Biometric information definition: The definition in Section 16(4) excludes data generated from a photo or video, and it excludes face scans not meant to “identify a specific individual.” This could be wrongly interpreted to allow biometric identification from security camera footage, or biometric use for sentiment or demographic analysis.
- Personal data definition: The definition in Section 16(21) exempts “de-identified data” from the definition of personal data, which could allow companies to do anything with de-identified data because that data is not protected by the law. The problem with “de-identified” data is that many times it is not: such data can often be re-identified.
- Deletion requests: With regard to data that a company obtained from a third-party, Section 2(d)(5) would treat a consumer’s deletion request merely as an opt-out request. And even if a customer requested deletion, a company might be able to retain the data for research purposes under section 11(a)(9)(A).
- Profiling definition: Under the definition in Section 16(25), companies could profile so long as the profiling is not “solely automated.” The flimsiest human review would exempt highly automated profiling.
Congress is long overdue to enact a strong comprehensive consumer data privacy law, and we have sketched what it should look like. But the SECURE Data Act is woefully inadequate. In fact, it would cause even more corporate surveillance of our personal information, by wiping out state laws that are more protective than this federal bill. Even worse, this bill would block state legislatures from protecting their residents from the privacy threats of tomorrow that are unforeseeable today.
EFF and 18 Organizations Urge UK Policymakers to Prioritize Addressing the Roots of Online Harm
EFF joins 18 organizations in writing a letter to UK policymakers urging them to address the root causes of online harm—rather than undermining the open web through blunt restrictions.
The coalition, which includes Mozilla, Tor Project, and Open Rights Group, warns that proposed measures following the passage of the Children’s Wellbeing and Schools Bill risk fundamentally reshaping the internet in harmful ways. Chief among these proposals are sweeping age-gating requirements and access restrictions that would apply not only to young people, but effectively to all users.
While framed as efforts to protect children online, these policies rely heavily on age assurance technologies that are inaccurate, privacy-invasive, or both. As the letter notes, mandating such systems across a wide range of services—from social media and video games to VPNs and even basic websites—would force users to verify their identity simply to access the web. This creates serious risks, including expanded surveillance, data breaches, and the erosion of anonymity.
Beyond privacy concerns, the signatories argue that these measures threaten the core architecture of the open internet. Age-gating at scale could fragment the web into a patchwork of restricted jurisdictions, limit access to information, and entrench the dominance of powerful gatekeepers like app stores and platform ecosystems. In doing so, policymakers risk weakening the very qualities—interoperability, accessibility, and openness—that have made the internet a global public resource.
The letter also emphasizes what’s missing from the current policy approach: meaningful efforts to address the underlying drivers of online harm. Many digital platforms are designed to maximize engagement and profit through pervasive data collection and targeted advertising, often at the expense of user safety and autonomy. Rather than imposing access bans, the coalition calls on UK policymakers to hold companies accountable for these systemic practices and to prioritize user rights by design.
Importantly, the signatories highlight that the internet remains a vital space for young people, offering access to information, support networks, and opportunities for expression that may not exist offline. Policies that restrict access risk cutting off these lifelines without meaningfully reducing harm.
The message is clear: protecting users online requires more than heavy-handed restrictions. It demands thoughtful, rights-respecting policies that tackle the business models and design choices driving harm, while preserving the open, global nature of the web.
Shut Down Turnkey Totalitarianism
William Binney, the NSA surveillance architect-turned-whistleblower, called it the "turnkey totalitarian state." Whoever sits in power gains access to a boundless surveillance empire that scorns privacy and crushes dissent. Politicians will come and go, but you can help us claw the tools of oppression out of government hands.
Become a Monthly Sustaining Donor
We must stand strong to uphold your privacy and free expression as democratic principles. With members around the world, EFF is empowered to use its trusted voice and formidable advocacy to protect your rights online. Whether giving monthly or one-time donations, members have helped EFF:
- Sue to stop warrantless searches of Automated License Plate Reader (ALPR) records, which reveal millions of drivers’ private habits, movements, and associations.
- Launch Rayhunter, an open source tool that empowers you to help search out cell-site simulators capable of tracking the movements of protestors, journalists, and more.
- Help journalists see through the spin of "copaganda" by breaking down, in our Selling Safety report, how policing technology companies often market their tools with misleading claims.
Right now, the U.S. Congress is on the verge of renewing the international mass spying program known as Section 702, affecting millions. EFF is rallying to cut through the politics and give ordinary people a chance to stop this oppressive surveillance. It’s only possible with help from supporters like you, so join EFF today.
The New EFF Member Gear
Get this year’s new member t-shirt when you join EFF. Aptly titled "Claw Back," the design features an orange boy swatting at the street-level surveillance equipment multiplying in our communities. You might empathize with him, but there’s a better way. Let’s end the law enforcement contracts, harmful practices, and twisted logic that enable mass spying in the first place.
You can also get a brand-new set of eleven soft and supple polyglot puffy stickers as a token of thanks. Whether you're a kid or a kid at heart, these nostalgic stickers are perfect for digital devices, lunchboxes, and notebooks alike. Our little Ghostie protects privacy in six languages: Arabic, English, Japanese, Persian, Russian, and Spanish.
And for a limited time, get a Privacy Badger Crewneck sweater to help you browse the web with confidence. The embroidered Privacy Badger mascot appears above characters that say “privacy” because human rights are universal. Millions of people around the world use Privacy Badger, EFF's free tool that devours devious scripts and cookies that twist your web browsing into a commodity for Big Tech, advertisers, and scammers.
Privacy is a human right because it gives you a fundamental measure of security and freedom. We owe it to ourselves to fight the mass surveillance used to control and intimidate people. Let’s do this. Join EFF today with a monthly donation or one-time donation and help claw back your privacy.
____________________
EFF is a member-supported U.S. 501(c)(3) organization. We've received top ratings from the nonprofit watchdog Charity Navigator since 2013! Your donation is tax-deductible as allowed by law.
EFF Submission to UK Consultation on Digital ID
Last September, the United Kingdom’s Prime Minister Keir Starmer announced plans to introduce a new digital ID scheme in the country. The scheme aims to make it easier for people to prove their identities by creating a virtual ID on personal devices with information like name, date of birth, nationality or residency status, and a photo to verify their right to live and work in the country.
Since then, EFF has joined UK-based civil society organizations in urging the government to reconsider this proposal. In a joint letter sent in December, ahead of Parliament’s debate on a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID, EFF and 12 other civil society organizations urged MPs to reject the Labour government’s proposal.
Nevertheless, politicians have continued to explore ways to build out a digital ID system in the country, often fluctuating between different ideas and conceptualizations for such a scheme. In its search for clarity, the government launched a consultation, ‘Making public services work for you with your digital identity,’ seeking views on a proposed national digital ID system in the UK.
EFF submitted comments to this consultation, focusing on six interconnected issues:
- Mission creep
- Infringements on privacy rights
- Serious security risks
- Reliance on inaccurate and unproven technologies
- Discrimination and exclusion
- The deepening of entrenched power imbalances between the state and the public.
Even the strongest recommended safeguards cannot resolve these issues, nor the core problem: a mandatory digital ID scheme shifts power dramatically away from individuals and toward the state. Such schemes are pursued as a technological solution to offline problems, but they instead allow the state to determine what you can access, not just verify who you are, by functioning as a key to opening—or closing—doors to essential services and experiences.
No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. It is essential that the UK government listen to people in the country and say no to digital ID.
Read our submission in full here.
Getting Digital Fairness Right: EFF's Recommendations for the EU's Digital Fairness Act
The next few years will be decisive for EU digital policymaking. With major laws like the Digital Services Act, the Digital Markets Act, and the AI Act now in place, the EU is entering an enforcement era that will show whether these rules are rights-respecting or drift toward overreach and corporate control. With the proposed Digital Fairness Act (DFA), the Commission is now turning to increasingly visible risks for users, such as dark patterns and exploitative personalization. Its “Digital Fairness Fitness Check” makes clear that existing consumer rules need updating to reflect how digital markets operate today.
But not all proposed solutions point in the right direction. Regulators are already flirting with measures that rely on expanded surveillance, such as age verification mandates—surface-level fixes that risk undermining fundamental rights while offering little more than a false sense of protection.
For EFF, digital fairness means addressing the root causes of harm, not requiring platforms to exert more control over their users. It means safeguarding privacy, freedom of expression, and the rights of users and developers.
If the DFA is to make a real difference, it must tackle structural imbalances. Lawmakers should focus on two interlocking principles. First, prioritize privacy. Reforms should address harms driven by surveillance-based business models, alongside deceptive design practices that impair informed choices. Second, strengthen user sovereignty, which is also a necessary precondition for European digital sovereignty more broadly. Strengthening user sovereignty means taking measures that address user lock-in, coercive contract terms, and manipulative defaults that limit users’ ability to freely choose how they use digital products and services.
Together, these principles would support the EU’s objectives of consistent consumer protection, fair markets, and a more coherent legal framework. If implemented properly, the EU could address power imbalances and build trust in Europe’s digital economy.
Ban Dark Patterns
Dark patterns are practices that impair users’ ability to make informed and autonomous decisions. Many companies deploy these tactics through interface design to steer choices and influence behavior. Their impact goes beyond poor consumer decisions. Dark patterns push users to share personal data they would not otherwise disclose and undermine autonomy by making alternatives harder to access.
The DFA should address this by clearly prohibiting misleading interfaces that distort user choice in commercial contexts. While the Digital Services Act introduced a definition, it only partially bans such practices and leaves gaps across existing consumer law rules. The DFA should close these gaps by, at the very least, introducing explicit prohibitions and clearer enforcement rules, without resorting to design mandates.
Tackle Commercial Surveillance
At the core of digital unfairness lies the pervasive collection and use of personal data. Surveillance and profiling drive many of the harms regulators are trying to address, from dark patterns to exploitative personalization. The DFA should tackle these incentives directly by reducing reliance on surveillance-based business models. These practices are fundamentally incompatible with privacy and fairness, and they distort digital markets by rewarding data exploitation rather than quality of service. At a minimum, the DFA should address unfair profiling and surveillance advertising by strengthening privacy rights and banning pay-for-privacy schemes. Users should not have to trade their data or pay extra to avoid being tracked. Accordingly, the DFA should support the recognition of automated privacy signals by web browsers and mobile operating systems, which give users a better way to reject tracking and exercise their rights. Practices that override such signals through banners or interface design should be considered unfair.
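As a concrete illustration of what honoring such a signal could look like, here is a minimal sketch, assuming a Python Flask app purely for illustration, that reads the Global Privacy Control (GPC) header and treats it as a binding opt-out. GPC is one such browser signal (tools like Privacy Badger send it); the response shape here is invented for the example.

```python
# A minimal sketch of honoring an automated browser privacy signal.
# Supporting browsers and extensions send the "Sec-GPC: 1" HTTP header;
# Flask and the JSON response shape are illustrative choices, not a spec.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Read the Global Privacy Control signal from the request headers.
    opted_out = request.headers.get("Sec-GPC") == "1"
    # A fair default: treat the signal as a binding opt-out of tracking
    # and data sale, with no banner asking the user to reconsider.
    return {"tracking_enabled": not opted_out}
```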
Addressing surveillance and profiling also protects children, since many online harms are tied to the collection and exploitation of their data. Systems that serve ads or curate content often rely on intrusive profiling practices, raising concerns about privacy and fairness, particularly when applied to minors. Rather than turning to invasive age verification, the focus should be on limiting data use by default.
Strengthen User Sovereignty
There is a major gap in how EU law addresses user autonomy in digital markets: Many digital products and services still restrict what people can do with what they pay for through opaque or one-sided licensing terms, technical protection measures, and remote controls. These mechanisms increasingly limit lawful use, modification, or access after purchase, allowing providers to revoke access, disable functionalities, or degrade performance over time. In practice, this turns ownership into a conditional rental.
Consumers must be able to use and resell digital goods without hidden limitations and with clear licensing terms. Too often, technical and contractual lock-ins, including remote lockouts and unilateral restrictions on functionality, erode that control. Recent legal reforms show that progress is possible. Rules such as those under the Digital Markets Act have begun to curb technical and contractual barriers and promote user choice. However, many restrictions persist.
The DFA must address these practices by targeting unfair post-sale restrictions and strengthening users’ ability to control and switch services. This means setting clear limits on unfair terms and misleading practices, alongside robust transparency on how digital services function over time. It should also strengthen interoperability and support user control, allowing people to access third-party applications and to let trusted applications act on their behalf, reducing lock-in and expanding meaningful choice in how users interact with digital services.
A Bridge to Somewhere: How to Link Your Mastodon, Bluesky, or Other Federated Accounts
One of the central promises of open social media services is interoperability—the idea that wherever you personally decide to post, others shouldn’t have to be there just to follow what you have to say. Think of it like a radio broadcast: you want to reach people and don't care where they are or what device they're using. For example, in theory, a Bluesky user can follow someone on Mastodon or Threads without having to create a Mastodon or Threads account. But these systems are still a work in progress, and you might need to tweak a few things to get it working correctly.
Right now, broadcasting your message across social platforms can be a funky experience at best, deliberately broken up by oligopolists. The idea of the open web was baked into the internet via open standards like HTML and RSS, which made it easy for anyone to visit a website or follow most blogs. The fact that social media isn’t similarly open reflects an intentional choice to privatize the internet.
Bridging and managing your posts so they’re viewable outside a singular source is part of the broader philosophy of POSSE, short for Post Own Site Syndicate Elsewhere (sometimes it’s Post Own Site, Share Everywhere). Instead of managing several accounts across different services, you post once to one primary site (which might be your personal website, or just one social media account), then set it up so it automatically publishes everywhere else. This way, it doesn’t matter where you or your audience is, and they're not walled off by account registration requirements.
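For a sense of how little glue code POSSE can take, here is a hedged sketch of posting once and syndicating to both a Mastodon account and a Bluesky account through their public APIs. The instance URL, handle, and credentials are placeholders; real setups usually rely on a tool or plugin rather than a hand-rolled script.

```python
# A minimal POSSE-style sketch: write the post once, then syndicate it to a
# Mastodon account and a Bluesky account via their public APIs. The instance
# URL, handle, and credentials below are placeholders you'd supply yourself.
from datetime import datetime, timezone
import requests

POST_TEXT = "Posted once, syndicated everywhere."

# Mastodon: one authenticated call publishes a status.
requests.post(
    "https://mastodon.example/api/v1/statuses",
    headers={"Authorization": "Bearer YOUR_MASTODON_TOKEN"},
    data={"status": POST_TEXT},
)

# Bluesky (ATProto): log in with an app password, then create a post record.
session = requests.post(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    json={"identifier": "you.bsky.social", "password": "YOUR_APP_PASSWORD"},
).json()
requests.post(
    "https://bsky.social/xrpc/com.atproto.repo.createRecord",
    headers={"Authorization": f"Bearer {session['accessJwt']}"},
    json={
        "repo": session["did"],
        "collection": "app.bsky.feed.post",
        "record": {
            "$type": "app.bsky.feed.post",
            "text": POST_TEXT,
            "createdAt": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
        },
    },
)
```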
We’ll come back around to POSSE at the end of this post, but for now, let’s assume you just want your current main open social media account to actually have a chance to reach the most people it can.
Why Post to the Open Social Web
Because the Fediverse and ATmosphere use different protocols, we need to use a third-party tool so accounts can communicate with each other. For that, we’ll need a bridge. As the name suggests, a bridge can connect one social media account to another, so you can post once and spread your message across several places. This isn’t just some niche concept: major blogging platforms like WordPress and Ghost integrate posting to the Fediverse.
Bridging is an important facet of POSSE, but also something more people should consider, even if they don’t run their own websites. For example, if you don’t want to create a Threads account just to interact with your one friend who uses that platform, you shouldn’t have to. The good news is, you don’t. There are several bridging services, like Fedisky, RSS Parrot, and pinhole, but Bridgy Fed is currently the simplest to use, so we’ll focus on that.
How to Post to Bluesky from Mastodon
From your Mastodon account (or other Fediverse account; for simplicity’s sake we’ll stick to Mastodon throughout), search for the username @bsky.brid.gy@bsky.brid.gy and follow that account. Once you do, the account will follow you back, you’ll be bridged, and people can find you from their Bluesky account. You should also get a DM with your bridged username. If you don’t see the @bsky.brid.gy@bsky.brid.gy user when you search, your Mastodon instance may be blocking the bridging tool.
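If you prefer to script it, the same follow can be done through the Mastodon API. This is a sketch with placeholder values, not an official Bridgy Fed tool; it assumes an access token that is allowed to follow accounts.

```python
# A sketch of the "follow the bridge" step done through the Mastodon API
# instead of the web interface. The instance URL and token are placeholders.
import requests

INSTANCE = "https://mastodon.example"
HEADERS = {"Authorization": "Bearer YOUR_MASTODON_TOKEN"}

# Resolve the Bridgy Fed account (this also works for remote accounts your
# instance hasn't seen before, as long as it doesn't block the bridge).
results = requests.get(
    f"{INSTANCE}/api/v2/search",
    headers=HEADERS,
    params={"q": "@bsky.brid.gy@bsky.brid.gy", "type": "accounts", "resolve": "true"},
).json()

bridge = results["accounts"][0]
# Following the bridge account is what triggers bridging; expect a DM back
# with your new bridged username.
requests.post(f"{INSTANCE}/api/v1/accounts/{bridge['id']}/follow", headers=HEADERS)
```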
Threads users who have enabled Fediverse sharing will be able to find you with your standard Mastodon username (i.e., @your_user_name@mastodon.social), but if they haven’t enabled sharing, they will not be able to see your account. While this search is still a beta feature, you might find it easier to share the full URL, which would look like this: https://www.threads.net/fediverse_profile/@your_user_name@mastodon.social
People on Bluesky can find you by: searching for your Mastodon username or, if that doesn’t work, for @your_user_name.instance.ap.brid.gy. For example, if your username is @eff@mastodon.social, it would appear as @eff.mastodon.social.ap.brid.gy.
An example of a Mastodon username from the Bluesky web client.
How to Post to Mastodon and Bluesky from Threads
Yes, Threads is technically on the Fediverse, and you can bridge your Threads account to Mastodon or Bluesky (unless you’re in Europe, where the feature is disabled), but it’s a different process than on Bluesky and Mastodon.
- Open Settings > Account > Fediverse Sharing and set the option to “On.” This will make your posts visible to Mastodon (or other Fediverse) users, and vice versa.
- Once Fediverse sharing is enabled, you’ll likely need to wait a week, then you can bridge to Bluesky. Search for and follow the @bsky.brid.gy@bsky.brid.gy account (it may take some digging to find it, but if that doesn’t work you can try visiting the profile page directly).
People on Mastodon (or other Fediverse accounts) and Bluesky can find you by: Mastodon users can find you at @your_threads_username@threads.net, while Bluesky users will find you at @your_threads_username.threads.net.ap.brid.gy (seriously, that will be the username). Note that some Mastodon instances may block Threads users entirely.
An example of a Threads username from the Mastodon web client.
An example of a Threads username from the Bluesky web client.
How to Post to Mastodon and Threads from Bluesky
From your Bluesky (or other ATProto) account, search for the username @ap.brid.gy and follow that account. Once you do, the account will follow you back and you’ll be bridged, so people can follow you from Mastodon or other Fediverse accounts. You should also get a DM with your bridged username.
People on Mastodon (or other Fediverse accounts) and Threads can find you by: Your username will appear as @your_bluesky_username@bsky.brid.gy. For example, if your Bluesky username is @eff@bsky.social, it would appear as @eff.bsky.social@bsky.brid.gy.
An example of a Bluesky username from the Mastodon web client.
How to Post Everywhere from Your Own Website
You can bridge more than social media accounts. If you have your own website, you can bridge that too (as long as it supports microformats and webmention, or an Atom or RSS feed; if you have a blog, there’s a good chance you’re already good to go). When you do so, the bridged account will either post the full text (or image) of whatever you post to your personal site, or a link to that content, depending on how your website is set up. You’ll also probably want to log into your Bridgy Fed user page so you can manage the account.
Where people can find your bridged account: Usually, a user can just search for your website’s URL on their decentralized social network of choice, or enter it on the Bridgy Fed page. But if that doesn’t work, they can try @yourdomain.com@web.brid.gy from Mastodon or @yourdomain.com.web.brid.gy from Bluesky.
An example of a bridged website username in the Mastodon web client.
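If you want a quick way to check whether your site already exposes what the section above describes (microformats, a webmention endpoint, or an Atom/RSS feed), here is a rough Python self-check. It only greps your homepage HTML for telltale markup, so treat it as a hint rather than a validator; the domain is a placeholder.

```python
# A rough self-check for the site features mentioned above: microformats2
# markup, a webmention endpoint, or an Atom/RSS feed link. Real validators
# are more thorough; this only scans the homepage HTML for telltale markup.
import re
import requests

html = requests.get("https://yourdomain.com").text

checks = {
    "microformats (h-card/h-entry)": bool(re.search(r'class="[^"]*\bh-(card|entry)\b', html)),
    "webmention endpoint": bool(re.search(r'rel="?webmention"?', html)),
    "Atom feed": 'type="application/atom+xml"' in html,
    "RSS feed": 'type="application/rss+xml"' in html,
}

for feature, found in checks.items():
    print(f"{feature}: {'found' if found else 'not found'}")
```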
How Your Account Username Looks on Each Platform
- A Mastodon account @user@instance appears on Bluesky as @user.instance.ap.brid.gy.
- A Threads account appears on Mastodon as @user@threads.net, and on Bluesky as @user.threads.net.ap.brid.gy.
- A Bluesky account @user.bsky.social appears on Mastodon as @user.bsky.social@bsky.brid.gy.
- A bridged website yourdomain.com appears on Mastodon as @yourdomain.com@web.brid.gy, and on Bluesky as @yourdomain.com.web.brid.gy.

You’re Bound to Run Into Some Quirks
- Sometimes messages take a little while to cross over between networks, and sometimes they don't cross over at all.
- You can’t log into a bridged account like a regular account, but Bridgy Fed does provide some tools to see incoming notifications and recent activity in case they’re not coming through properly.
- ActivityPub and ATProto don’t have the same feature set, so you will have certain capabilities for one account you might not have in another. For example, you can edit posts on Mastodon, but not on Bluesky. If you edit a post that’s bridged from Mastodon to Bluesky, the Bluesky post will not be updated.
- Replies can sometimes get lost, especially if the people replying to you don’t have sharing turned on.
- Ownership of accounts can get weird. For example, if you post to your own website and use a tool like WordPress or Ghost for federation (more info below), you don’t necessarily get access to a “normal” social media account, with a standard login and password.
- And more! This is still a work in progress that has some technical quirks, but it’s improving all the time, and it’s best to keep telling yourself that troubleshooting is part of the fun.
As mentioned up top, there’s a lot more you can do, and an increasing number of tools are making this process simpler. Bridgy Fed is one way to post to more places from a single account, but it’s far from the only way to do so. Here are just a few examples.
- Micro.blog is a paid service where you can blog from your own domain name, then post automatically to Mastodon, Bluesky, Threads, Tumblr, Nostr, LinkedIn, Medium, Pixelfed, and Flickr.
- Ghost is a blogging and newsletter platform that offers direct integration with the Fediverse, as well as support for Bluesky. WordPress offers the option to join the Fediverse through a community plugin. Other newsletter platforms, like Buttondown, also have plans for federation.
- Surf.social is a landing page and social media utility where you can show off all your various accounts (federated or not). From the reader’s point of view, you can follow one publication’s numerous types of posts in one place. For example, 404 Media’s Surf.social feed includes its YouTube feed, podcast feed, and its journalists’ social media posts.
- If you think these new handles are a bit ugly, you can use a custom domain for your Bluesky or Fediverse account from your own website (see the sketch after this list).
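For the curious, here is a sketch of how Bluesky verifies a custom-domain handle under the hood: your domain publishes your account's DID, either in a DNS TXT record at _atproto.<domain> or at the /.well-known/atproto-did path. The domain below is a placeholder, and the check assumes the third-party dnspython package.

```python
# A sketch of Bluesky custom-domain handle verification: the domain
# publishes your account's DID in a DNS TXT record at _atproto.<domain>,
# or as plain text at /.well-known/atproto-did.
import dns.resolver  # pip install dnspython
import requests

DOMAIN = "yourdomain.com"  # placeholder

try:
    answers = dns.resolver.resolve(f"_atproto.{DOMAIN}", "TXT")
    records = [b"".join(r.strings).decode() for r in answers]
    print("DNS TXT:", [r for r in records if r.startswith("did=")])
except dns.resolver.NXDOMAIN:
    # Fall back to the well-known HTTPS method.
    resp = requests.get(f"https://{DOMAIN}/.well-known/atproto-did", timeout=10)
    print("Well-known DID:", resp.text.strip())
```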
Of course, there are plenty of other tools, blogging platforms, and other utilities out there to help facilitate posting and bridging accounts, with new ones coming along every day.
With proper support, time, and effort, eventually we will all be able to seamlessly interact across platforms, take our follows and followers to other services when a platform no longer suits our needs, and interact with a variety of web content regardless of what platform hosts it. Until then, we still need to do some DIY work, support the services we want to succeed, and push for more platforms and services to support federated protocols.
Utah’s New Law Targeting VPNs Goes Into Effect Next Week
For the last couple of years, we’ve watched the same predictable cycle play out across the globe: a state (or country) passes a clunky age-verification mandate, and, without fail, Virtual Private Network (VPN) usage surges as residents scramble to maintain their privacy and anonymity. We've seen this everywhere—from states like Florida, Missouri, Texas, and Utah, to countries like the United Kingdom, Australia, and Indonesia.
Instead of realizing that mass surveillance and age gates aren't exactly crowd favorites, Utah lawmakers have decided that VPNs themselves are the real issue.
Next week, on May 6, 2026, Utah will become, to EFF’s knowledge, the first state in the nation to target the use of VPNs to avoid legally mandated age-verification gates. While advocates in states like Wisconsin successfully forced the removal of similar provisions due to constitutional and technical concerns, Utah is proceeding with a mandate that threatens to significantly undermine digital privacy rights.
What the Bill Does
Formally known as the “Online Age Verification Amendments,” Senate Bill 73 (SB 73) was signed by Governor Spencer Cox on March 19, 2026. While the majority of the bill consists of provisions related to a 2% tax on revenues from online adult content that is set to take effect in October, one of the more immediate concerns for EFF is the section regulating VPN access, which goes into effect this coming Wednesday.
The VPN Provisions
The new law explicitly addresses VPN use in Section 14, which amends Section 78B-3-1002 of existing Utah statutes in two primary ways:
- Regulation based on physical location: Under the law, an individual is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN, proxy server, or other means to disguise their geographic location.
- Ban on sharing VPN instructions: Commercial entities that host "a substantial portion of material harmful to minors" are now prohibited from facilitating or encouraging the use of a VPN to bypass age checks. This includes providing instructions on how to use a VPN or providing the means to circumvent geofencing.
By holding companies liable for verifying the age of anyone physically in Utah, even those using a VPN, the law creates a massive "liability trap." Just like we argued in the case of the Wisconsin bill, if a website cannot reliably detect a VPN user's true location and the law requires it to do so for all users in a particular state, then the legal risk could push the site to either ban all known VPN IPs, or to mandate age verification for every visitor globally. This would subject millions of users to invasive identity checks or blocks to their VPN use, regardless of where they actually live.
"Don't Ask, Don't Tell"In practice, SB 73 is different from the Wisconsin proposal in that it stops short of a total VPN ban. Instead, it discourages using VPNs by imposing the liability described above and by muzzling the websites themselves from sharing information about VPNs. This raises significant First Amendment concerns, as it prevents platforms from providing basic, truthful information about a lawful privacy tool to their users.
Unlike previous drafts seen in other states, SB 73 doesn't explicitly ban the use of a VPN. Under a "don't ask, don't tell" style of enforcement, websites likely only have an obligation to ask for proof of age if they actually learn that a user is physically in Utah and using a VPN. If a site doesn’t know a user is in Utah, their broader obligation to police VPNs remains murky. So, while SB 73 isn’t as extreme as the discarded Wisconsin proposal, it remains a dangerous precedent.
Technical Feasibility
Then there is also the question of technical feasibility: blocking all known VPN and proxy IP addresses is a technical whack-a-mole that likely no company can win. Providers add new IP addresses constantly, and no comprehensive blocklist exists. Complying with Utah’s requirements would require impossible technical feats.
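To see why, consider a sketch of the blocklist approach. Matching an address against known VPN ranges is the easy part; the impossible part is keeping the list current, because providers add and rotate ranges constantly. The ranges below are reserved documentation addresses, not real VPN ranges.

```python
# A sketch of why IP blocklists are a losing game: any static snapshot of
# "known VPN ranges" is stale almost as soon as it's written. The ranges
# here are reserved documentation addresses used purely for illustration.
from ipaddress import ip_address, ip_network

KNOWN_VPN_RANGES = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

def looks_like_vpn(addr: str) -> bool:
    # Check whether the visitor's address falls in any listed range.
    ip = ip_address(addr)
    return any(ip in net for net in KNOWN_VPN_RANGES)

print(looks_like_vpn("203.0.113.7"))  # True: inside a listed range
print(looks_like_vpn("192.0.2.10"))   # False: a provider's new range slips through
```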
The internet is built to, and will always, route around censorship. If Utah successfully hampers commercial VPN providers, motivated users will transition to non-commercial proxies, private tunnels through cloud services like AWS, or residential proxies that are virtually indistinguishable from standard home traffic. These workarounds will emerge within hours of the law taking effect. Meanwhile, the collateral damage will fall on businesses, journalists, and survivors of abuse who rely on commercial VPNs for essential data security.
These provisions won't stop a tech-savvy teenager, but they certainly will impact the privacy of every regular Utah resident who just wants to keep their data out of the hands of brokers or malicious actors.
Uncharted Territory
Lawmakers have watched age-verification mandates fail and, instead of reconsidering the approach, have decided to wage war on privacy itself. As the Cato Institute states:
“The point is that when an internet policy can be avoided by a relatively common technology that often provides significant privacy and security benefits, maybe the policy is the problem. Age verification regimes do plenty of damage to online speech and privacy, but attacking VPNs to try to keep them from being circumvented is doubling down on this damaging approach.”
Attacks on VPNs are, at their core, attacks on the tools that enable digital privacy. Utah is setting a precedent that prioritizes government control over the fundamental architecture of a private and secure internet, and it won’t stop at the state’s borders. Regulators in countries outside the U.S. are still eyeing VPN restrictions, with the UK Children’s Commissioner calling VPNs a “loophole that needs closing” and the French Minister Delegate for Artificial Intelligence and Digital Affairs saying VPNs are “the next topic on my list” after the country enacted a ban on social media for kids under 15.
As this law goes into effect next week, we are entering uncharted territory. Lawmakers who can’t distinguish between a security tool and a "loophole" are now writing the rules for one of the most complex infrastructures on Earth. And we can assure you that the result won’t be a safer internet, only a less private one.
Open Records Laws Reveal ALPRs’ Sprawling Surveillance. Now States Want to Block What the Public Sees.
Reporters, community advocates, EFF, and others have used public records laws to reveal and counteract abuse, misuse, and fraudulent narratives around how law enforcement agencies across the country use and share data collected by automated license plate readers (ALPRs). EFF is alarmed by recent laws in several states that have blocked public access to data collected by ALPRs, including, in some cases, information derived from ALPR data. We do not support pending bills in Arizona and Connecticut that would block the public oversight capabilities that ALPR information offers.
Every state has laws granting members of the public the right to obtain records from state and local governments. These are often called “freedom of information acts” (FOIAs) or “public records acts” (PRAs). They are a powerful check by the people on their government, and EFF frequently advocates for robust public access and uses the laws to scrutinize government surveillance.
But lawmakers across the country, often in response to public scrutiny of police ALPRs, are introducing or enacting measures aimed at excluding broad swaths of ALPR information from disclosure under these public records laws. This could include whole categories of important information: general information about the extent of law enforcement use; details on ALPR sharing across policing agencies; data on the number of license plate scans conducted, where they happened, and how many “hits” for license plates of interest actually occur; analyses on how many false matches or other errors occur; and images taken of individuals’ own vehicles.
No thanks. Public records and public scrutiny of ALPR programs have shown that people are harmed by these systems and that retained ALPR data violates people’s privacy. In this moment, lawmakers should not be completely cutting off access to public records that document the abuses perpetrated by ALPRs.
Transparency with privacy
To be sure, there are legitimate concerns about wholesale public disclosure of raw ALPR data. After all, many of the harms people experience from these systems are based on the government’s collection, retention, and use of this information. Public transparency rights should not exacerbate the privacy harms suffered by people subjected to ALPR surveillance. But many current proposals do not address legitimate privacy concerns in a measured way, much less seek to harmonize people’s privacy with the public’s right to know.
There is a better path to balancing privacy and transparency rights than outright bans or total disclosure.
Any legislative proposal concerning public access to ALPR data must start with this reality: ALPR data is deeply revealing about where a person goes, and thus about what they are doing and who they are doing it with. That’s a reason why EFF opposes ALPRs. It is dangerous that the police have so much of our ALPR information. Even worse for our privacy would be for police to disclose our ALPR information to our bosses, political opponents, and ex-friends. Or to surveillance-oriented corporations that would use our ALPR information to send us targeted ads, or monetize it by selling it to the highest bidder.
On the other hand, EFF’s firsthand experience using public records from ALPR systems demonstrates the strong accountability value of public access to many kinds of ALPR data, including information like data-sharing reports and network audits. For example, in our “Data Driven” series, we used ALPR data-sharing and hit ratio reports to investigate the extent of ALPR data sharing between police departments and to analyze the number of ALPR scans that are ultimately associated with a crime-related vehicle. We have also identified racist uses of ALPR systems, ALPR surveillance of protestors, and ALPR tracking of a person who sought an abortion. Across the country, municipalities have been shutting down their contracts for ALPR use, often citing concerns with data sharing with federal and immigration agents.
These records are not just informational—they are leverage. Communities, journalists, and local officials have used ALPR disclosures to block new deployments, refuse contract renewals, and terminate existing agreements with surveillance vendors whose practices proved too dangerous to continue. Without this evidentiary record, it is far harder for cities to exercise their procurement power to say no.
It is not always easy to harmonize transparency and privacy when one person wishes to use a public records law to obtain government records that reveal people’s personal information. The best approach is for public records laws to contain a privacy exemption that requires balancing, on a case-by-case basis, the transparency benefits versus the privacy costs of disclosure. Many already do. These provisions accommodate similar concerns about disclosing the personal information of private individuals whose information the government may have collected, government employees’ private data, and other personal information.
The balancing provisions in these laws are often flexible and allow for nuance. For example, if a government record contains a mix of information that does not reveal people’s private information and some that does, agencies and courts can disclose the non-private information while withholding the truly private information. This is often accomplished with blacking out, or redacting, the private information.
Applying this privacy-and-transparency balancing to ALPR records, it will often be appropriate for the government to disclose some information and withhold other information. Everybody should generally have access to records showing their own movements and other information captured by ALPRs, but the privacy protections in public records laws should foreclose a single person’s ability to get a copy of similar records about everyone else. And even with accessing your own data, there are complications with shared vehicles that should be considered when balancing privacy and transparency.
An example of where it may be appropriate to release unredacted data and images would be vehicles engaged in non-sensitive government business. For example, a member of the public might use ALPR scans of garbage trucks to identify gaps in service, which would not reveal private information. On the other hand, it would be inappropriate to release the scans of a government social worker visiting their clients.
Public records laws should allow a requester to obtain some ALPR information about government surveillance of everyone else, in a manner that accommodates the public transparency interest in disclosure and people’s privacy interests. For example, the best public records laws would disclose the times and places that plate data was collected, but not the plate data itself. This can be done, for example, by an agency or court finding that disclosing aggregated and/or deidentified ALPR data protects the privacy or other interests of individuals captured within the data. The best laws recognize that aggregation or de-identification of a database is a redaction in service of individual privacy (which responding agencies must do), not the creation of a new public record (which responding agencies sometimes need not do).
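As a toy illustration of aggregation as redaction, the sketch below (with made-up rows) turns plate-level scans into per-camera, per-day counts, disclosing how much scanning happens and where, without revealing who was scanned.

```python
# A sketch of "aggregation as redaction": raw ALPR rows tie a plate to a
# time and place, while the disclosed version keeps only counts per camera
# per day, dropping plate numbers entirely. The rows here are invented.
from collections import Counter

raw_scans = [
    {"plate": "7ABC123", "camera": "Main & 5th", "timestamp": "2025-03-01T08:14"},
    {"plate": "7ABC123", "camera": "Main & 5th", "timestamp": "2025-03-01T17:02"},
    {"plate": "2XYZ889", "camera": "Elm & 9th",  "timestamp": "2025-03-01T09:30"},
]

# Disclose how much scanning happens and where, not who was scanned.
aggregate = Counter(
    (scan["camera"], scan["timestamp"][:10])  # (location, date)
    for scan in raw_scans
)

for (camera, date), count in aggregate.items():
    print(f"{date} {camera}: {count} scans")
```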
Likewise, in a government audit log of police searches of stored ALPR data, it will often be appropriate to disclose an officer’s investigative purpose for conducting a search, and the officer’s search terms, but not a search term that is a license plate number. Many people do not want the world to know that they are under police investigation, and many public records laws generally limit the disclosure of such sensitive facts because of the reputational and privacy harm inherent in that disclosure.
Aggregate ALPR information about, for example, the amount of data collected and error rates can have important transparency value and impact government policy. Requiring the public release of that kind of data contributes to informed public discussion of how our policing agencies do their jobs. This kind of information has been used to study, critique, and provide oversight of ALPR use.
Thus, the wholesale exemption of ALPR information from disclosure under state public records laws would stymie the public’s ability to monitor how their government is using powerful and controversial surveillance technology. EFF cannot support such laws.
Blocking transparency
In Connecticut, SB 4 is a pending bill that would exclude, from that state’s public records law, information “gathered by” an ALPR or “created through an analysis of the information gathered by” an ALPR. This could ultimately harm individual civilians, who would have less ability to protect themselves from law enforcement that indiscriminately collect vehicle information. Other provisions of this bill would limit government use of ALPRs, and regulate data brokers.
In Arizona, SB 1111 would restrict public access to ALPR data “collected by” an ALPR. The bill would even make it a felony to access or use data from an ALPR (or disseminate it) in violation of this article, which apparently might apply to a member of the public who obtained ALPR data with a public records request. The bill’s author claims it adds “guardrails” for ALPR use.
Earlier this year, Washington state enacted a law that will exempt data “collected by” ALPRs from the state’s public records law. While “bona fide research” will still be a way for some people to obtain ALPR data, this may not include journalists and activists who analyze aggregate data to identify policy flaws. Notably, Washington courts found last year that information generated by ALPR, including images of an individual’s own vehicle, are public records; this new legislation will override that decision, blocking the ability for people to see what photos police have taken of their own vehicles. Other provisions of this new law will limit government use of ALPRs.
A year ago, Illinois’ HB 3339 ended use of that state’s public records law to obtain ALPR information used and collected by the Illinois State Police (ISP), including both information “gathered by an ALPR” and information “created from the analysis of data generated by an ALPR.” This Illinois language for just the ISP is very similar to what is now being considered in Connecticut for all state and local agencies.
Sadly, the list goes on. Georgia exempted ALPR data (both “captured by or derived from” ALPRs) of any government agency from its open records law. Adding insult to injury, Georgia also made it a misdemeanor to knowingly request, use, or obtain law enforcement’s plate data for any purpose other than law enforcement. Maryland exempted “information gathered by” an ALPR from its public information act. Oklahoma exempted from its open records act the ALPR data “collected, retained or shared” by District Attorneys under that state’s Uninsured Vehicle Enforcement Program.
These laws and bills in seven states are an unwelcome national trend.
Next steps
We urge legislators to reject efforts to amend state public records laws to wholly exempt ALPR information. This would diminish meaningful oversight over these controversial technologies. Public disclosure of some ALPR information is important.
There is a better approach for states that want to harmonize privacy and transparency in the context of ALPR data:
- Open records laws should cover, and not exclude, information collected by ALPRs, and also any public records derived from that information.
- Open records laws should have a privacy exemption that applies to all records, including information collected or derived from ALPRs. That exemption should require a case-by-case balancing of the transparency benefits and privacy costs of disclosure. These provisions work best when agencies and courts can analyze the context of the particular records, the weight of the privacy interests and public interests at stake, and other specific facts to fashion the best balance between these competing values.
- When a document contains both exempt and non-exempt information, open records laws should require disclosure of the latter and withholding of the former. The best public records laws allow agencies to black out, or redact, specific private information while disclosing non-private information in the same records, threading the privacy and transparency needle.
- Finally, in the context of a law enforcement ALPR database (including both data collected by ALPRs and audit logs of police searches of stored ALPR data), the law should permit agencies to disclose aggregated and/or deidentified data, while withholding personally identifiable data. Importantly, the law should recognize that the steps an agency takes to protect individual privacy in ALPR databases should not be construed as creating a new public record.
FOIA balancing standards are one layer in a larger governance stack, and work best alongside strong guardrails on whether and how governments procure ALPR systems in the first place: public debate over vendor contracts, binding surveillance ordinances, strict data-retention limits, and clear pathways to end ALPR programs entirely where the risks prove too great.
Digital Hopes, Real Power: From Connection to Collective Action
This is the fifth and final installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the rest of the series here.
If the Arab Spring was defined by optimism about what the internet could do, the years since have been marked by a more sober understanding of what it takes to defend it.
Back in 2011, the term “digital rights” was still fairly new. While in the decades prior, open source and hacker communities—as well as a handful of organizations including EFF—had advocated for digital freedoms, it was through the merging of disparate communities from around the world in the 2000s that digital rights came to be more clearly understood as an extension of fundamental human rights.
In 2011, we observed that there were only a few organizations focused on digital rights in the region. Groups like Nawaat, which emerged from the Tunisian diaspora under the Ben Ali regime; the Arab Digital Expression Foundation, formed to promote the creative use of technology; and SMEX, which was initially created to teach journalists and others about social media but has grown to become a powerful force in the region, led the way. Since that time, dozens of organizations have emerged throughout the region to promote freedom of expression, innovation, privacy, and digital security.
Understanding how the digital rights movement evolved in the Middle East and North Africa requires a closer look at the communities that shaped it, and the organizations that are carrying on the fight today. Perspectives from people and organizations that were key to these efforts offer critical insight into how the movement has grown and what challenges lie ahead.
Reem Almasri, a senior researcher and digital sovereignty consultant, says that:
‘Digital rights’ emerged as a term around the Arab Spring, when the internet was still a fairly unregulated space, we were still trying to figure out the tech companies’ policies, and force governments to look at the internet as a fundamental right like water and electricity.
But then the need to converge digital rights to everyday rights—economic, political, social rights—and to connect it to geopolitics has started to be thought about, and to be in discussion as well. And to not look at digital rights as a separate field from everything else that’s affecting it, from the geopolitical context.
Mohamad Najem, who co-founded SMEX in 2008 and has led it to become the largest digital rights organization in the region, told me that, at the time, “Nobody gave [social media] a lot of attention in our region.” Their work was “a positive approach to social media, how we can democratize sharing information, how we can share more from civil society, change people’s minds, et cetera.”
“After that phase,” he continues, “we can think about 2012-2013—after the Arab Spring, as an organization we started looking at the infrastructure of the internet, and how freedom of expression and privacy are affected. That’s when we started looking more at what we call digital rights.”
Towards Tech Accountability
In the aftermath of the Arab Spring, social media companies moved from a largely hands-off approach to governance toward more formalized—and often opaque—content moderation systems. Platforms expanded their trust and safety teams and began working more closely with civil society through trusted partnerships in the region and globally. But, Mohamad Najem says:
After the expansion of tech accountability itself and the adaptation of tech companies, we’ve noticed that it’s not taking us anywhere. Gradually we’ve come to a new phase where it feels like tech accountability is an economy by itself that is not leading to real results. So the next phase for us at least and maybe for others in global majority communities is how we can focus on digital public good, how we can push more governments, private and public institutions to adopt more open source software, to look at the ecosystem and understand the US threats happening now, et cetera.
Another group that has played a key role in the fight for digital rights and tech accountability in the region is 7amleh, a Palestinian organization that was founded in 2013. At the time, says Jalal Abukhater:
[I]t was unique and interesting in Palestinian society to have a human rights organization dedicated fully to the topic of digital rights, you know, human rights in a digital format. However, with the years, we saw various milestones, we saw progress of policy decisions and movements through the Israeli government to influence content moderation in Big Tech companies. We saw problems there as an organization.
7amleh took a leading stance in fighting to preserve the digital rights of Palestinians during a period where there was a very strong influence through the Israeli government. There was actually quite important reporting coming through 7amleh on the situation of online content moderation at a time when it wasn’t really a topic being discussed but it was very clearly a situation where there was major influence by government and political suppression happening as a result.
An Ever-Expanding Ecosystem
While the digital rights movement attracted mostly specialists in its early days, today people from other fields have recognized how digital rights intersect with their work, and the digital rights community has embraced them.
Almasri says:
Because the digital rights movement has been decentralizing and has stopped being a speciality, it stopped being an exclusive thing for digital rights specialists, since of course the internet not only in the Arab region but all over the world has become a fundamental infrastructure for running any kind of sensitive operations, or operations in general…all types of organizations, and companies, and initiatives are thinking about their digital security, about how internet laws are affecting the use of the internet, or putting them at risk, and how surveillance technologies are affecting their operations.
Abukhater credits the collaborative work that emerged within the region over the years in building the movement’s strength:
[Today], civil society and digital civil society have many forums, many coalitions and networks, but it’s always important to remember that this is work that builds over many years of experience, and relationships, and networks—that it’s different parties coming to support each other at different phases to ensure that this kind of work succeeds and that this ecosystem is sustained globally with support from partner organizations which were very crucial in ensuring that this ecosystem is sustained, especially in Palestine.
Growing Collaborations
Conferences like Bread & Net, first held in Beirut in 2018, and the Palestine Digital Activism Forum (PDAF), first held in Ramallah in 2017, bring activists, academics, journalists, and other practitioners together to network and learn about each other’s work. The pandemic, conflict, and other barriers haven’t stopped either conference from carrying on: PDAF has become an annual virtual event that draws big-name speakers, while Bread & Net has spaced out its meetings but continues to draw bigger crowds each time.
Almasri credits these meetings with expanding the movement beyond the traditional techies and activists who first got involved. “You see a wide spectrum of different fields. You see artists, archivists, journalists joining these conversations, which is definitely on the brighter side of things when it comes to this field, or this scene.”
She also credits the emergence of alliances such as the Middle East Alliance for Digital Rights (MADR, of which EFF is a member), founded in 2020 by individuals and organizations who had been working together for many years to formalize those collaborations.
“Other than the collaborations at the advocacy level, [MADR] creates a sort of pressure point on Big Tech, on content moderation policies, allows for certain coordination at the level of the UN, et cetera, which I see as really positive because it brings some of the redundant efforts together and helps decide on priorities.”
Looking Forward
In thinking about the future of the movement, Almasri and Najem agree that digital rights are no longer a niche. In Najem’s words, “It’s about everything else…it’s about everything.”
Almasri adds:
[W]hen it comes to priorities, things that this scene has been working on, I feel that October 7 [2023] was a big turning point in the way that digital rights activists, researchers, and academics—this field—is looking at digital rights in general. Of course, there is the major question of the need to revise tactics to fight Israel’s tech-enabled genocide that is also empowered by the global economy, big tech, and governments of the world? What alliances should we start building on a regional and global level?
She sees ‘digital sovereignty,’ the ability of people and communities to choose, control, and use technology that serves their needs and values, as one of the next big topics for the movement to tackle, as debates over who owns and hosts our data have sharpened amid revelations that U.S. companies have played a role in regional conflicts.
There have been pockets of debates on how to achieve digital sovereignty, especially from human rights organizations documenting war crimes … There’s an awareness of how the dependence on US-based providers, cloud storage, even hosting infrastructure is a risk, especially after how using these services has been weaponized against the digital existence of certain organizations in the region that have been deplatformed or had their content removed on platforms like Meta and YouTube because their content doesn’t align with the foreign policy of the United States…so it raises a big question about how we look at digital independence, what is the spectrum of independence that civil society in the region can achieve, and in relation to what’s available as well.
Almasri also points to the role of researchers in the region:
There has been a lot more research on the political economy of surveillance technologies, so not only looking at how governments are using them, but their supply chain, who’s investing in these technologies, and how geopolitical networks empowered their proliferation in the hands of governments.
This is where studies looking at the political economy of AI and the military become important, trying to understand how this field of weapons, the military, and AI grew together as part of this global capitalist system rather than looking at these technologies in silos, that is. Looking at the proliferation of these technologies from a geopolitical point of view, looking at the bigger ecosystem rather than zooming in to the specifics of it. I think this has been a big development in the way that we look at digital rights, and the way that digital rights have been converged and integrated into the geopolitical scene.
As the global digital rights community continues to expand, it’s clear that the questions at its core are no longer just about access or expression, but about power—who holds it, how it is exercised, and who is left out of its protections. What began as a fight to keep the internet open has become a broader effort to reimagine it—an effort that is grappling with questions of infrastructure, ownership, and the global inequalities embedded in both.
And yet, despite the scale of these challenges, the movement’s strength lies in the solidarity, the ecosystems, and the networks it has spent more than a decade building. From the early days of the blogging and techie communities to the increasingly powerful digital rights community, advocates in the region have gone up against dictators, endured war and repression, yet remain determined to push forward.
EFF Submission to UN Report on the Role of Media in the Context of Israel’s Policies Toward Palestinians
The UN Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 recently announced a study addressing the killings and attacks against Palestinian journalists and media workers, the destruction of media infrastructure in Gaza, and the production and dissemination of narratives that may enable, justify, or incite international crimes.
As part of this consultation, EFF contributed a submission that identifies a significant deterioration of press freedom and free expression since October 2023, including an increase in censorship and a wave of killings of journalists, all adding to an already pervasive censorship and surveillance regime for Palestinians.
In particular, concerns raised in our submission relate to:
- Government takedown requests
- Disinformation and content moderation
- Attacks on internet infrastructure
Concerns about censorship in Palestine continue to grow and have been raised in multiple international forums. Ending the deliberate digital isolation of the Palestinian people is critical to protecting fundamental human rights.
Read the briefing in full here.
Former EFF Activism Director's New Book, Transaction Denied, Explores What Happens When Financial Companies Act like Censors
A U.S. citizen who teaches Persian poetry classes online is suddenly unable to receive payments or access funds when his account is flagged and frozen by PayPal and its subsidiary Venmo. A Muslim city councilwoman in New York City has a Venmo payment blocked because she uses the name of a Bangladeshi restaurant in the transaction. Online hubs for erotic storytelling repeatedly lose their payment accounts. Others active in drug legalization fights struggle to keep their bank accounts.
These may sound like one-off issues, but they are not. They occur with frightening regularity, as former EFF Activism Director and Chief Program Officer Rainey Reitman, who left EFF in 2022, describes in her new book, Transaction Denied. The book sheds new light on a serious problem that often hides in the shadows and pushes us to ask an increasingly important question: “Is it ever OK for financial intermediaries to act as the arbiters of online expression?”
Both a storyteller and an advocate, Rainey exposes hidden systems of power that shape our choices, our speech, and, ultimately, our society. - Cindy Cohn
Reitman makes her case about the impact of financial institutions and payment intermediaries shutting down accounts and inhibiting transactions through compelling individual stories, some of which have not been shared before. The people impacted are diverse: authors, teachers, journalists, elected politicians, and more are suddenly unable to retrieve or receive funds, with little explanation, transparency, or recourse. Reitman shows the reasons are frequently speech-related, resulting often from arbitrary corporate policy, a broad (mis)interpretation of the law, or in response to pressure from anti-speech advocates.
In the example of the Persian poetry teacher, the blocking stems from a highly risk-averse interpretation of U.S. sanctions on Iran—sanctions aimed at deterring weapons development or terrorism instead snared a poetry professor and a New York City councilwoman. Reitman demonstrates how these sanctions, and others, have an outsized impact on Muslims.
But Transaction Denied is also a guide for those interested in fighting for free speech. The book covers over a decade of successful campaigns and shows that advocacy can win the day—and is sometimes necessary to counter pro-censorship campaigns. Reitman offers a behind-the-scenes view of the campaign to help restore the Stripe account of the Nifty Archive Alliance, a nonprofit which supports the Nifty Archive, a hub of erotic storytelling for the queer community since 1992. She covers EFF's successful coalition and campaign to restore the PayPal account of Smashwords, a hub for self-published fiction. And in what has become a critical moment for free speech and free press, she describes how several EFF staff members and two EFF board members became the seed for a new nonprofit, the Freedom of the Press Foundation, which continues to partner with EFF today in advancing the rights of journalists.
It’s a banner time for books by EFF staff members and friends. If you're concerned about how online privacy has changed over the last three decades, read EFF Executive Director Cindy Cohn's book, Privacy’s Defender, released in May. (All proceeds from the sale of hard copies of Privacy’s Defender are being donated to EFF, so your book order will help EFF continue fighting for the principles Cindy holds dear.) If you are worried about the people trapped in a system where massive financial companies can shut down their accounts, effectively locking up their access to money, based entirely on their speech, grab Transaction Denied, released earlier this month, at Beacon Press, Amazon, and Bookshop.org. (Half of the author proceeds go to Freedom of the Press Foundation.)
More likely—you'll want both books on your shelf. Happy reading!
The Open Social Web Needs Section 230 to Survive
If you want to overthrow Big Tech, you’ll need Section 230. The paradigm shift being built with the Open Social Web can put communities back in control of social media infrastructure, and finally end our dependency on enshittified corporate giants. But while those incumbents can absorb multimillion-dollar lawsuits, the small hosts driving this revolution could be picked off one by one without the protections offered by 230.
The internet as we know it is built on Section 230, a law from the 90s that generally says internet users are legally responsible for their own speech — not the services hosting their speech. The purpose of 230 was to enable diverse forums for speech online, which defined the early internet. These scattered online communities have since been largely captured by a handful of multi-billion dollar companies that found profit in controlling your voice online. While critics are rightly concerned about this new corporate influence and surveillance, some look to diminishing Section 230 as the nuclear option to regain control.
The thing is, that would be a huge gift to Big Tech, and detrimental to our best shot at actually undermining corporate and state control of speech online.
Dethroning Big Tech
We’re fed up with legacy social media trapping us in walled gardens, where the world's biggest companies like Google and Meta call the shots. Our communities, and our voices, are being held hostage as billionaires’ platforms surveil, betray, and censor us. We’re not alone in this frustration, and fortunately, people are collaborating globally to build another way forward: the Open Social Web.
This new infrastructure puts the public’s interest first by reclaiming the principles of interoperability and decentralization from the early internet. In short, it puts protocols over platforms and lets people own their connections with others. Whether you choose a Fediverse app like Mastodon or an ATmosphere app like Bluesky, your audience and community stay within reach. It’s a vision of social media akin to our lives offline: you decide who to be in touch with and how, and no central authority can threaten to snuff out those connections. It’s social media for humans, not advertisers and authoritarians.
Behind that vision is a beautiful mess of protocols bringing the open social media web to life. Each protocol is a unique language for applications, determining how and where messages are sent. While this means there is great variety to these projects, it also means everyone who spins up a server, develops an app, or otherwise hosts others’ speech has skin in the game when it comes to defending Section 230.
What exactly is Section 230?
Section 230 protects freedom of expression online by shielding the US intermediaries that make the internet work. Passed in 1996 to preserve the internet’s then-burgeoning online communities, 230 enshrined important protections for free expression and for the ability to block or filter speech you don’t want on your site. One portion is credited as the “26 words that created the internet”:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In other words, this bipartisan law recognizes that speech online relies on intermediaries — services that deliver messages between users — and holding them potentially liable for any message they deliver would only stifle that speech. Intuitively, when harmful speech occurs, the speaker should be the one held accountable. The effect is that most civil suits against users and services based on others' speech can quickly be dismissed, avoiding the most expensive parts of civil litigation.
Section 230 was never a license to host anything online, however. It does not protect companies for content they create themselves, nor does it shield them from federal criminal law or intellectual property claims.
What Section 230 has enabled, however, is the freedom and flexibility for online communities to self-organize. Without the specter of one bad actor exposing the host(s) to serious legal threats, intermediaries can moderate how they see fit or even defer to volunteers within these communities.
Why the Open Social Web Needs Section 230The superpower of decentralized systems like the Fediverse is the ability for thousands of small hosts to each shoulder some of the burdens of hosting. No single site can assert itself as a necessary intermediary for everyone; instead, all must collaborate to ensure messages reach the intended audience. The result is something superior to any one design or mandate. It is an ecosystem that is greater than the sum of its parts, resilient to disruptions, and free to experiment with different approaches to community governance.
The open social web’s kryptonite, though, is the liability participants can face as intermediaries. The greater the potential liability, the more interference from powerful interests via legal threats, the higher the monetary costs, and the less space for nuance in moderation. And in practice, participants may simply stop hosting to avoid those risks. The end result: only the biggest and best-resourced options survive.
This isn’t just about the hosts in the Open Social Web, like Mastodon instances or Bluesky PDSes. In the U.S., Section 230’s protections extend to internet users when they distribute another person’s speech. For example, Section 230 protects a user who forwards an email with a defamatory statement. On the open social web, that means when you pass along a message to others through sharing, boosting, and quoting, you’re not liable for the other user’s speech. The alternative would be a web where one misclick could open you up to a defamation lawsuit.
Section 230 applies to the infrastructure stack, too: Internet service providers, content delivery networks, and domain and hosting providers. Protections even extend to the new experimental infrastructures of decentralized mesh networks.
Beyond threatening the very feasibility of indie decentralized projects in the United States, weakening 230 protections would also make services worse. Being able to customize your social media experience from highly curated to totally laissez-faire in the open social web is only possible when the law allows space for private experiments in moderation approaches. The algorithmically driven firehose forced on users by antiquated social media giants is driven by the financial interests of advertisers, and would only be more tightly controlled in a post-230 world.
Defending 230
Laws aimed at changing 230 protections put decentralized projects like the open social web in a uniquely precarious position. That is why we urge lawmakers to carefully consider these impacts. It is also why the proponents and builders of a better web must be vigilant defenders of the legal tools that make their work possible.
The open social web embodies what we are protecting with Section 230. It’s our best chance at building a truly democratic public interest internet, where communities are in control.
The GUARD Act Isn’t Targeting Dangerous AI—It’s Blocking Everyday Internet Use
Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.
Tell Congress: Oppose the GUARD Act
If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.
Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.
The concerns behind this bill are serious. There have been troubling reports of AI systems engaging in harmful interactions with young users, including cases involving self-harm. Those risks deserve attention. But they call for targeted solutions, like better safeguards and enforcement against bad actors, not sweeping restrictions. The bill’s sponsors say they’re targeting worst-case scenarios — but the bill regulates everyday use.
The GUARD Act’s Broad Definitions Reach Everyday Tools
The problem starts with how the bill defines an “AI chatbot.” It covers any system that generates responses that aren’t fully pre-written by the developer or operator. Such a broad definition sweeps in the basic functionality of all AI-powered tools.
Then there’s the definition of an “AI companion,” which minors are banned from using entirely. An AI companion is any chatbot that produces human-like responses and is designed to “encourage or facilitate” interpersonal or emotional interaction. That may sound aimed at simulated “friends” or therapy chatbots. But in practice, it’s much fuzzier.
Modern chatbots are designed to be conversational and helpful. A homework helper might say “good question” before walking a student through a problem. A customer service chatbot may respond empathetically to a complaint (“I’m sorry you’re having this problem.”) A general-purpose assistant might ask follow-up questions. All of these could be seen as facilitating “interpersonal” interaction — and triggering the GUARD Act.
Faced with steep penalties and unclear boundaries, companies are unlikely to take chances on letting young people use their online tools. They’ll block minors entirely or strip their tools down to something less useful for everyone. The result isn’t a narrow safeguard—it’s a broad restriction on everyday online interactions.
Homework Question? Show ID And Call Your Parents
Start with a student getting help with homework. Under the GUARD Act, the service must verify the user’s age using more than a simple checkbox—it must rely on a “reasonable age verification” measure, which could require a government ID or a third-party age-checking system. If the system decides a user is under 18, the company must decide if its tool qualifies as an “AI Companion.” If there’s any risk it does, the safest move is to block access entirely.
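To make that incentive concrete, here is a minimal sketch of the gatekeeping logic a penalty-averse operator might end up shipping. The bill prescribes no code, so every name, field, and rule below is a hypothetical illustration of the compliance calculus, not anything from the GUARD Act's text.

```python
# Hypothetical sketch of GUARD Act compliance logic. All names and rules
# here are illustrative assumptions, not language from the bill.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    verified: bool                # passed "reasonable age verification" (e.g., ID upload)
    estimated_age: Optional[int]  # None until verification succeeds

def may_use_chatbot(user: User, could_be_companion: bool) -> bool:
    """Gate access the way a risk-averse operator likely would."""
    if not user.verified:
        # A self-declaration checkbox isn't enough, so unverified users
        # of any age get blocked outright.
        return False
    if user.estimated_age is not None and user.estimated_age < 18:
        # If the tool might count as an "AI companion" (vague terms like
        # "emotional interaction" make that hard to rule out), blocking
        # all minors is the only safe choice under steep penalties.
        return not could_be_companion
    return True

# A homework helper that says "good question" might "facilitate emotional
# interaction," so a cautious operator treats it as a possible companion:
print(may_use_chatbot(User(verified=True, estimated_age=16), could_be_companion=True))     # False
# And an adult who declines to upload ID is blocked too:
print(may_use_chatbot(User(verified=False, estimated_age=None), could_be_companion=False)) # False
```

The point of the sketch is the default it encodes: when definitions are fuzzy and penalties steep, every branch resolves toward blocking.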
The same logic applies to everyday customer service. A teenager trying to fix an order issue gets routed to a chatbot, and the company faces a choice: build a full age-verification system for a routine interaction, or restrict access to avoid liability. Many will choose the latter.
This isn’t a narrow restriction aimed at a few risky products. It’s a compliance regime that pushes companies to block or limit any product that generates text for minors, across the board.
ID Checks for Everyone
The GUARD Act doesn’t just affect minors. The bill takes a big step towards an internet that only works when users are willing to upload a valid ID or comply with other invasive age-verification schemes. Companies must verify the age of every user—not through a simple self-declaration, but through a “reasonable age verification” system tied to the individual.
In practice, that means collecting sensitive personal information: government IDs, financial data, or biometric identifiers. Companies can outsource verification, but they remain legally responsible. And the law requires ongoing verification, so this isn’t a one-time check. Worse, studies consistently show that millions of people have outdated information on their IDs, such as an old address, or lack government ID entirely. If services require ID, many of those people will be shut out.
And for those who do have compliant ID, turning over this information repeatedly creates obvious risks. Databases of sensitive identity information become targets for breaches. Anonymous or pseudonymous use of online tools becomes harder or impossible.
To keep minors away from certain chatbots, the GUARD Act would require everyone to prove who they are just to use basic online tools. That’s a steep tradeoff. And it doesn’t actually address the specific harms the bill is supposed to solve.
Vague Definitions, Huge Penalties
The GUARD Act’s broad scope is enforced with steep penalties. Companies can face fines of up to $100,000 per violation, enforced by federal and state officials. At the same time, key terms like “AI companion” rely on vague concepts such as “emotional interaction.” That combination will lead to overblocking. Faced with legal uncertainty and serious liability, companies won’t parse small distinctions. They’ll restrict access, limit features, or block minors entirely.
That is the unfortunate result of the GUARD Act: even though the concerns animating it are real and worth fixing, its broad terms will apply far beyond the troubling scenarios that inspired it.
In the end, that means a more restricted and more surveilled internet. Teenagers would lose access to tools they rely on for school and everyday tasks. Everyone else faces new barriers, including ID checks. Smaller developers, who aren’t able to absorb compliance costs and legal risk, would be pushed out, leaving the largest companies even more dominant.
Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn’t do that. It trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark.
Congress could act soon. Tell them to reject the GUARD Act.
Tell Congress: Say no to mandatory online ID checks
Congress Must Reject New Insufficient 702 Reauthorization Bill
Speaker Johnson has introduced a new fig leaf for the American surveillance state: the Foreign Intelligence Accountability Act. Introduced with only days to go before Section 702 of the Foreign Intelligence Surveillance Act (FISA) expires and the U.S. government loses one of its most invasive surveillance programs, the bill does nothing to make any of the substantial changes privacy advocates have been asking for—most notably, it fails to give us a real warrant requirement for the FBI to snoop through the private conversations of people on U.S. soil.
Section 702 needs to be reauthorized by Congress every few years. These reauthorizations give us a chance to tinker with the language of the law and introduce some much-needed reforms. This attempt at reauthorization has been particularly fraught, but there is still time for Congress to include real protection for Americans’ civil liberties and rights. We need to make sure that when an FBI agent wants to look through Americans’ conversations scooped up as part of a national security intelligence program, they need a warrant signed by a judge just as if they were trying to search your email account or your house.
This new bill mandates that a civil liberties protection officer at the Office of the Director of National Intelligence review all queries of U.S. persons made by the FBI under this program to make sure no laws have been broken. It’s bad enough to let the intelligence community police itself; worse, the assessment of illegality would come only after a U.S. person has already been spied on. This is hardly the reform we need, and it will likely just lead to continued abuse with no real accountability or consequences.
The bill “prohibits targeting United States persons,” but so does current law. This “change” does absolutely nothing to address what’s really happening—which is that surveillance of people in the United States is usually justified as “incidental” because Americans aren’t the “target” of the surveillance. The bill does not create a warrant requirement, it does not create any new transparency requirements, and it does not protect Americans’ privacy.
We urge Congress to reject the surveillance state’s latest smokescreen, the Foreign Intelligence Accountability Act, and to keep pushing for real reforms. We urge you to write to your members of Congress and tell them the same.
The Internet Still Works: SmugMug Powers Online Photography
SmugMug is a family-owned photo hosting and e-commerce platform that helps professional photographers run their businesses online. Founded in 2002, the company provides tools for photographers to show their work, deliver client galleries, sell prints, and manage payments.
In 2018, SmugMug purchased Flickr, the long-running photo-sharing community, which added tens of millions of active hobbyist photographers to the company’s user base.
Ben MacAskill is President and COO of SmugMug’s parent company, Awesome, which he co-founded with his family. Awesome also includes the media network This Week in Photo and the nonprofit Flickr Foundation, which focuses on preserving publicly available photography. MacAskill has been an active voice in policy discussions around Section 230 and online platform regulation. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.
Joe Mullin: How would you explain Section 230 to a SmugMug photographer who hasn't heard of it but relies on you to share their work and run their business?
Ben MacAskill: Section 230 allows us to run our business. We are a small, family-run business. We don’t have the resources to police every single upload, every single comment, or every single engagement that happens on the site.
That includes photographers who have comments on their sites. Anywhere there’s interaction online, Section 230 protects us.
It doesn't absolve us of liability. We can't run rampant and do anything we want. It just helps protect us and make it scalable so that we can run our business.
What would you have to change if Section 230 were eliminated or significantly narrowed?
Honestly, there's a high chance that it would bankrupt platforms like ours. They're not wildly profitable. If Section 230 is done away with, we have to [check] content that goes online to make sure we’re not liable. That means policing tens of millions of uploads per day.
That would kill the business of a lot of photographers. Can you imagine—you just got married, and you’re waiting for your wedding photos for a week or two because they’re in some moderation queue?
If we don’t have legal protections, and we get one nefarious customer—if something goes sideways—then I’m liable for that.
I don't, and can't possibly know, whether every single photo is appropriate or legal, as it's uploaded. We would literally have to moderate everything before it goes online. I don’t think any business can afford that, period. I guess you could have an offshore call-center type thing. Still, it would change the entire nature of the real-time internet. Imagine posting something to Instagram and having the platform say, “Cool, we’ll get back to you in 8 to 12 days.”
What kind of content moderation do you do on SmugMug?
If a user uploads something illegal, we will report them as soon as we find it. We're not protecting them. We don’t condone or allow illegal behavior. We work very closely with organizations, nonprofits and governmental agencies to detect CSAM—child exploitative material—and we report that to the National Center for Missing and Exploited Children. We will report users, we eliminate illegal content on our platforms—which is one reason we have such a low prevalence of that problem.
But that does take effort and time to find, and there is currently no perfect solution. The tech solutions that exist can’t detect it at 100% accuracy, or anywhere close. And with tens of millions of uploads a day, going through them one by one is impossible.
How do you think more generally about protecting user speech and creative expression?
On SmugMug, we’re really focusing on professionals running their business. So we don’t have to [weigh in] on content too much.
On Flickr, we are big proponents of expression and artistic creativity. Photographers have opinions! But we do draw the line at things like hate speech and harassment. We aggressively maintain a friendly platform. Our community guidelines are very specific, that you cannot harass other customers, you cannot upload stuff classified as hate speech, or threats, or anything along those lines.
Those rules are generally policed by the community. We do have some text analysis tools, but when community members feel harassed or threatened, reports will come in. We’ll address them on a one-by-one basis and remove harassing material from our platform.
Our ability to moderate is one of the things that makes Flickr what it is. If we lose the ability to enforce our own moderation rules—or have that legislated for us—then it changes the entire nature of the community. And not in a good way. Losing the ability to moderate would permanently and forever change what we've built.
What kind of complaints or takedown requests do you receive, and how do you handle it, both in the U.S. and abroad?
Flickr is often referred to as the friendliest community online. You know, we're not dealing with a lot of hate. We're not dealing with a lot of threats. Under other frameworks, like the DMCA, we do takedowns on copyrighted material.
We’re able to handle it with a fully internal team, and we have a great track record. But the user base and the content base is so large that, if we had to assume that those tens of millions of uploads a day are problematic, the burden would be extreme.
We have a robust Trust and Safety Team, and we operate in every non-embargoed country on Earth. So we are subject to a lot of different laws and regulations: “likeness” rules and privacy rules in certain countries that don't exist here in the United States. Even state to state, there are varying laws. It’s a complicated framework, but we pay attention to it.
Around the globe, things work in much the same way that Section 230 works. That is, we operate on reports and discovery, not on pre-screening everything.
What do you think policymakers most often misunderstand about how platforms like yours operate?
One misconception is that we are not beholden to any laws. That Section 230 absolves us of any responsibility and any liability, and we can just do whatever we want. They talk about it as “reining in tech companies,” or “holding tech companies accountable.” But I am accountable for the content on my platform. We’re not given this “get out of jail free” card.
And I think they assume all platforms don’t really care about this, that anything that is done is done begrudgingly. But we’re very proactive about keeping a clean, polite, and friendly community. We are already very aggressively policing our platform.
And even legal content gets moderated, because it might just not be appropriate for a particular community.
We enforce our rules, much the way other private, in-person businesses enforce theirs. If you start screaming hateful things at patrons in a coffee shop, they’re going to throw you out. They want a quiet, chill vibe where people can sip their lattes. We’re doing the same sort of things.
As an independent, family-owned company, you’re in an ecosystem dominated by much larger platforms. How are these issues different for you as a smaller service?
I think it's a much more existential threat for middle and small tech companies. It also shuts off the next generation of these platforms. The computer science student in a dorm room right now won't have the legal protections to launch, to even try to build something new. At least not here in the United States.
Act Now to Stop California’s Paternalistic and Privacy-Destroying Social Media Ban
California lawmakers are fast-tracking A.B. 1709—a sweeping bill that would ban anyone under 16 from using social media and force every user, regardless of age, to verify their identity before accessing social platforms.
That means that under this bill, all Californians would be required to submit highly sensitive government-issued ID or biometric information to private companies simply to participate in the modern public square. In the name of “safety,” this bill would destroy online anonymity, expose sensitive personal data to breach and abuse, and replace parental decision-making with state-mandated censorship.
A.B. 1709 has already passed out of the Assembly Privacy and Judiciary Committees with nearly unanimous support. Its next stop is the Assembly Appropriations Committee, followed by a floor vote—likely within the next week.
Tell Your Representative to OPPOSE A.B. 1709
California Is About to Set a Dangerous Precedent for Online Censorship
By banning access to social media platforms for young people under 16, California is emulating Australia, where early results show exactly what EFF and other critics predicted: overblocking by platforms, leaving youth without support and even adults barred from access; major spikes in VPN use and other workarounds ranging from clever to desperate; and smaller platforms shutting down rather than attempting costly compliance with the sweeping mandate.
California should not be racing to replicate those failures. After all, when California leads—especially on tech—other states follow. There is no reason for California to lead the nation into an unconstitutional social media ban that destroys privacy and harms youth.
Tell Your Representative to OPPOSE A.B. 1709
What’s Wrong With A.B. 1709?
Just about everything.
A.B. 1709 weaponizes legitimate parental concerns by using them to hand over even more censorship and surveillance power to the government. Beneath its shiny “protect the children” rhetoric, this bill is misguided, unconstitutional, and deeply harmful to users of all ages.
A.B. 1709 Recklessly Violates Free Speech Rights
The First Amendment protects the right to speak and access information, regardless of age. But by imposing a blanket ban on social media access, A.B. 1709 would cut off lawful speech for millions of California teenagers, while also forcing all users (adults and kids alike) to verify their ages before speaking or accessing information on social media. This will immensely and unconstitutionally chill Californians’ exercise of their First Amendment rights.
These mandates ignore longstanding Supreme Court precedent protecting young people’s speech; courts have consistently found such bans unconstitutional. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks of online engagement. California simply does not have a valid interest in overriding parents’ and young people’s rights to decide for themselves how to use social media.
After all, age-verification technology is far from perfect. A.B. 1709’s reliance on it will disproportionately silence marginalized communities—those whose IDs don’t match their presentation, those with disabilities, trans and gender non-conforming folks, and people of color—who are most likely to be wrongfully denied access by discriminatory systems.
Finally, many people will simply refuse to give up their anonymity in order to access social media. Our right to anonymity has been a cornerstone of free expression since the founding of this country, and a pillar of online safety since the dawn of the internet. This is for good reason: it allows creativity, innovation, and political thought to flourish, and is essential for those who risk retaliation for their speech or associations. A.B. 1709 threatens to destroy it.
A.B. 1709 Needlessly Jeopardizes Everyone’s Privacy
A.B. 1709’s age verification mandate also creates massive security risks by forcing users to hand over immutable biometric data and government IDs to third-party vendors. By creating centralized "honeypots" of sensitive information, the bill invites identity theft and permanent surveillance rather than actual safety. If we don’t trust tech companies with our private information now, we shouldn't pass a law that mandates we give them even more of it.
We’ve already seen repeated data breaches involving age- and identity-verification services. Yet A.B. 1709 would require millions more Californians—including the youth this bill claims to protect—to feed their most sensitive data into this growing surveillance ecosystem.
This is not the answer to online safety.
Tell Your Representative to OPPOSE A.B. 1709
A.B. 1709 Harms the Youth It Claims to Protect
While framed as a safety measure, this bill serves as a blunt instrument of censorship, severing vital lifelines for California’s young people. Besides being unconstitutional, banning young people from the internet is bad public policy. After all, social media sites are not just sources of entertainment; they provide crucial spaces for young people to explore their identities—whether by creating and sharing art, practicing religion, building community, or engaging in civic life.
Social science indicates that moderate internet use is a net positive for teens’ development, and negative outcomes are usually due to either lack of access or excessive use. Social media provides essential spaces for civic engagement, identity exploration, and community building—particularly for LGBTQ+ and marginalized youth who may lack support in their physical environments. By replacing access to political news and health resources with state-mandated isolation, A.B. 1709 ignores the calls of young people themselves who favor digital literacy and education over restrictive government control.
Young people have been loud and clear that what they want is access and education—not censorship and control. They even drafted their own digital literacy education bill, A.B. 2071, which is currently before the California legislature! Instead of cutting off vital lifelines, we should support education measures that would arm them (and the adults in their lives) with the knowledge they need to explore online spaces safely.
A.B. 1709 Is Misguided and Won’t Work
In case you needed more reasons to oppose this bill:
- A.B. 1709 Replaces Parenting With Government Control. Families know there is no one-size-fits-all solution to parenting. But A.B. 1709 imposes one anyway, overriding parental decision-making with a blanket ban. Parents who want to actively guide their children’s online experiences should be empowered, not relegated to the sidelines by a blunt state mandate.
- A.B. 1709 Strengthens Big Tech Instead of Challenging It. Supporters claim that this bill will rein in the major tech companies, but in fact, steep fines and costly compliance regimes disproportionately harm smaller platforms. Where large corporations can afford to absorb legal risk and shell out for expensive verification systems, smaller forums and emerging platforms cannot. We’ve already seen platforms shut down or geoblock entire states in response to age-gating laws. And when the small platforms shutter, where do all of those users—and their valuable data—go? Straight back to the biggest companies.
- A.B. 1709 Creates Expensive and Shady Bureaucracy During a Budget Crisis. California is facing a massive deficit, but A.B. 1709 would waste taxpayer dollars to fund a shadowy new "e-Safety Advisory Commission" to enforce this ban and dream up new ways to censor the internet. In addition, lawmakers in support of A.B. 1709 have already admitted that this bill is likely to follow the same path as other recent "child safety" laws that were struck down or blocked in court for First Amendment and privacy reasons. With A.B. 1709, taxpayers are being asked to hand over a blank check for millions in legal fees to defend a law that is unconstitutional on its face.
A.B. 1709 is not an inevitability, as some supporters want you to believe. But we need to act now to support our youth and their right to participate in online public life.
Your representatives could vote on A.B. 1709 as soon as next week. If you’re a Californian, email your legislators now and tell them to vote NO on A.B. 1709.
EFF Challenges Secrecy In Eastern District of Texas Patent Case
Clinic students Emily Ko and Zoe Lee at the Technology Law and Policy Clinic at the NYU School of Law were the principal authors of this post.
Courts are not private forums for business disputes. They are public institutions, and their records belong to the public. But too often, courts forget that and allow for massive over-sealing, especially in patent cases.
EFF recently discovered another case of this in the Eastern District of Texas, where key court filings about Wi-Fi technology used by billions of people every day were hidden entirely from public view. The public could not see the parties’ arguments about patent ownership, the plaintiff’s standing in court, or licensing obligations tied to standardized technologies.
EFF Seeks to Uncover Sealed Information in Wilus
The case, Wilus Institute of Standards and Technology Inc. v. HP Inc., highlights a recurring transparency problem in patent litigation.
Wilus claims to own standard essential patents (SEPs) related to Wi-Fi 6 — technology embedded in everyday devices. Wilus sued Samsung and HP for patent infringement. HP argued that Wilus failed to offer licenses on Fair, Reasonable, and Non-Discriminatory (FRAND) terms, which are required to prevent SEP holders from exploiting their position by blocking fair access to widely used technologies.
In reviewing the docket, EFF found that many filings were improperly sealed under a lenient protective order, without the specific justification required in a proper motion to seal. Because there is a presumption of public access to court filings, litigants must file a motion to seal and demonstrate compelling reasons for secrecy. This typically requires a document-by-document and line-by-line justification.
In the Eastern District of Texas, that standard is often not enforced. Instead, district judges allow litigants to hide information using boilerplate justification in a protective order without explaining why specific documents or specific parts in a document should be hidden.
In Wilus, two sets of documents stood out.
First, Samsung moved to dismiss the case, arguing Wilus may not have validly obtained the patents — raising doubts about whether it had standing to sue at all. Wilus’s opposition to that motion was filed completely under seal, with no redacted public version available at all. That briefing likely addresses the patent assignment agreements that underpin Wilus’s business model — information the public has an interest in, especially in cases involving non-practicing entities (NPEs) like Wilus.
Second, filings related to HP’s supplemental briefing on FRAND obligations were also sealed in full, with no redacted versions available to the public. Whether Wilus is bound by FRAND has implications far beyond this case. Companies subject to FRAND must adhere to reasonable licensing terms, while those that are not can charge significantly higher licensing fees.
In both instances, the public was shut out of arguments that bear directly on how essential technologies are licensed and controlled.
EFF Pushes For Public Access
EFF raised these concerns with Wilus’s counsel and pressed for public access to the sealed records. Wilus ultimately agreed to file redacted versions of several documents, now available as Document Numbers 387, 388, and 389.
That result is progress, but it shouldn’t require outside intervention. Public versions of court filings should be the default, not something negotiated after outside pressure.
Even now, these newly filed redacted versions conceal significant portions of the parties’ arguments. The public still cannot fully see how this case about technologies that are used every day is being litigated.
Why Public Access Matters
Sealing court records is meant to be rare. To overcome the presumption of public access, litigants must show compelling reasons for secrecy. That’s because open courts are a distinguishing feature of American democracy. The public, journalists, and policymakers all have the right to observe proceedings and hold both government actors and private litigants accountable.
Some filings do contain trade secrets or commercially sensitive information. But that doesn’t mean litigants should be able to hide information without explaining why. Yet the Eastern District of Texas allows litigants to bypass exactly that requirement.
EFF confronted this very same issue in its attempt to intervene in another Eastern District of Texas case, Entropic v. Charter. The same pattern appeared again in Wilus: instead of narrowly tailored redactions supported by specific reasoning, filings were withheld wholesale.
Courts Must Enforce the Standard
Courts, not third parties, are responsible for protecting the public’s right of access.
That means enforcing the “compelling reasons” standard, as a matter of course. Parties seeking to seal sensitive information should be required to justify each proposed redaction. The Eastern District of Texas’ current approach falls short. By allowing broad, unsupported sealing through expansive protective orders, it effectively treats judicial records as confidential by default.
Heavy caseloads don’t change the rule. Administrative burden cannot override constitutional and common law rights. Judicial records are presumptively public. Courts, including the Eastern District of Texas, should enforce that presumption.
Other Federal Courts Get It Right
The Eastern District of Texas is an outlier. In the Northern District of California, judges routinely reject overbroad sealing requests. As Judge Chhabria’s Civil Standing Order explains:
[M]otions to seal . . . are almost always without merit. . . . Federal courts are paid for by the public, and the public has the right to inspect court records, subject only to narrow exceptions.
The filing party must make a specific showing explaining why each document that it seeks to seal may justifiably be sealed . . . Generic and vague references to “competitive harm” are almost always insufficient justification for sealing.
This approach reflects the law: sealing must be narrowly tailored and specifically justified.
Court Transparency Is Fundamental
At first glance, secrecy in patent litigation may not seem alarming. But it signals a broader erosion of transparency. The widespread use of expansive protective orders in the Eastern District of Texas is a practice that risks spreading if courts do not enforce the law.
These practices allow private parties to obscure information about disputes involving technologies that shape modern life. That undermines a core principle of a free society: transparency regarding the actions of powerful actors.
Courts are not private forums for business disputes. They are public institutions, and their records belong to the public.
So long as these practices continue, EFF will keep advocating for transparency and working to vindicate the public’s right to access court records.
California Coastal Community Must Reject CBP's AI-Powered Surveillance Tower
Customs and Border Protection (CBP) is seeking permission from the California city of San Clemente to install an Anduril Industries surveillance tower on a cliff that would allow for constant monitoring of entire coastal neighborhoods.
The proposed tower is Anduril's Sentry, part of the Autonomous Surveillance Tower (AST) program. While CBP says it will primarily monitor the coastline for boats carrying migrants, it will actually be installed 1.5 miles inland, overlooking the bulk of the 62,000-resident city. By CBP's own public statement, the system, which combines video, radar, and computer vision, is "constantly scanning" for movement, identifying and tracking objects an AI algorithm decides are of interest. Depending on the model (the photos provided by CBP indicate it is a long-range maritime model), the camera could see as far as nine miles, which would cover the entire city and potentially reach as far as neighboring Dana Point.
"The AST utilize advanced computer vision algorithms to autonomously detect, identify, and track items of interest (IoI) as they transit through the towers field of view," CBP writes in a privacy threshold analysis. "The system can determine if an IoI is a human, animal, or vehicle without operator intervention. The system then generates and transmits an alert to operators with the location and images of the IoI for adjudication and response."
On April 28, local residents and Oakland Privacy, a privacy- and anti-surveillance-focused citizens’ coalition, are holding a town hall to inform the public about the dangers of this technology. We urge people to attend to better understand what's at stake.
"The planned deployment of an Anduril tower along a heavily used Orange County coastline 75 miles from the border demonstrates that the militarization of the border region is rapidly moving northwards and across the entire state," writes Oakland Privacy.
City officials raised concerns about resident privacy and proposed that a lease agreement include a prohibition on surveilling neighborhoods. CBP rejected that proposal, saying instead that it would configure the tower to "avoid" scanning residential neighborhoods, though the system would remain capable of tracking human beings in residential areas. According to the staff report:
In response to privacy concerns, CBP has stated the system would be configured to avoid scanning residential areas that fall into the scan viewshed, focusing the system on the marine environment. CBP has maintained the purpose of the system is specifically maritime surveillance, and the system would be singularly focused on offshore activities. However, there may be an instance in which there is an active smuggling event, detected by the system at sea, in which the subsequent smuggling event traverses through the residential neighborhoods. In such a case, the system may continue to track and monitor. To restrict this functionality would be contrary to the spirit and intent of the deployment. Therefore, they cannot make such a contractual obligation.
The Anduril towers retain a variety of data, including imagery.
The proposed Anduril surveillance tower. Source: City of San Clemente
"The AST capture and retain imagery which occurs in plan view of the tower sites and is stored as an individual event with a unique event identified allowing replay of the event for further investigation or dismissal based on activity occurring," according to the private threshold analysis.
The document indicates a potential 30-day retention period for imagery, but then contradicts itself, saying data will be held indefinitely to train algorithms: "AST will also be maintaining learning training data, these records should not be deleted." This means that taxpayers would be paying for the privilege of having their data turned into fuel for Anduril's product.
In 2020, CBP said it would work with the National Archives and Records Administration (NARA) to develop a retention schedule for training data (i.e., a timeline for deletion). However, when EFF filed a Freedom of Information Act (FOIA) request with NARA, the agency said there were no records of these discussions. Likewise, CBP itself has not provided records in response to EFF's parallel FOIA request seeking the same records.
Anduril Maritime Sentry in San Diego, where the border fence meets the ocean.
This would not be the first CBP tower placed along the California coastline. EFF identified one in Del Mar, about 30 miles from the border, and another in San Diego County where the border fence meets the Pacific Ocean. CBP has also applied to place towers–although not necessarily the Anduril model–in or near several other coastal locations: Gaviota State Park, Refugio State Park, Vandenberg Air Force Base, Piedras Blancas, and Point Vicente. The California coastline isn't the only one dotted with surveillance towers: the Migrant Rights Network has documented numerous Anduril towers along the southeast coast of England. Where the San Clemente tower would differ is that a substantial population lives between the tower and the beach, and because it's a 360-degree system, it could watch neighborhoods even further from the coast.
Nor would this be the first time an Anduril tower has been placed next to a community. EFF has documented numerous Anduril towers in public parks along the Rio Grande in Laredo and Roma, Texas. In Mission, Texas, an Anduril tower was placed outside an RV park: the tower could not even see the border without capturing data from the community. And because the AI can swivel the cameras 360 degrees, two churches fell within the "viewshed" of that tower.
Click here to view EFF's ongoing map of CBP surveillance towers.
Many border surveillance towers are placed on city or county property, requiring a lease approved by the local governing body–as is the case in San Clemente. In 2024, EFF and Imperial Valley Equity and Justice organized an effort to fight the renewal of a Border Patrol lease for a tower next to a public park. The coalition narrowly lost after a recall election ousted two officials who were critical of the lease.
CBP is rapidly increasing the number of towers at the border and beyond, recently announcing that it could install 1,500 more towers in the next few years–more than tripling what we've documented so far–at a cost to the public of more than $400 million for maintenance alone. This comes despite more than 20 years of government reports documenting how tower-based systems are ineffective and wasteful.
It's time to fight back.
EFF to 9th Circuit (Again): App Stores Shouldn’t Be Liable for Processing Payments for User Content
For the second time, EFF has filed an amicus brief in the U.S. Court of Appeals for the Ninth Circuit arguing that allowing cases against the Apple, Google, and Facebook app stores to proceed could lead to greater censorship of users’ online speech.
Our brief argues that the app stores should not lose Section 230 immunity for hosting “social casino” apps just because they process payments for virtual chips within those apps. Otherwise, all platforms that facilitate financial transactions for online content—beyond app stores and the apps and games they distribute—would be forced to censor user content to mitigate their legal exposure.
Social casino apps are online games where users can buy virtual chips with real money but can’t ever cash out their winnings. The three cases against Apple, Google, and Facebook were brought by plaintiffs who spent large sums of money on virtual chips and even became addicted to these games. The plaintiffs argue that social casino apps violate various state gambling laws.
At issue on appeal is the part of Section 230 that provides immunity to online platforms when they are sued for harmful content created by others—in this case, the social casino apps that plaintiffs downloaded from the various app stores and the virtual chips they bought within the apps.
Section 230 is the foundational law that has, since 1996, created legal breathing room for internet intermediaries (and their users) to publish third-party content. Online speech is largely mediated by these private companies, allowing all of us to speak, access information, and engage in commerce online, without requiring that we have loads of money or technical skills.
The lower court hearing the cases ruled that the companies do not have Section 230 immunity because they allow the social casino apps to use the platforms’ payment processing services for in-app purchases of virtual chips.
However, in our brief we urged the Ninth Circuit to reverse the district court and hold that Section 230 does apply to the app stores, even when they process payments for virtual chips within the social casino apps. The app stores would undeniably have Section 230 immunity if sued for simply hosting the allegedly illegal social casino apps in their respective stores. Congress made no distinction—and the court shouldn’t recognize one—between hosting third-party content and processing payments for the same third-party content. Both are editorial choices of the platforms that are protected by Section 230.
We also argued that a rule that exposes internet intermediaries to potential liability for facilitating a financial transaction related to unlawful user content would have huge implications beyond the app stores. All platforms that facilitate financial transactions for third-party content would be forced to censor any user speech that may in any way risk legal exposure for the platform. This would harm the open internet—the unique ability of anyone with an internet connection to communicate with others around the world cheaply, easily, and quickly.
The plaintiffs argue that the app stores could preserve their Section 230 immunity by simply refusing to process in-app purchases of virtual chips. But the plaintiffs’ position fails to recognize that other platforms don’t have such a choice. Etsy, for example, facilitates purchases of virtual art, while Patreon enables artists to be supported by memberships. Platforms like these would lose Section 230 immunity and be exposed to potential liability simply because they processed payments for user content that a plaintiff argues is illegal. That outcome would threaten the entire business models of these services, ultimately harming users’ ability to share and access online speech.
The app stores should be protected by Section 230—a law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on—irrespective of their role as payment processors.
