EFF: Updates
A Win for Encryption: France Rejects Backdoor Mandate
In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.
The vote is a victory for digital rights, for privacy and security, and for common sense.
The proposed law was a surveillance wishlist disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.
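The experts' objection is easiest to see in miniature. Below is a minimal sketch, not any real messaging protocol, with invented names: in end-to-end encrypted group chat, clients encrypt each message to every key on the group roster, so whoever controls the roster and its notifications controls who can read along.

```python
# Minimal illustrative sketch (hypothetical, not any real messaging protocol):
# a service that controls group membership can add a silent "ghost" member
# and suppress the notification users would normally see.
from dataclasses import dataclass, field

@dataclass
class GroupChat:
    members: dict = field(default_factory=dict)  # name -> public key
    hidden: set = field(default_factory=set)     # members never shown to users

    def add_member(self, name, public_key, announce=True):
        self.members[name] = public_key
        if not announce:
            # The "ghost" model: clients still encrypt to this key,
            # but no user is ever told the participant exists.
            self.hidden.add(name)

    def visible_roster(self):
        return [m for m in self.members if m not in self.hidden]

    def encryption_keys(self):
        # Every message is encrypted to ALL keys, hidden or not.
        return list(self.members.values())

chat = GroupChat()
chat.add_member("alice", "pk_alice")
chat.add_member("bob", "pk_bob")
chat.add_member("ghost_agent", "pk_police", announce=False)

assert "ghost_agent" not in chat.visible_roster()  # users see nothing unusual
assert "pk_police" in chat.encryption_keys()       # yet the ghost can read everything
```

Nothing in the cryptography is "broken" here; what breaks is the guarantee that users know who holds the keys, which is why experts call the ghost model a backdoor.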
The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.
A Global Signal
France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.
As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.
This victory was not inevitable. It came after sustained public pressure, expert input, and tireless advocacy from civil society. It shows that pushing back works. But for the foreseeable future, misguided lobbyists for police and national security agencies will continue to push similar proposals—perhaps repackaged, or rushed through quieter legislative moments.
Supporters of privacy should celebrate this win today. Tomorrow, we will continue to keep watch.
New USPTO Memo Makes Fighting Patent Trolls Even Harder
The U.S. Patent and Trademark Office (USPTO) just made a move that will protect bad patents at the expense of everyone else. In a memo released February 28, the USPTO further restricted access to inter partes review, or IPR—the process Congress created to let the public challenge invalid patents without having to wage million-dollar court battles.
If left unchecked, this decision will shield bad patents from scrutiny, embolden patent trolls, and make it even easier for hedge funds and large corporations to weaponize weak patents against small businesses and developers.
IPR Exists Because the Patent Office Makes Mistakes
The USPTO grants over 300,000 patents a year, but many of them should not have been issued in the first place. Patent examiners spend, on average, around 20 hours per patent, often missing key prior art or granting patents that are overly broad or vague. That’s how bogus patents on basic ideas—like podcasting, online shopping carts, or watching ads online—have ended up in court.
Congress created IPR in 2012 to fix this problem. IPR allows anyone to challenge a patent’s validity based on prior art, and it’s done before specialized judges at the USPTO, where experts can re-evaluate whether a patent was properly granted. It’s faster, cheaper, and often fairer than fighting it out in federal court.
The USPTO is Blocking Patent Challenges—Again
Instead of defending IPR, the USPTO is working to sabotage it. The February 28 memo reinstates a rule that allows for widespread use of “discretionary denials.” That’s when the Patent Trial and Appeal Board (PTAB) refuses to hear an IPR case for procedural reasons—even if the patent is likely invalid.
Specifically, the memo revives the Apple v. Fintiv rule, under which the USPTO can reject IPR petitions whenever there’s an ongoing district court case about the same patent. This is backwards. If anything, an active lawsuit is proof that a patent’s validity needs to be reviewed—not an excuse to dodge the issue.
In 2022, former USPTO Director Kathi Vidal issued a memo making clear that the PTAB should hear patent challenges when “a petition presents compelling evidence of unpatentability,” even if there is parallel court litigation.
That 2022 guidance essentially saved the IPR system. Once PTAB judges were told to consider all petitions that showed “compelling evidence,” procedural denials dropped to almost nothing. The February 28 memo signals that the USPTO will once again use discretionary denials to sharply limit access to IPR—effectively making patent challenges harder across the board.
Discretionary Denials Let Patent Trolls Rig the System
The top beneficiary of this decision will be patent trolls: shell companies formed expressly for the purpose of filing patent lawsuits. Often patent trolls seek to extract a quick settlement before a patent can be challenged. With IPR becoming increasingly unavailable, that will be easier than ever.
Patent owners know that discretionary denials will block IPRs if they file a lawsuit first. That’s why trolls flock to specific courts, like the Western District of Texas, where judges move cases quickly and rarely rule against patent owners.
By filing lawsuits in these troll-friendly courts, patent owners can game the system—forcing companies to pay up rather than risk millions in litigation costs.
The recent USPTO memo makes this problem even worse. Instead of stopping the abuse of discretionary denials, the USPTO is doubling down—undermining one of the most effective ways businesses, developers, and consumers can fight back against bad patents.
Congress Created IPR to Protect the Public—Not Just Patent Owners
The USPTO doesn’t get to rewrite the law. Congress passed IPR to ensure that weak patents don’t become weapons for extortionary lawsuits. By reinforcing discretionary denials with minimal restrictions, and, as a result, blocking access to IPRs, the USPTO is directly undermining what Congress intended.
Leaders at the USPTO should immediately revoke the February 28 memo. If they refuse, then, as we pointed out the last time IPR denials spiraled out of control, it’s time for Congress to step in and fix this. Lawmakers must ensure that IPR remains a fast, affordable way to challenge bad patents—not just a tool for the largest corporations. Patent quality matters—because when bad patents stand, we all pay the price.
How Do You Solve a Problem Like Google Search? Courts Must Enable Competition While Protecting Privacy.
Can we get from a world where Google is synonymous with search to a world where other search engines have a real chance to compete? The U.S. and state governments’ bipartisan antitrust suit, challenging the many ways that Google has maintained its search monopoly, offers an opportunity.
Antitrust enforcers have proposed a set of complementary remedies, from giving users a choice of search engine, to forcing Google to spin off Chrome and possibly Android into separate companies. Overall, this is the right approach. Google’s dominance in search is too entrenched to yield to a single fix. But there are real risks to users in the mix as well: Forced sharing of people’s sensitive search queries with competitors could seriously undermine user privacy, as could a breakup without adequate safeguards.
Let’s break it down.
The Antitrust Challenge to Google Search
The Google Search antitrust suit began in 2020 under the first Trump administration, brought by the Department of Justice and 11 states. (Another 38 states filed a companion suit.) The heart of the suit was Google’s agreements with mobile phone makers, browser makers, and wireless carriers, requiring that Google Search be the default search engine, in return for revenue share payments, including up to $20 billion per year that Google paid to Apple. A separate case, filed in 2023, challenged Google’s dominance in online advertising. Following a bench trial in fall 2023, Judge Amit Mehta of the D.C. federal court found Google’s search placement agreements to be illegal under the Sherman Antitrust Act, because they foreclosed competition in the markets for “general search” and “general search text advertising.”
The antitrust enforcers proposed a set of remedies in fall 2024, and filed a revised version this month, signaling that the new administration remains committed to the case. A hearing on remedies is scheduled for April.
The Obvious Fix: Ban Search Engine Exclusivity and Other Anticompetitive Agreements
The first part of the government’s remedy proposal bans Google from making the kinds of agreements that led to this lawsuit: agreements to make Google the default search engine on a variety of platforms, agreements to pre-install Google Search products on a platform, and other agreements that would give platforms an incentive not to develop a general search engine of their own. This would mean the end of Google’s pay-for-placement agreements with Apple, Samsung, other hardware makers, and browser vendors like Mozilla.
In practice, a ban on search engine default agreements means presenting users with a screen that prompts them to choose a default search engine from among various competitors. Choice screens aren’t a perfect solution, because people tend to stick with what they know. Still, research shows that choice screens can have a positive impact on competition if they are implemented thoughtfully. The court, and the technical committee appointed to oversee Google’s compliance, should apply the lessons of this research.
It makes sense that the first step of a remedy for illegal conduct should be stopping that illegal conduct. But that’s not enough on its own. Many users choose Google Search, and will continue to choose it, because it works well enough and is familiar. Also, as the evidence in this case demonstrated, the walls that Google has built around its search monopoly have kept potential rivals from gaining enough scale to deliver the best results for uncommon search queries. So we’ll need more tools to fix the competition problem.
Safe Sharing: Syndication and Search Index
The enforcers’ proposal also includes some measures that are meant to enable competitors to overcome the scale advantages that Google illegally obtained. One is requiring Google to let competitors use “syndicated” Google search results for 10 years, with no conditions or use restrictions other than “that Google may take reasonable steps to protect its brand, its reputation, and security.” Google would also have to share the results of “synthetic queries”—search terms generated by competitors to test Google’s results—and the “ranking signals” that underlie those queries. Many search engines, including DuckDuckGo, use syndicated search results from Microsoft’s Bing, and a few, like Startpage, receive syndicated results from Google. But Google currently limits re-ranking and mixing of those results—techniques that could allow competitors to offer real alternatives. Syndication is a powerful mechanism for giving rivals the benefits of Google’s scale, and a chance to eventually achieve that scale themselves, as the sketch below illustrates.
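As a rough illustration of what those restrictions foreclose, here is a minimal sketch, with invented data and function names, of a rival "mixing and re-ranking" syndicated results alongside results from its own smaller index:

```python
# Hypothetical sketch of "re-ranking and mixing" syndicated results;
# the data, scoring rule, and function names are all illustrative.

def mix_and_rerank(syndicated, own_results, score):
    """Merge two result lists (deduplicated by URL) and apply our own ranking."""
    merged = {r["url"]: r for r in syndicated + own_results}
    return sorted(merged.values(), key=score, reverse=True)

syndicated = [{"url": "a.com"}, {"url": "b.com"}]  # received from Google
own_results = [{"url": "c.org"}]                   # from the rival's own index

# A toy scoring rule a rival might apply, e.g. preferring non-commercial sources.
reranked = mix_and_rerank(syndicated, own_results, lambda r: r["url"].endswith(".org"))
print([r["url"] for r in reranked])  # ['c.org', 'a.com', 'b.com']
```

The point is that the ranking decisions, where search engines actually differentiate themselves, would belong to the competitor rather than to Google.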
Importantly, syndication doesn’t reveal Google users’ queries or other personal information, so it is a privacy-conscious tool.
Similarly, the proposal orders Google to make its index, the snapshot of the web that forms the basis for its search results, available to competitors. This too is reasonably privacy-conscious, because it presumably includes only data from web pages that were already visible to the public.
Scary Sharing: Users’ “Click and Query” Data
Another data-sharing proposal is more complicated from a privacy perspective: requiring Google to provide qualified competitors with “user-side data,” including users’ search queries and data sets used to train Google’s ranking algorithms. Those queries and data sets can include intensely personal details, including medical issues, political opinions and activities, and personal conflicts. Google is supposed to apply “security and privacy safeguards,” but it’s not clear how this will be accomplished. An order that requires Google to share even part of this data with competitors raises the risk of data breaches, improper law enforcement access, commercial data mining and aggregation, and other serious privacy harms.
Some in the search industry, including privacy-conscious companies like DuckDuckGo, argue that filtering this “click and query” data to remove personally identifying information can adequately protect users’ privacy while still helping Google’s competitors generate more useful search results. For example, Google could share only queries that were used by some minimum number of unique users. This is the approach Google already takes to sharing user data under the European Union’s Digital Markets Act, though Google sets a threshold so high that it eliminates about 97% of the data. Other possible rules include excluding strings of digits that could be Social Security or other identification numbers, along with other data patterns that may reveal sensitive information, as in the sketch below.
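Here is a minimal sketch of that kind of threshold filter, with an invented cutoff and log format (the details of Google's actual DMA filtering are not public at this level):

```python
# Illustrative k-anonymity-style filter for "click and query" data.
# The threshold and log format are hypothetical.
from collections import defaultdict

def shareable_queries(query_log, min_users=30):
    """query_log: iterable of (user_id, query) pairs.
    Returns only queries seen from at least `min_users` distinct users;
    rarer queries (more likely to identify someone) are dropped."""
    users_per_query = defaultdict(set)
    for user_id, query in query_log:
        users_per_query[query].add(user_id)
    return {q for q, users in users_per_query.items() if len(users) >= min_users}

log = [("u1", "weather boston"), ("u2", "weather boston"), ("u3", "weather boston"),
       ("u1", "my rare condition near elm street")]
print(shareable_queries(log, min_users=3))  # {'weather boston'}; the rare query is dropped
```

Even this toy shows the tension discussed next: raise the threshold and privacy improves but competitors get little data; lower it and the reverse.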
But click and query data sharing still sets up a direct conflict between competition and privacy. Google, naturally, wants to share as little data as possible, while competitors will want more. It’s not clear to us that there’s an optimal point that both protects users’ privacy well and also meaningfully promotes competition. More research might reveal a better answer, but until then, this is a dangerous path, where pursuing the benefits of competition for users might become a race to the bottom for users’ privacy.
The Sledgehammer: Splitting off Chrome and Maybe Android
The most dramatic part of the enforcers’ proposal calls for an order to split off the Chrome browser as a separate company, and potentially also the Android operating system. This could be a powerful way to open up search competition. An independent Chrome and Android could provide many opportunities for users to choose alternative search engines, and potentially to integrate with AI-based information location tools and other new search competitors. A breakup would complement the ban on agreements for search engine exclusivity by applying the same ban to Chrome and Android as to iOS and other platforms.
The complication here is that a newly independent Chrome or Android might have an incentive to exploit users’ privacy in other ways. Given a period of exclusivity in which Google could not offer a competing browser or mobile operating system, Chrome and Android could adopt a business model of monetizing users’ personal data to an even greater extent than Google. To prevent this, a divestiture (breakup) order would also have to include privacy safeguards, to keep the millions of Chrome and Android users from facing an even worse privacy landscape than they do now.
The DOJ and states are pursuing a strong, comprehensive remedy for Google’s monopoly abuses in search, and we hope they will see that effort through to a remedies hearing and the inevitable appeals. We’re also happy to see that the antitrust enforcers are seeking to preserve users’ privacy. To achieve that goal, and keep internet users’ consumer welfare squarely in sight, they should proceed with caution on any user data sharing, and on breakups.
State AGs Must Act: EFF Expands Call to Investigate Crisis Pregnancy Centers
Back in January, EFF called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable crisis pregnancy centers (CPCs) that engage in deceptive practices. Since then, some of these centers have begun to change their websites, quietly removing misleading language and privacy claims; the Hawaii legislature is considering a bill calling on the attorney general to investigate CPCs in the state; and legislators in Georgia have introduced a slate of bills to tackle deceptive CPC practices.
But there is much more to do. Today, we’re expanding our call to attorneys general in Tennessee, Oklahoma, Nebraska, and North Carolina, urging them to investigate the centers in their states.
Many CPCs have been operating under a veil of misleading promises for years, suggesting that clients’ personal health data is protected under HIPAA even though numerous reports suggest otherwise: that privacy policies are not followed consistently, and that clients’ personal data may be shared across networks without appropriate consent. For example, in a case in Louisiana, we saw firsthand how a CPC inadvertently exposed personal data from multiple clients in a software training video. This kind of error not only violates individuals’ privacy but could also lead to emotional and psychological harm for individuals who trusted these centers with their sensitive information.
In our letters to Attorneys General Hilgers, Jackson, Drummond, and Skrmetti, we list multiple examples of CPCs in each of these states that claim to comply with HIPAA. Those include:
- Gateway Women’s Care in North Carolina claims that “we hold your right to confidentiality with the utmost care and respect and comply with HIPAA privacy standards, which protect your personal and health information” in a blog post titled “Is My Visit Confidential?” Gateway Women’s Care received $56,514 in government grants in 2023.
- Assure Women’s Center in Nebraska stresses that it is “HIPAA compliant!” in a blog post that expressly urges people to visit them “before your doctor.”
As we’ve noted before, there are far too few protections for user privacy—including medical privacy—and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices.
It’s time for state and federal leaders to reassess how public funds are allocated to these centers. Our elected officials are responsible for ensuring that personal information, especially our sensitive medical data, is protected. After all, no one should have to choose between their healthcare and their privacy.
EFF’s Reflections from RightsCon 2025
EFF was delighted to once again attend RightsCon—this year hosted in Taipei, Taiwan, from 24 to 27 February. As with previous years, RightsCon provided an invaluable opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions.
For some attending from EFF, this was the first RightsCon. For others, their 10th or 11th. But for all, one message came through loud and clear: the need to collectivize digital rights in the face of growing authoritarian governments and leaders occupying positions of power around the globe, as well as Big Tech’s creation and provision of consumer technologies for use in rights-abusing ways.
EFF hosted a multitude of sessions, and appeared on many more panels—from a global perspective on platform accountability frameworks, to the perverse gears supporting transnational repression, to tech tools for queer liberation online. Here we share some of our highlights.
Major Concerns Around Funding Cuts to Civil Society
Two major shifts affecting the digital rights space underlined the renewed need for solidarity and collective responses. First, the Trump administration’s summary (and largely illegal) funding cuts for the global digital rights movement from USAID, the State Department, the National Endowment for Democracy, and other programs are impacting many digital rights organizations across the globe and deeply harming the field. By some estimates, U.S. government cuts, along with major changes in the Netherlands and elsewhere, will result in a 30% reduction in the size of the global digital rights community, especially in global majority countries.
Second, the Trump administration’s announcement that it will respond to the regulation of U.S. tech companies with tariffs has thrown another wrench into the work of the many of us pushing for improved tech accountability.
We know that attacks on civil society, especially on funding, are a go-to strategy for authoritarian rulers, so this is deeply troubling. Even in more democratic settings, this reinforces the shrinking of civic space, hindering our collective ability to organize and fight for better futures. Given the size of the cuts, it’s clear that other funders will struggle to counterbalance the dwindling U.S. public funding, but they must try. We urge other countries and regions, as well as individuals and a broader range of philanthropy, to step up to ensure that the crucial work defending human rights online can continue.
Community Solidarity with Alaa Abd El-Fattah and Laila Soueif
The call to free Alaa Abd El-Fattah from illegal detention in Egypt was a prominent message heard throughout RightsCon. During the opening ceremony, Access Now’s new Executive Director, Alejandro Mayoral, talked about Alaa’s keynote speech at the very first RightsCon and stated: “We stand in solidarity with him and all civil society actors, activists, and journalists whose governments are silencing them.” The opening ceremony also included a video address from Alaa’s mother, Laila Soueif, in which she urged viewers to “not let our defeat be permanent.” Sadly, immediately after that address, Ms. Soueif was admitted to the hospital as a result of her longstanding hunger strike in support of her son.
The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at UK-based digital rights group Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”
EFF and Open Rights Group also hosted a session about Alaa and his work as a blogger, coder, and activist over more than two decades. The session included a reading from Alaa’s book and a discussion with participants on strategies.
Platform Accountability in Crisis
Online platforms like Facebook and services like Google are crucial spaces for civic discourse and access to information. Many sessions at RightsCon were dedicated to the growing concern that these platforms have also become powerful tools for political manipulation, censorship, and control. With the return of the Trump administration, Facebook’s shift in hate speech policies, and the growing geo-politicization of digital governance, many now consider platform accountability to be in crisis.
A dedicated “Day 0” event, co-organized by Access Now and EFF, set the stage for these discussions with a high-level panel reflecting on alarming developments in platform content policies and enforcement. Reflecting on Access Now’s “rule of law checklist,” speakers stressed how a small group of powerful individuals increasingly dictates how platforms operate, raising concerns about democratic resilience and accountability. They also highlighted the need for deeper collaboration with global majority countries on digital governance, taking into account diverse regional challenges. Beyond regulation, the conversation explored the potential of user-empowered alternatives, such as decentralized services, to counter platform dominance and offer more sustainable governance models.
A key point of attention was the EU’s Digital Services Act (DSA), a rulebook with the potential to shape global responses to platform accountability but one that also leaves many crucial questions open. The conversation naturally transitioned to the workshop organized by the DSA Human Rights Alliance, which focused more specifically on the global implications of DSA enforcement and how principles for a “Human Rights-Centered Application of the DSA” could foster public interest and collaboration.
Fighting Internet Shutdowns and Anti-Censorship Tools
Many sessions discussed how internet shutdowns and other forms of internet blocking impact the daily lives of people under extremely oppressive regimes. The overwhelming conclusion was that encryption must remain strong in countries with healthier democracies in order to continue to bridge access to services in places where democracy is weak. Breaking encryption or blocking important tools, whether for “national security,” elections, exams, protests, or law enforcement, only endangers freedom of information for those with less political power. In turn, these actions empower governments to take possibly inhumane actions while the “lights are out” and people can’t tell the rest of the world what is happening to them.
Another pertinent point coming out of RightsCon was that anti-censorship tools work best when everyone is using them. Diversity of users not only helps to create bridges for others who can’t access the internet through normal means, but also generates traffic that looks innocuous enough to bypass censorship blockers. Discussions highlighted that the more tools we have to connect people without producing unique traffic signatures, the fewer chances government censorship technology has to keep their traffic from going through. We know some governments are not above completely shutting down internet access. But in cases where they still allow the internet, user diversity is key. It also helps to move away from narratives that imply “only criminals” use encryption. Encryption is for everyone, and everyone should use it, because tomorrow’s internet may be tested by new threats.
At this year’s RightsCon, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right. The campaign comes on the back of more than 17 months of internet blackouts and destruction of Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy.
On another panel, EFF raised concerns to Microsoft representatives about an AP report, published just prior to RightsCon, that the company is providing services to the Israel Defense Forces that are being used as part of the repression of Palestinians in Gaza, as well as in the bombings in Lebanon. We noted that Microsoft’s pledges to support human rights seem to be in conflict with this, something EFF has already raised about Google and Amazon and their work on Project Nimbus. Microsoft promised to look into that allegation, as well as one about its provision of services to Saudi Arabia.
In the RightsCon opening ceremony, Alejandro Mayoral noted that: “Today, the world’s eyes are on Gaza, where genocide has taken place, AI is being weaponized, and people’s voices are silenced as the first phase of the fragile Palestinian-Israeli ceasefire is realized.” He followed up by saying, “We are surrounded by conflict. Palestine, Sudan, Myanmar, Ukraine, and beyond…where the internet and technology are being used and abused at the cost of human lives.” Following this keynote, Access Now’s MENA Policy and Advocacy Director, Marwa Fatafta, hosted a roundtable to discuss technology in times of conflict, where takeaways included the reminder that “there is no greater microcosm of the world’s digital rights violations happening in our world today than in Gaza. It’s a laboratory where the most invasive and deadly technologies are being tested and deployed on a besieged population.”
Countering Cross-Border Arbitrary Surveillance and Transnational RepressionConcerns about ongoing legal instruments that can be misused to expand transnational repression were also front-and-center at RightsCon. During a Citizen Lab-hosted session we participated in, participants examined how cross-border policing can become a tool to criminalize marginalized groups, the economic incentives driving these criminalization trends, and the urgent need for robust, concrete, and enforceable international human rights safeguards. They also noted that the newly approved UN Cybercrime Convention, with only minimal protections, adds yet another mechanism for broadening cross-border surveillance powers, thereby compounding the proliferation of legal frameworks that lack adequate guardrails against misuse.
Age-Gating the Internet
EFF co-hosted a roundtable session to workshop a human rights statement addressing government mandates to restrict young people’s access to online services and specific legal online speech. Participants in the roundtable represented five continents and included representatives from civil society and academia, some of whom focused on digital rights and some on children’s rights. Many of the participants will continue to refine the statement in the coming months.
Hard Conversations
EFF participated in a cybersecurity conversation with representatives of the UK government, where we raised serious concerns about the government’s hostility to strong encryption, including its pressure on Apple to ensure UK law enforcement access to all communications, and about the resulting insecurity for both UK citizens and the people who communicate with them.
Equity and Inclusion in Platform Discussions, Policies, and Trust & Safety
The platform economy is an evergreen RightsCon topic, and this year was no different, with conversations ranging from the impact of content moderation on free expression to transparency in monetization policies, and much in between. Given the recent developments at Meta, X, and elsewhere, many participants were rightfully eager to engage.
EFF co-organized an informal meetup of global content moderation experts with whom we regularly convene, and participated in a number of sessions, such as on the decline of user agency on platforms in the face of growing centralized services, as well as ways to expand choice through decentralized services and platforms. One notable session on this topic was hosted by the Center for Democracy and Technology on addressing global inequities in content moderation, in which speakers presented findings from their research on the moderation by various platforms of content in Maghrebi Arabic and Kiswahili, as well as a forthcoming paper on Quechua.
Reflections and Next Steps
RightsCon is a conference that reminds us of the size and scope of the digital rights movement around the world. Holding it in Taiwan, in the wake of huge funding cuts for so many, created an urgency that was palpable across the spectrum of sessions and events. We know that we’ve built a robust community that can weather these storms. In the face of overwhelming pressure from government and corporate actors, it’s essential that we resist the temptation to isolate and instead continue to push forward with collectivization and collaboration, speaking truth to power from the U.S. to Germany and across the globe.
California’s A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly
California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.
A Burden That Small Developers Can’t Bear
The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones: there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche.
This bill demands that creators of any AI model—even a two-person company or a hobbyist tinkering with a small software build—identify the copyrighted materials used in training. That requirement would be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best: neither machine-readable nor accessible, it’s more like a card catalog than a database, and it doesn’t offer information sufficient to identify all authors of a work, much less help developers reliably match works in a training set to works in the system.
Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, throwing on such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.
A.I. Training Is Like Reading, And It’s Very Likely Fair Use
A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping, or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use.
It’s now a bedrock principle of internet law that some forms of copying content online are transformative, and thus legal fair use. That includes reproducing thumbnail images for image search, or snippets of text to search books.
The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.
Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux.
A.B. 412 Is A Gift to Big Tech
The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.
If lawmakers iron out some of the practical problems with A.B. 412 and pass some version of it, they may effectively force programmers to research—and pay off—copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition so limited, that profits for big incumbent companies will be locked in for a generation.
If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.
EFF Joins 7amleh Campaign to #ReconnectGaza
In times of conflict, the internet becomes more than just a tool—it is a lifeline, connecting those caught in chaos with the outside world. It carries voices that might otherwise be silenced, bearing witness to suffering and survival. Without internet access, communities become isolated, and the flow of critical information is disrupted, making an already dire situation even worse.
At this year’s RightsCon conference, hosted in Taiwan, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right.
The campaign comes on the back of more than 17 months of internet blackouts and destruction of Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy. In response, there is an urgent and immediate need to deploy emergency solutions, such as eSIM cards, satellite internet access, and mobile communications hubs.
At the same time, there is an opportunity to rebuild towards a just and permanent solution with modern technologies that would enable reliable, high-speed connectivity to support education, healthcare, and economic growth. The campaign calls for this as a paramount component of reconnecting Gaza, while also ensuring the safety and protection of telecommunications workers on the ground, who risk their lives to repair and maintain critical infrastructure.
Further, beyond responding to these immediate needs, 7amleh and the #ReconnectGaza campaign demand the establishment of an independent Palestinian ICT sector, free from external control, as a cornerstone of Gaza’s reconstruction and Palestine’s digital sovereignty. Palestinians have been subject to Israeli internet controls since the Oslo Accords, which settled that Palestine should have its own telephone, radio, and TV networks, but handed over the details to a joint technical committee. Ending the deliberate isolation of the Palestinian people is critical to protecting fundamental human rights.
This is not the first time internet shutdowns have been weaponized as a tool for oppression. In 2012, Palestinians in Gaza were subject to frequent power outages and were forced to rely on generators and insecure dial-up connections for connectivity. More recently, since October 7, 2023, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications can cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world.
Access to internet infrastructure is essential—it enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. It is imperative that people’s access to the internet remains protected.
The restoration of telecommunications in Gaza is an urgent humanitarian need. Global stakeholders, including UN agencies, governments, and telecommunications companies, must act swiftly to ensure the restoration and modernization of Gaza’s telecommunications.
The Foilies 2025
Co-written by MuckRock's Michael Morisy, Dillon Bergin, and Kelly Kauffman
The public's right to access government information is constantly under siege across the United States, from both sides of the political aisle. In Maryland, where Democrats hold majorities, the attorney general and state legislature are pushing a bill to allow agencies to reject public records requests that they consider "harassing." At the same time, President Donald Trump's administration has moved its most aggressive government reform effort—the Department of Government Efficiency, or DOGE—outside the reach of the Freedom of Information Act (FOIA), while also beginning the mass removal of public data sets.
One of the most powerful tools to fight back against bad governance is public ridicule. That's where we come in: Every year during Sunshine Week (March 16-22), the Electronic Frontier Foundation, MuckRock, and AAN Publishers team up to publish The Foilies. This annual report—now a decade old—names and shames the most repugnant, absurd, and incompetent responses to public records requests under FOIA and state transparency laws.
Sometimes the good guys win. For example, last year we highlighted the Los Angeles Police Department for using the courts to retaliate against advocates and a journalist who had rightfully received and published official photographs of police officers. The happy ending (at least for transparency): LAPD has since lost the case, and the city paid the advocates $300,000 to cover their legal bills.
Here are this year's "winners." While they may not all pay up, at least we can make sure they get the negative publicity they're owed.
The Exorbitant FOIA Fee of the Year: Rapides Parish School District
After a church distributed a religious tract at Lessie Moore Elementary School in Pineville, La., young students, noting its frank discussion of mature themes, quickly dubbed it “the sex book.” Hirsh M. Joshi of the Freedom From Religion Foundation, a lawyer representing a parent, filed a request with the Rapides Parish School District to try to get some basic information: How much did the school coordinate with the church distributing the material? Did other parents complain? What was the internal reaction? Joshi was stunned when the school district responded with an initial estimate of $2 million to cover the cost of processing the request. After local media picked up the story and a bit of negotiating, the school ultimately waived the charges and responded with a mere nine pages of responsive material.
While Rapides Parish’s sky-high estimate ultimately took home the gold this year, there was fierce competition. The Massachusetts State Police wanted $176,431 just to review—and potentially not even release—materials about recruits who leave the state’s training program early. Back in Louisiana, the Jefferson Parish District Attorney’s office insisted on charging a grieving father more than $5,000 for records on the suspicious death of his own son.
The Now You See It, Now You Don’t Award: University of Wisconsin-Madison
Sports reporter Daniel Libit’s public records request is at the heart of a lawsuit that looks a lot like the Spider-Man pointing meme. In 2023, Libit filed the request for a contract between the University of Wisconsin and Altius Sports Partners, a firm that consults college athletic programs on payment strategies for college athletes (“Name, Image, Likeness” or NIL deals), after reading a university press release about the partnership. The university denied the request, claiming that Altius was actually contracted by the University of Wisconsin Foundation, a separate 501(c)(3). So, Libit asked the foundation for the contract. The foundation then denied the request, claiming it was exempt from Wisconsin’s open records laws. After the denial, Libit filed a lawsuit for the records, which was then dismissed, because the university and foundation argued that Libit had incorrectly asked for a contract between the university and Altius, as opposed to the foundation and Altius.
The foundation did produce a copy of the contract in the lawsuit, but the game of hiding the ball makes one thing clear, as Libit wrote after: “If it requires this kind of effort to get a relatively prosaic NIL consultant contract, imagine the lengths schools are willing to go to keep the really interesting stuff hidden.”
The Fudged Up Beyond All Recognition Award: Central Intelligence Agency
A CIA official's grandma's fudge recipe was too secret for public consumption.
There are state secrets, and there are family secrets, and sometimes they mix … like a creamy, gooey confectionary.
After Mike Pompeo finished his first year as Trump's CIA director in 2017, investigative reporter Jason Leopold sent a FOIA request asking for all of the memos Pompeo sent to staff. Seven years later, the agency finally produced the records, including a "Merry Christmas and Happy New Year" message recounting the annual holiday reception and gingerbread competition, which was won by a Game of Thrones-themed entry. ("And good use of ice cream cones!" Pompeo wrote.) At the party, Pompeo handed out cards with his mom's "secret" recipe for fudge, and for those who couldn't make it, he also sent it out as an email attachment.
But the CIA redacted the whole thing, vaguely claiming it was protected from disclosure under federal law. This isn't the first time the federal government has protected Pompeo's culinary secrets: In 2021, the State Department redacted Pompeo's pizza toppings and favorite sandwich from emails.
The You Can't Handle the Truth Award: Virginia Gov. Glenn Youngkin
In Virginia, state officials have come under fire in the past few years for shielding records from the public under the broad use of a “working papers and correspondence” FOIA exemption. When a public records request came in for internal communications on the state’s Military Survivors and Dependents Education Program, which provides tuition-free college to spouses and children of military veterans killed or disabled as a result of their service, Gov. Glenn Youngkin’s office used this “working papers” exemption to reject the FOIA request.
The twist is that the request was made by Kayla Owen, a military spouse and a member of the governor’s own task force studying the program. Despite Owen’s attempts to correct the parameters of the request, Youngkin’s office made the final decision in July to withhold more than two folders’ worth of communications with officials who have been involved with policy discussions about the program.
The Courts Cloaked in Secrecy Award (Tie): Solano County Superior Court, Calif., and Washoe County District Court, Nev.
Courts are usually the last place the public can go to vindicate their rights to government records when agencies flout them. When agencies lock down records, courts usually provide the key to open them up.
Except in Vallejo, Calif., where a state trial court judge decided to lock his own courtroom during a public records lawsuit—a move that even Franz Kafka would have dismissed as too surreal and ironic. The suit filed by the American Civil Liberties Union sought a report detailing a disturbing ritual in which officers bent their badges to celebrate their on-duty killings of local residents.
When public access advocates filed an emergency motion to protest the court closure, the court denied it without even letting them in to argue their case. This was not just a bad look; it violated the California and U.S. constitutions, which guarantee public access to court proceedings and a public hearing prior to barring the courtroom doors.
Not to be outdone, a Nevada trial court judge has twice barred a local group from filming hearings concerning a public records lawsuit. The request sought records of an alleged domestic violence incident at the Reno city manager’s house. Despite the Nevada Supreme Court rebuking the judge for prohibiting cameras in her courtroom, she later barred the same group from filming another hearing. The transparency group continues to fight for camera access, but its persistence should not be necessary: The court should have let them record from the get-go.
NSA claimed it didn't have the obsolete tech to access a lecture by military computing pioneer Grace Hopper
In 1982, Rear Adm. Grace Hopper (then a captain) presented a lecture to the National Security Agency entitled “Future Possibilities: Data, Hardware, Software, and People.” One can only imagine Hopper's disappointment if she had lived long enough to learn that in the future, the NSA would claim it was impossible for its people to access the recording of the talk.
Hopper is undoubtedly a major figure in the history of computing whose records and lectures are of undeniable historical value, and Michael Ravnitzky, frequent FOIA requester and founder of Government Attic, requested this particular lecture back in 2021. Three years later, the NSA responded to tell him that they had no responsive documents.
Befuddled, Ravnitzky pointed out the lecture had been listed in the NSA’s own Television Center Catalogue. At that point, the agency copped to the actual issue. Yes, it had the record, but it was captured on AMPEX 1-inch open reel tapes, as was more common in the 1980s. Despite being a major intelligence agency with high-tech surveillance and communication capabilities, it claimed it could not find any way to access the recording.
Let’s unpack the multi-layered egregiousness of the NSA’s actions here. It took the agency three years to respond to this FOIA. When it did, the NSA claimed that it had nothing responsive, which was a lie. But the most colossal failure by the NSA was its claim that it couldn’t find a way to make accessible to the public important moments from our history because of technical difficulties.
But leave it to librarians to put spies to shame: The National Archives stepped in to help, and now you can watch the lecture in two parts.
Can't get enough of The Foilies? Check out our decade in review and our archives!
“Guardrails” Won’t Protect Nashville Residents From AI-Enabled Camera Networks
Nashville’s Metropolitan Council is one vote away from passing an ordinance that’s being branded as “guardrails” against the privacy problems that come with giving the police a connected camera system like Axon’s Fusus. But Nashville locals are right to be skeptical of just how much protection from mass surveillance products they can expect.
"I am against these guardrails," council member Ginny Welsch told the Tennessean recently. "I think they're kind of a farce. I don't think there can be any guardrail when we are giving up our privacy and putting in a surveillance system."
Likewise, Electronic Frontier Alliance member Lucy Parsons Labs has inveighed against Fusus and the supposed guardrails as a fix to legislators’ and residents’ concerns in a letter to the Metropolitan Council.
While the ordinance doesn’t name the company specifically, it was introduced in response to privacy concerns over the city’s possible contract for Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. In particular, local opponents are concerned about data-sharing—a critical part of Fusus—that could impede the city’s ability to uphold its values against the criminalization of some residents, like undocumented immigrants and people seeking reproductive or gender-affirming care.
This technology product, which was acquired by the police surveillance giant Axon in 2024, facilitates two major functions for police:
- With the click of a button—or the tap of an icon on a map—officers can get access to live camera footage from public and private cameras, including the police’s Axon body-worn cameras, that have been integrated into the Fusus network.
- Data feeds from a variety of surveillance tools—like body-worn cameras, drones, gunshot detection, and the connected camera network—can be aggregated into a system that makes those streams quickly accessible and susceptible to further analysis by features marketed as “artificial intelligence.”
From 2022 through 2023, the Metropolitan Nashville Police Department (MNPD) had, unbeknownst to the public, already been using Fusus. When the contract came back under consideration, a public outcry and unanswered questions about the system led to its suspension, and the issue was deferred multiple times before the contract renewal was voted down late last year. Nashville council members determined that the Fusus system posed too great a threat to vulnerable groups that the council has sought to protect with city policies and resolutions, including pregnant residents, immigrants, and residents seeking gender-affirming care, among others. The state has criminalized some of the populations that the city of Nashville has passed ordinances to protect.
Unfortunately, the fight against the sprawling surveillance of Fusus continues. The city council is now making its final consideration of the aforementioned ordinance, which some of its members say will protect city residents in the event that the mayor and other Fusus fans are able to get a contract signed after all.
These so-called guardrails include:
- restricting the MNPD from accessing private cameras or installing public safety cameras in locations “where there is a reasonable expectation of privacy”;
- prohibiting using face recognition to identify individuals in the connected camera system’s footage;
- policies addressing authorized access to and use of the connected camera system, including how officers will be trained, and how they will be disciplined for any violations of the policy;
- quarterly audits of access to the connected camera system;
- mandatory inclusion of a clause in procurement contracts allowing for immediate termination should violations of the ordinance be identified;
- mandatory reporting to the mayor and the council about any violations of the ordinance, the policies, or other abuse of access to the camera network within seven days of the discovery.
Here’s the thing: even if these limited “guardrails” are in place, the only true protection from the improper use of the AI-enabled Fusus system is to not use it at all.
We’ve seen that when law enforcement has access to cameras, they will use them, even if there are clear regulations prohibiting those uses:
- During protests against police brutality in San Francisco, police used live access to cameras to illegally spy on protestors.
- Black residents of a subsidized housing development became the primary surveillance targets for police officers with Fusus access in Toledo, Ohio.
- Officers in Massachusetts have been able to use cameras with live access to conduct months-long, ongoing warrantless surveillance.
Firms such as Fusus and its parent company Axon are pushing AI-driven features and databases with interjurisdictional access. Surveillance technology is bending toward a future where all of our data are captured: our movements by street cameras (like those that would be added to Fusus), our driving patterns by ALPR, our living habits by apps, and our actions online by web trackers, all of it then combined, sold, and shared.
When Nashville first started its relationship with Fusus in 2022, the company featured only a few products, primarily focused on standardizing video feeds from different camera providers.
Now, Fusus is aggressively leaning into artificial intelligence, claiming that its “AI on the Edge” feature is built into the initial capture phase and processes video as soon as it is taken. Even if the city bans use of face recognition for the connected camera system, the Fusus system boasts that it can detect humans and objects, and can combine other characteristics to identify individuals, detect movements, and set notifications based on certain characteristics and behaviors. Marketing material claims that the system comes “pre-loaded with dozens of search and analysis variables and profiles that are ready for action,” including a “robust & growing AI library.” It’s unclear how these AI recognition options are generated or how they are vetted, if at all, or whether they can even be removed, as would be required by the ordinance. A sketch of that loophole follows.
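To make the loophole concrete, here is a minimal hypothetical sketch, not Fusus code, of how attribute-based video analytics can single out a person without ever running face recognition; the attribute names and camera IDs are invented:

```python
# Hypothetical sketch: attribute-based tracking without face recognition.
# All field names, values, and camera IDs are invented for illustration.

profile = {"type": "person", "upper_color": "red", "has_backpack": True}

detections = [
    {"type": "person", "upper_color": "red", "has_backpack": True, "cam": "cam_5th_ave_03"},
    {"type": "person", "upper_color": "blue", "has_backpack": False, "cam": "cam_main_st_11"},
]

def matches(detection, profile):
    """True when every attribute in the profile matches the detection."""
    return all(detection.get(key) == value for key, value in profile.items())

alerts = [d["cam"] for d in detections if matches(d, profile)]
print(alerts)  # ['cam_5th_ave_03']: one person singled out, no face recognition used
```

A ban written narrowly around "face recognition" does nothing to prevent this kind of identification by combined characteristics, which Fusus's own marketing describes.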
The proposed “guardrails” in Nashville are insufficient to address the dangers posed by mass surveillance systems, and the city of Nashville shouldn’t assume it has protected its residents, tourists, and other visitors by passing them. Nashville residents and other advocacy groups have already raised concerns.
The only true way to protect Nashville’s residents against dragnet surveillance and overcriminalization is to block access to these invasive technologies altogether. Though this ordinance has passed its second reading, Nashville should not adopt Fusus or any other connected camera system, regardless of whether the ordinance ultimately passes. If council members care about protecting their constituents, they should hold the line against Fusus.
EFF to NSF: AI Action Plan Must Put People First
This past January, the new administration issued an executive order on artificial intelligence (AI), taking the place of the now-rescinded Biden-era order and calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and removing “engineered social agendas” from it. This new action plan for the president is currently being developed, with the National Science Foundation (NSF) collecting public comments.
EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.
Government Automated Decision Making
US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies but risks infringing the civil liberties of all people subject to these automated decisions.
These harms aren’t theoretical: we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers and to use the results to make decisions about their continued employment.
Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct their inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.
Instead, the adoption of such tools must be done with a robust public notice-and-comment practice, as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil and identifies when the use of such AI tools is inappropriate or harmful.
Additionally, the AI Action Plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models and for ensuring they uphold a fairer and more scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.
Spurred by general anxiety about generative AI, lawmakers have drafted sweeping regulations based on speculation and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.
For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.
Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.
This is the wrong approach. Looking for easy solutions like expanding copyright hurts everyone, particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of the industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.
People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.
You can read our comments here.
EFF Thanks Fastly for Donated Tools to Help Keep Our Website Secure
EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online.
Eff.org was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown.
It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily.
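For the technically curious, here is a minimal sketch of the general idea behind rate limiting, using the classic token-bucket approach. This is purely illustrative: it is not Fastly's product, configuration, or API, and the class name and numbers below are hypothetical.

```python
# Illustrative token-bucket rate limiter. This sketches the general
# technique behind rate limiting; it is NOT Fastly's implementation
# or API, and all names and numbers here are hypothetical.
import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = burst          # maximum burst of requests allowed
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request may proceed, False if it should be throttled."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Keeping one bucket per client lets ordinary visitors through, while a
# scraper hammering the site quickly runs out of tokens (e.g., HTTP 429).
bucket = TokenBucket(rate_per_sec=5, burst=10)
print("allowed" if bucket.allow() else "429 Too Many Requests")
```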
EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.
EFFecting Change: Is There Hope for Social Media?
Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech.
EFFecting Change Livestream Series: Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!
Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away?
Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving.
We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.
Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.
EFF Joins AllOut’s Campaign Calling for Meta to Stop Hate Speech Against LGBTQ+ Community
In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:
People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech.
The revision of this policy, timed to Trump’s second election, demonstrates that the company is focused on allowing more hateful speech against specific groups, with a particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity.
In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to demand that Meta immediately reverse the policy changes. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads.
The campaign is supported by the following partners: All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre Marcos Passerini - CDMP, Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, and ITTC - Instituto Terra, Trabalho e Cidadania.
Sign the AllOut petition (external link) and tell Meta: Stop hate speech against LGBT+ people!
If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.
In Memoriam: Mark Klein, AT&T Whistleblower Who Revealed NSA Mass Spying
EFF is deeply saddened to learn of the passing of Mark Klein, a bona fide hero who risked civil liability and criminal prosecution to help expose a massive spying program that violated the rights of millions of Americans.
Mark didn’t set out to change the world. For 22 years, he was a telecommunications technician for AT&T, most of that in San Francisco. But he always had a strong sense of right and wrong and a commitment to privacy.
When the New York Times reported in late 2005 that the NSA was engaging in spying inside the U.S., Mark realized that he had witnessed how it was happening. He also realized that the President was not telling Americans the truth about the program. And, though newly retired, he knew that he had to do something. He showed up at EFF’s front door in early 2006 with a simple question: “Do you folks care about privacy?”
We did. And what Mark told us changed everything. Through his work, Mark had learned that the National Security Agency (NSA) had installed a secret, secure room at AT&T’s central office in San Francisco, called Room 641A. Mark was assigned to connect circuits carrying Internet data to optical “splitters” that sat just outside of the secret NSA room but were hardwired into it. Those splitters—as well as similar ones in cities around the U.S.—made a copy of all data going through those circuits and delivered it into the secret room.
A photo of the NSA-controlled 'secret room' in the AT&T facility in San Francisco (Credit: Mark Klein)
Mark not only saw how it worked; he had the documents to prove it. He brought us over a hundred pages of authenticated AT&T schematic diagrams and tables. Mark also shared this information with major media outlets, numerous Congressional staffers, and at least two senators personally. One, Senator Chris Dodd, took the floor of the Senate to acknowledge Mark as the great American hero he was.
We used Mark’s evidence to bring two lawsuits against the NSA spying that he uncovered. The first was Hepting v. AT&T and the second was Jewel v. NSA. Mark also came with us to Washington D.C. to push for an end to the spying and demand accountability for it happening in secret for so many years. He wrote an account of his experience called Wiring Up the Big Brother Machine . . . And Fighting It.
Archival EFF graphic promoting Mark Klein's DC tour
Mark stood up and told the truth at great personal risk to himself and his family. AT&T threatened to sue him, although it wisely decided not to do so. While we were able to use his evidence to make some change, both EFF and Mark were ultimately let down by Congress and the Courts, which have refused to take the steps necessary to end the mass spying even after Edward Snowden provided even more evidence of it in 2013.
But Mark certainly inspired all of us at EFF, and he helped inspire and inform hundreds of thousands of ordinary Americans to demand an end to illegal mass surveillance. While we have not yet seen the end to the spying that we have all hoped for, his bravery has helped usher in numerous reforms.
And the fight is not over. Section 702, the law that now authorizes the continued surveillance that Mark first revealed, expires in early 2026. EFF and others will continue to push for reforms and, ultimately, for the illegal spying to end entirely.
Mark’s legacy lives on in our continuing fights to reform surveillance and honor the Fourth Amendment’s promise of protecting personal privacy. We are forever grateful to him for having the courage to stand up and will do our best to honor that legacy by continuing the fight.
EFF Stands with Perkins Coie and the Rule of Law
As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including numerous legal challenges to federal government overreach, Electronic Frontier Foundation unequivocally supports Perkins Coie’s challenge to the Trump administration’s shocking, vindictive, and unconstitutional Executive Order. In punishing the law firm for its zealous advocacy on behalf of its clients, the order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie (and its legal representatives) for fighting back.
Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect a confidence in both the judiciary to decide such matters fairly and justly, and the executive to abide by the court’s determination. They are a backstop against autocracy and a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803).
The Executive Order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs.
This is a fundamental abuse of executive power.
The constitutional problems are legion, but here are a few:
- The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velazquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545.
- The Executive Order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Nieves v. Bartlett, 587 U.S. 391, 398 (2019).
- And the Executive Order functions as an illegal loyalty oath for the entire legal profession, conditioning access to federal courthouses or client relationships with government contractors on fealty to the executive branch, including forswearing protected speech in opposition to it. That condition is blatantly unlawful: The government cannot require that those it works with or hires embrace certain political beliefs or promise that they have “not engaged, or will not engage, in protected speech activities such as … criticizing institutions of government.” See Cole v. Richardson, 405 U.S. 676, 680 (1972).
Civil liberties advocates such as EFF rely on the rule of law and access to the courts to vindicate their clients’, and the public’s, fundamental rights. From this vantage point, we can see that this Executive Order is nothing less than an attack on the foundational principles of American democracy.
The Executive Order must be swiftly nullified by the court and uniformly vilified by the entire legal profession.
Click here for the number to listen in on a hearing on a temporary restraining order, scheduled for 2pm ET/11am PT on Wednesday, March 12.
Anchorage Police Department: AI-Generated Police Reports Don’t Save Time
The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees.
The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review reports cut into the time savings from generating the report. The software translates the audio into narrative, and officers are expected to read through the report carefully to edit it, add details, and verify it for authenticity. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves. “So if they saw something but didn’t say it, of course, the body cam isn’t going to know that,” the deputy chief continued.
The Anchorage Police Department is not alone in finding that Draft One is not a time-saving device for officers. A new study of police use of AI to write reports, which specifically tested Axon’s Draft One, found that AI-assisted report-writing offered no real time-savings advantage.
This news comes on the heels of policymakers and prosecutors casting doubt on the utility or accuracy of AI-created police reports. In Utah, a pending state bill seeks to make it mandatory for departments to disclose when reports have been written by AI. In King County, Washington, the Prosecuting Attorney’s Office has directed officers not to use any AI tools to write narrative reports.
In an era where companies that sell technology to police departments profit handsomely and have marketing teams to match, it can seem like there is an endless stream of press releases and local news stories about police acquiring some new and supposedly revolutionary piece of tech. But what we don’t usually get to see is how many times departments decide that technology is costly, flawed, or lacks utility. As the future of AI-generated police reports rightly remains hotly contested, it’s important to pierce the veil of corporate propaganda and see when and if police departments actually find these costly bits of tech useless or impractical.
Hawaii Takes a Stand for Privacy: HCR 144/HR 138 Calls for Investigation of Crisis Pregnancy Centers
In a bold push for medical privacy, Hawaii's House of Representatives has introduced HCR 144/HR 138, a resolution calling for the Hawaii Attorney General to investigate whether crisis pregnancy centers (CPCs) are violating patient privacy laws.
Often referred to as “fake clinics” or “unregulated pregnancy centers” (UPCs), these are non-medical centers that provide free pregnancy tests and counseling but typically do not offer essential reproductive care like abortion or contraception. In Hawaii, these centers outnumber actual clinics offering abortion and reproductive healthcare. In fact, the first CPC in the United States was opened in Hawaii in 1967 by Robert Pearson, who then founded the Pearson Foundation, a St. Louis-based organization that assisted local groups in setting up unregulated crisis pregnancy centers.
EFF has called on state AGs to investigate CPCs across the country. In particular, we are concerned that many centers have misrepresented their privacy practices, including suggesting that patient information is protected by HIPAA when it may not be. In January, EFF contacted attorneys general in Florida, Texas, Arkansas, and Missouri asking them to identify and hold accountable CPCs that engage in deceptive practices.
Rep. Kapela’s resolution specifically references EFF’s call on state Attorneys General. It reads:
“WHEREAS, the Electronic Frontiers Foundation, an international digital rights nonprofit that promotes internet civil liberties, has called on states to investigate whether crisis pregnancy centers are complying with patient privacy regulations with regard to the retention and use of collected patient data.”
HCR 144/HR 138 underscores the need to ensure that healthcare providers handle personal data, particularly medical data, securely and transparently. Along with EFF’s letters to state AGs, the resolution refers to the increasing body of research on the topic, such as:
- A 2024 Healthcare Management Associates Study showed that CPCs received $400 million in federal funding between 2017 and 2023, with little oversight from regulators.
- A Health Affairs article from November 2024 titled "Addressing the HIPAA Blind Spot for Crisis Pregnancy Centers" noted that crisis pregnancy centers often invoke the Health Insurance Portability and Accountability Act (HIPAA) to collect personal information from clients.
Regardless of one's stance on reproductive healthcare, there is one principle that should be universally accepted: the right to privacy. As HCR 144/HR 138 moves forward, it is imperative that Hawaii's Attorney General investigate whether CPCs are complying with privacy regulations and take action, if necessary, to protect the privacy rights of individuals seeking reproductive healthcare in Hawaii.
Without comprehensive privacy laws that offer individuals a private right of action, state authorities must be the front line in safeguarding the privacy of their constituents. As we continue to advocate for stronger privacy protections nationwide, we encourage lawmakers and advocates in other states to follow Hawaii's lead and take action to protect the medical privacy rights of all of their constituents.
Ten Years of The Foilies
In the year 2015, we witnessed the launch of OpenAI, a debate over the color of a dress going viral, and a Supreme Court decision that same-sex couples have the right to get married. It was also the year that the Electronic Frontier Foundation (EFF) first published The Foilies, an annual report that hands out tongue-in-cheek "awards" to government agencies and officials that respond outrageously when a member of the public tries to access public records through the Freedom of Information Act (FOIA) or similar laws.
A lot has changed over the last decade, but one thing that hasn't is the steady flow of attempts by authorities to avoid their legal and ethical obligations to be open and accountable. Sometimes these lapses are intentional, but just as often they are due to incompetence or straight-up half-assedness.
Over the years, EFF has teamed up with MuckRock to document and ridicule these FOIA fails and transparency trip-ups. And through a partnership with AAN Publishers, we have named-and-shamed the culprits in weekly newspapers and on indie news sites across the United States in celebration of Sunshine Week, an annual event raising awareness of the role access to public records plays in a democracy.
This year, we reflect on the most absurd and frustrating winners from the last 10 years as we prepare for the next decade, which may be even more terrible for government transparency.
Assessing huge fee estimates is one way agencies discourage FOIA requesters.
Under FOIA, federal agencies are able to charge "reasonable" fees for producing copies of records. But sometimes agencies fabricate enormous price tags to pressure the requester to drop the query.
In 2015, Martin Peck asked the U.S. Department of Defense (DOD) to disclose the number of “HotPlug” devices (tools used to preserve data on seized computers) it had purchased. The DOD said it would cost $660 million and 15 million labor hours (over 1,712 years of around-the-clock work), because its document system wasn't searchable by keyword, and staff would have to comb through 30 million contracts by hand.
Runners-up:
City of Seattle (2019 Winner): City officials quoted a member of the public $33 million for metadata for every email sent in 2017, but ultimately reduced the fee to $40.
Rochester (Michigan) Community Schools District (2023 Winner): A group of parents critical of the district's remote-learning plan requested records to see if the district was spying on their social media. One parent was told they would have to cough up $18,641,345 for the records, because the district would have to sift through every email.
Willacy County (Texas) Sheriff's Office (2016 Winner): When the Houston Chronicle asked for crime data, the sheriff sent them an itemized invoice that included $98.40 worth of Wite-Out–the equivalent of 55 bottles–to redact 1,016 pages of records.
The Most Ridiculous Redaction: Federal Bureau of Investigation (2015 Winner)
Ain't no party like a REDACTED FBI party!
Brad Heath, who in 2014 was a reporter at USA Today, got a tip that a shady figure had possibly attended an FBI retirement party. So he filed a request for the guest list and pictures taken at the event. In response, the FBI sent a series of surreal photos of the attendees, hugging, toasting, and posing awkwardly, but all with polygonal redactions covering their faces like some sort of mutant, Minecraft family reunion.
Runner-up:
U.S. Southern Command (2023 Winner): Investigative journalist Jason Leopold obtained scans of paintings by detainees at Guantanamo Bay, which were heavily redacted under the claim that the art would disclose law enforcement information that could “reasonably be expected to risk circumvention of the law.”
WBRZ Reporter Chris Nakamoto was cuffed for trying to obtain records in White Castle, Louisiana. Credit: WBRZ-TV
Chris Nakamoto, at the time a reporter for WBRZ, filed a public records request to probe the White Castle mayor's salary. But when he went down to check on some of the missing records, he was handcuffed, placed in a holding cell, and charged with the crime of "remaining after being forbidden.” He was summoned to appear before the "Mayor's Court" in a judicial proceeding presided over by none other than the same mayor he was investigating. The charges were dropped two months later.
Runners-up:
Jack White (2015 Winner): One of the rare non-government Foilies winners, the White Stripes guitarist verbally abused University of Oklahoma student journalists and announced he wouldn't play at the school anymore. The reason? The student newspaper, OU Daily, obtained and published White's contract for a campus performance, which included his no-longer-secret guacamole recipe, a bowl of which was demanded in his rider.
Richlands, Virginia (2024 Winner): Resident Laura Mollo used public records laws to investigate problems with the 911 system and, in response, experienced intense harassment from the city and its contractors, including the police pulling her over and the city appointing a special prosecutor to investigate her. On separate occasions, Mollo says she found her mailbox filled with spaghetti and manure.
Bashing the FBI has come back into vogue among certain partisan circles in recent years, but we've been slamming the feds long before it was trendy.
The agency received eight Foilies over the last decade, more than any other entity, but the FBI's hostility towards FOIA goes back much further. In 2021, the Cato Institute uncovered records showing that, since at least 1989, the FBI had been spying on the National Security Archive, a non-profit watchdog that keeps an eye on the intelligence community. The FBI’s methods included both physical and electronic surveillance, and the records show the FBI specifically cited the organization's "tenacity" in using FOIA.
Cato's Patrick G. Eddington reported it took 11 months for the FBI to produce those records, but that's actually relatively fast for the agency. We highlighted a 2009 FOIA request that the FBI took 12 years to fulfill: Bruce Alpert of the Times-Picayune had asked for records regarding the corruption case of U.S. Rep. William Jefferson, but by the time he received the 84 pages in 2021, the reporter had retired. Similarly, when George Washington University professor and documentary filmmaker Nina Seavey asked the FBI for records related to surveillance of antiwar and civil rights activists, the FBI told her it would take 17 years to provide the documents. When the agency launched an online system for accepting FOIA requests, it somehow made the process even more difficult.
The FBI was at its worst when it was attempting to use non-disclosure agreements to keep local law enforcement agencies from responding to public records requests regarding the use of cell phone surveillance technologies called cell-site simulators, or "stingrays." The agency even went so far as to threaten agencies that release technical information to media organizations with up to 20 years in prison and a $1 million fine, claiming it would be a violation of the Arms Export Control Act.
But you don't have to take our word for it: Even Micky Dolenz of The Monkees had to sue the FBI to get records on how agents collected intelligence on the 1960s band.
Some agencies, like the city of Chicago, treat FOIA requests like a plague.
Over the last decade, The Foilies have called out officials at all levels of government and in every part of the country (and even in several other countries), but time and time again, one city keeps demonstrating special antagonism to the idea of freedom of information: the Windy City.
In fact, the most ridiculous justification for ignoring transparency obligations we ever encountered was proudly championed by now-former Mayor Lori Lightfoot during the COVID-19 lockdown in April 2020. She offered a bogus choice to Chicagoans: the city could either process public records requests or provide pandemic response, falsely claiming that answering these requests would pull epidemiologists off the job. According to the Chicago Tribune, she implied that responding to FOIA requests would result in people having to "bury another grandmother." She even invoked the story of Passover, claiming that the "angel of death is right here in our midst every single day" as a reason to suspend FOIA deadlines.
If we drill down on Chicago, there's one department that seems to take particular pleasure in screwing the public: the Chicago Police Department (CPD). In 2021, CPD was nominated so many times (for withholding records of search warrants, a list of names of police officers, and body-worn camera footage from a botched raid) that we just threw up our hands and named it "The Hardest Department to FOIA" of the year.
In one particularly nasty case, CPD had mistakenly raided the home of an innocent woman and handcuffed her while she was naked and did not allow her to dress. Later, the woman filed a FOIA request for the body-worn camera footage and had to sue to get it. But CPD didn't leave it there: the city's lawyers tried to block a TV station from airing the video and then sought sanctions against the woman's attorney.
If you thought these were some doozies, check out The Foilies 2025 (to be published on March 16) to read the beginning of a new decade's worth of FOIA horror stories.
Right to Repair: A Prime Example of Grassroots Advocacy
Good old-fashioned grassroots advocacy is one of the best tools we have right now for making a positive change for our civil liberties online. When we unite toward a shared goal, anything is possible, and the right to repair movement is a prime example of this.
In July of last year, EFF and many other organizations celebrated Repair Independence Day to commemorate both California and Minnesota enacting strong right to repair laws. And, very recently, it was reported that all 50 states have introduced right to repair legislation. Now, not every state has passed laws yet, but this signals an important milestone for the movement—we want to fix the stuff we own!
And this movement has had an impact beyond specific right to repair legislation. In a similar vein, just a few months ago, the U.S. Copyright Office ruled that users can legally repair commercial food preparation equipment without breaking copyright law. Device manufacturers themselves are also starting to feel the pressure and are creating repair-friendly programs.
Years of hard work have made it possible for us to celebrate the right-to-repair movement time and time again. It's a group effort—folks like iFixit, who provide repair guides and repairability scores; the Repair Association, who’ve helped lead the movement in state legislatures; and of course, people like you who contact local representatives, are the reason this movement has gained so much momentum.
Fix Copyright! Also available in kids' sizes.
But there's still work that can be done. If you’re itching to fix your devices, you can read up on what your state’s repair laws mean for you. You can educate your friends, family, and colleagues when they’re frustrated at how expensive device repair is. And, of course, you can show your support for the right to repair movement with EFF’s latest member t-shirt.
We live in a very tumultuous time, so it’s important to celebrate the victories, and it’s equally important to remember that your voice and support can bring about positive change that you want to see.
EFF Sends Letter to the Senate Judiciary Committee Opposing the STOP CSAM Act
On Monday, March 10, EFF sent a letter to the Senate Judiciary Committee opposing the Strengthening Transparency and Obligation to Protect Children Suffering from Abuse and Mistreatment Act (STOP CSAM Act) ahead of a committee hearing on the bill.
EFF opposed the original and amended versions of this bill in the previous Congress, and we are concerned to see the Committee moving to consider the same flawed ideas in the current Congress.
At its core, STOP CSAM endangers encrypted messages – jeopardizing the privacy, security, and free speech of every American and fundamentally altering our online communications. In the digital world, end-to-end encryption is our best chance to maintain both individual and national security. Particularly in the wake of the major breach of telecom systems in October 2024 from Salt Typhoon, a sophisticated Chinese-government backed hacking group, legislators should focus on bolstering encryption, not weakening it. In fact, in response to this breach, a top U.S. cybersecurity chief said “encryption is your friend.”
Given its significant problems and potential vast impact on internet users, we urge the Committee to reject this bill.