EFF: Updates
Flock Safety’s Feature Updates Cannot Make Automated License Plate Readers Safe
Two recent statements from the surveillance company—one addressing Illinois privacy violations and another defending the company's national surveillance network—reveal a troubling pattern: when confronted by evidence of widespread abuse, Flock Safety has blamed users, downplayed harms, and doubled down on the very systems that enabled the violations in the first place.
Flock's aggressive public relations campaign to salvage its reputation comes as no surprise. Last month, we described how investigative reporting from 404 Media revealed that a sheriff's office in Texas searched data from more than 83,000 automated license plate reader (ALPR) cameras to track down a woman suspected of self-managing an abortion. (A scenario that might have been avoided, it's worth noting, had Flock taken action when it was first warned about this threat three years ago.)
Flock calls the reporting on the Texas sheriff's office "purposefully misleading," claiming the woman was searched for as a missing person at her family's request rather than for her abortion. But that ignores the core issue: this officer used a nationwide surveillance dragnet (again: over 83,000 cameras) to track someone down, and used her suspected healthcare decisions as a reason to do so. Framing this as concern for her safety plays directly into anti-abortion narratives that depict abortion as dangerous and traumatic in order to justify increased policing, criminalization, control—and, ultimately, surveillance.
As if that weren't enough, the company has also come under fire for how its ALPR network data is being actively used to assist in mass deportation. Despite U.S. Immigration and Customs Enforcement (ICE) having no formal agreement with Flock Safety, public records revealed "more than 4,000 nation and statewide lookups by local and state police done either at the behest of the federal government or as an 'informal' favor to federal law enforcement, or with a potential immigration focus." The network audit data analyzed by 404 exposed an informal data-sharing environment that creates an end-run around direct oversight and accountability measures: federal agencies can access the surveillance network through local partnerships without the transparency and legal constraints that would apply to direct federal contracts.
Flock Safety is adamant this is "not Flock's decision," and by implication, not their fault. Instead, the responsibility lies with each individual local law enforcement agency. In the same breath, they’re adamant that data sharing is essential, loudly claiming credit when the technology is involved in cross-jurisdictional investigations—but failing to show the same attitude when that data-sharing ecosystem is used to terrorize abortion seekers or immigrants.
Flock Safety: The Surveillance Social Network
In growing from a 2017 startup to a $7.5 billion company "serving over 5,000 communities," Flock allowed individual agencies wide berth to set and regulate their own policies. In effect, this approach offered cheap surveillance technology with minimal restrictions, leaving major decisions and actions in the hands of law enforcement while the company scaled rapidly.
And they have no intention of slowing down. Just this week, Flock launched its Business Network, facilitating unregulated data sharing amongst its private sector security clients. "For years, our law enforcement customers have used the power of a shared network to identify threats, connect cases, and reduce crime. Now, we're extending that same network effect to the private sector," Flock Safety's CEO announced.
Flock Safety wooing law enforcement officers at the 2023 International Chiefs of Police Conference.
The company is building out a new mass surveillance network using the exact template that ended with the company having to retrain thousands of officers in Illinois on how not to break state law—the same template that made it easy for officers to break it in the first place. Flock's continued integration of disparate surveillance networks across the public and private spheres—despite the harms that have already occurred—owes in part to the one thing the company has gotten really good at over the past couple of years: facilitating a surveillance social network.
Employing marketing phrases like "collaboration" and "force multiplier," Flock encourages as much sharing as possible, going so far as to claim that network effects can significantly improve case closure rates. The company cultivates a sense of shared community and purpose among users so that they opt into good-faith sharing relationships with other law enforcement agencies across the country. But it's precisely that social layer that creates uncontrollable risk.
The possibility of human workarounds at every level undermines any technical safeguards Flock may claim. Search term blocking relies on officers accurately labeling search intent—a system easily defeated by entering vague reasons like "investigation" or incorrect justifications, whether intentionally or not. And, of course, words like "investigation" or "missing person" can mean virtually anything, offering no value for meaningful oversight of how and why the system is being used. Moving forward, sheriff's offices looking to avoid negative press can surveil abortion seekers or immigrants with ease, so long as they enter vague, innocuous-sounding reasons.
The same can be said for case number requirements, which depend on manual entry and can easily be circumvented by reusing legitimate case numbers for unauthorized searches. Audit logs only track inputs, not contextual legitimacy. And Flock's proposed AI-driven audit alerts, which at best might flag suspicious activity after searches (and harm) have already occurred, rely on local agencies to self-monitor misuse—despite their demonstrated inability to do so.
And, of course, even the most restrictive department policy may not be enough. Austin, Texas, implemented one of the most restrictive ALPR programs in the country, and the program still failed: the city's own audit revealed systematic compliance failures that rendered its guardrails meaningless. The company's continued appeal to "local policies" means nothing when Flock's data-sharing network does not account for how law enforcement policies, regulations, and accountability vary by jurisdiction. You may have a good relationship with your local police, who solicit your input on what their policy looks like; you don't have that same relationship with hundreds or thousands of other agencies with whom they share their data. So if an officer on the other side of the country violates your privacy, it’d be difficult to hold them accountable.
ALPR surveillance systems are inherently vulnerable to both technical exploitation and human manipulation. These vulnerabilities are not theoretical—they represent real pathways for bad actors to access vast databases containing millions of Americans' location data. When surveillance databases are breached, the consequences extend far beyond typical data theft—this information can be used to harass, stalk, or even extort. The intimate details of people's daily routines, their associations, and their political activities may become available to anyone with malicious intent. Flock operates as a single point of failure that can compromise—and has compromised—the privacy of millions of Americans simultaneously.
Don't Stop de-Flocking
Rather than addressing legitimate concerns about privacy, security, and constitutional rights, Flock has only promised updates that fall short of meaningful reforms. These software tweaks and feature rollouts cannot assuage the fear engendered by the massive surveillance system it has built and continues to expand.
A typical specimen of Flock Safety's automated license plate readers.
Flock's insistence that what's happening with abortion criminalization and immigration enforcement has nothing to do with them—that these are just red-state problems or the fault of rogue officers—is concerning. Flock designed the network that is being used, and the public should hold them accountable for failing to build in protections from abuse that cannot be easily circumvented.
Thankfully, that's exactly what's happening: cities like Austin, San Marcos, Denver, Norfolk, and San Diego are pushing back. And it's not nearly as hard a choice as Flock would have you believe: Austinites are weighing the benefits of a surveillance system that generates a hit less than 0.02% of the time against the possibility that scanning 75 million license plates will result in an abortion seeker being tracked down by police, or an immigrant being flagged by ICE in a so-called "sanctuary city." These are not hypotheticals. It is already happening.
Given how pervasive, sprawling, and ungovernable ALPR sharing networks have become, the only feature update we can truly rely on to protect people's rights and safety is no network at all. And we applaud the communities taking decisive action to dismantle this surveillance infrastructure.
Follow their lead: don't stop de-flocking.
Today's Supreme Court Decision on Age Verification Tramples Free Speech and Undermines Privacy
Today’s decision in Free Speech Coalition v. Paxton is a direct blow to the free speech rights of adults. The Court ruled that “no person—adult or child—has a First Amendment right to access speech that is obscene to minors without first submitting proof of age.” This ruling allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’ law.
Importantly, the Court's reasoning applies only to age-verification rules for certain sexual material, and not to age limits in general. We will continue to fight against age restrictions on online access more broadly, such as on social media and specific online features.
Still, the decision has immense consequences for internet users in Texas and in other states that have enacted similar laws. The Texas law forces adults to submit personal information over the internet to access entire websites that hold some amount of sexual material, not just pages or portions of sites that contain specific sexual materials. Many sites that cannot reasonably implement age verification measures for reasons such as cost or technical requirements will likely block users living in Texas and other states with similar laws wholesale.
Many users will not be comfortable sharing private information to access sites that do implement age verification, for reasons of privacy or concern for data breaches. Many others do not have a driver’s license or photo ID to complete the age verification process. This decision will, ultimately, deter adult users from speaking and accessing lawful content, and will endanger the privacy of those who choose to go forward with verification.
What the Court Said Today
In the 6-3 decision, the Court ruled that Texas' HB 1181 is constitutional. This law requires websites that Texas decides are composed of "one-third" or more of "sexual material harmful to minors" to confirm the age of users by collecting age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.
In 1997, the Supreme Court struck down a federal online age-verification law in Reno v. American Civil Liberties Union. In that case the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.
In Reno and in subsequent cases, the Supreme Court ruled that laws that burden adults' access to lawful speech are subject to the highest level of review under the First Amendment, known as strict scrutiny. To survive that scrutiny, a law must be narrowly tailored and use the least speech-restrictive means available to the government.
That all changed with the Supreme Court’s decision today.
The Court now says that laws burdening adults' access to sexual materials that are obscene to minors are subject to a less-searching form of First Amendment review, known as intermediate scrutiny. And under that lower standard, the Texas law does not violate the First Amendment. The Court therefore did not have to address arguments that there are less speech-restrictive ways of reaching the same goal—for example, encouraging parents to install content-filtering software on their children's devices.
The court reached this decision by incorrectly assuming that online age verification is functionally equivalent to flashing an ID at a brick-and-mortar store. As we explained in our amicus brief, this ignores the many ways in which verifying age online is significantly more burdensome and invasive than doing so in person. As we and many others have previously explained, unlike with in-person age-checks, the only viable way for a website to comply with an age verification requirement is to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information.
This leads to a host of serious anonymity, privacy, and security concerns—all of which the majority failed to address. A person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. This leaves users highly vulnerable to data breaches and other security harms. Age verification also undermines anonymous internet browsing, even though courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.
The Court sidestepped its previous online age verification decisions by claiming the internet has changed too much to follow the precedent from Reno that requires these laws to survive strict scrutiny. Writing in dissent, Justice Kagan rejected "the majority's claim—again mistaken—that the internet has changed too much to follow our precedents' lead."
But the majority argues that past precedent does not account for the dramatic expansion of the internet since the 1990s, which has led to easier and greater internet access and larger amounts of content available to teens online. The majority’s opinion entirely fails to address the obvious corollary: the internet’s expansion also has benefited adults. Age verification requirements now affect exponentially more adults than they did in the 1990s and burden vastly more constitutionally protected online speech. The majority's argument actually demonstrates that the burdens on adult speech have grown dramatically larger because of technological changes, yet the Court bizarrely interprets this expansion as justification for weaker constitutional protection.
What It Means Going Forward
This Supreme Court broke a fundamental agreement between internet users and the state that has existed since the internet's inception: the government will not stand in the way of people accessing First Amendment-protected material. There is no question that multiple states will now introduce laws similar to Texas'. Two dozen already have, though they are not all in effect. At least three of those states have no limit on the percentage of material required before the law applies—a sweeping restriction on every site that contains any material the state believes the law covers. These laws will force U.S.-based adult websites to implement age verification or block users in those states, as many have in the past when similar laws were in effect.
Research has found that, rather than submit to verification, people will choose a variety of other paths: using VPNs to make it appear they are outside of the state, or turning to similar sites that don't comply with the law, often because those sites operate in a different country. While many users will simply not access the content as a result, others may accept the risk, at their peril.
We expect some states to push the envelope in terms of what content they consider "harmful to minors," and to expand the types of websites covered by these laws, either through updated language or threats of litigation. Even if these attacks are struck down, operators of sites that involve sexual content of any type may be under threat, especially if that content is politically divisive. We worry that the point of some of these laws will be to deter queer folks and others from accessing lawful speech and finding community online by requiring them to identify themselves. We will continue to fight to protect against the disclosure of this critical information and for people to maintain their anonymity.
EFF Will Continue to Fight for All Users' Free Expression and Privacy
That said, the ruling does not give states or Congress the green light to impose age-verification regulations on the broader internet. The majority's decision rests on the fact that minors do not have a First Amendment right to access sexual material that would be obscene to them. In short, adults have a First Amendment right to access those sexual materials, while minors do not. Although it was wrong, the majority's opinion ruled that because Texas is blocking minors from speech they have no constitutional right to access, the age-verification requirement only incidentally burdens adults' First Amendment rights.
But the same rationale does not apply to general-audience sites and services, including social media. Minors and adults have coextensive rights to both speak and access the speech of other users on these sites because the vast majority of the speech is not sexual materials that would be obscene to minors. Lawmakers should be careful not to interpret this ruling to mean that broader restrictions on minors’ First Amendment rights, like those included in the Kids Online Safety Act, would be deemed constitutional.
Free Speech Coalition v. Paxton will have an effect on nearly every U.S. adult internet user for the foreseeable future. It marks a worrying shift in the ways that governments can restrict access to speech online. But that only means we must work harder than ever to protect privacy, security, and free speech as central tenets of the internet.
Georgia Court Rules for Transparency over Private Police Foundation
A Georgia court has decided that private non-profit Atlanta Police Foundation (APF) must comply with public records requests under the Georgia Open Records Act for some of its functions on behalf of the Atlanta Police Department. This is a major win for transparency in the state.
The lawsuit was brought last year by the Atlanta Community Press Collective (ACPC) and Electronic Frontier Alliance member Lucy Parsons Labs (LPL). It concerns the APF’s refusal to disclose records about its role as the leaser and manager of the site of so-called Cop City, the Atlanta Public Safety Training Center at the heart of a years-long battle that pitted local social and environmental movements against the APF. We’ve previously written about how APF and similar groups fund police surveillance technology, and how the Atlanta Police Department spied on the social media of activists opposed to Cop City.
This is a big win for transparency and for local communities who want to maintain their right to know what public agencies are doing.
Police foundations often provide resources to police departments that help them avoid public oversight, and the Atlanta Police Foundation leads the way with its maintenance of the Loudermilk Video Integration Center and its role in Cop City, which will be used by public agencies including the Atlanta Police Department and other police departments.
ACPC and LPL were represented by attorneys Joy Ramsingh, Luke Andrews, and Samantha Hamilton, who had won the release of some materials this past December. The plaintiffs had earlier been represented by the University of Georgia School of Law First Amendment Clinic.
The win comes at just the right time. Last summer, the Georgia Supreme Court ruled that private contractors working for public entities are subject to open records laws. The Georgia state legislature then passed a bill to make it harder to file public records requests against private entities. The Atlanta Police Foundation still has time to appeal this month's ruling, but failing that, it will have to begin complying with public records requests by the beginning of July.
We hope that this will help ensure transparency and accountability when government agencies farm out public functions to private entities, so that local activists and journalists will be able to uncover materials that should be available to the general public.
Two Courts Rule On Generative AI and Fair Use - One Gets It Right
Things are speeding up in generative AI legal cases, with two judicial opinions just out on an issue that will shape the future of generative AI: whether training gen-AI models on copyrighted works is fair use. One gets it spot on; the other, not so much, but fortunately in a way that future courts can and should discount.
The core question in both cases was whether using copyrighted works to train Large Language Models (LLMs) used in AI chatbots is a lawful fair use. Under the US Copyright Act, answering that question requires courts to consider:
- whether the use was transformative;
- the nature of the works (Are they more creative than factual? Long since published?);
- how much of the original was used; and
- the harm to the market for the original work.
In both cases, the judges focused on the first and fourth factors: transformative use and market harm.
The right approach
In Bartz v. Anthropic, three authors sued Anthropic for using their books to train its Claude chatbot. In his order deciding parts of the case, Judge William Alsup confirmed what EFF has said for years: fair use protects the use of copyrighted works for training because, among other things, training gen-AI is “transformative—spectacularly so” and any alleged harm to the market for the original is pure speculation.

Just as copying books or images to create search engines is fair, the court held, copying books to create a new, “transformative” LLM and related technologies is also protected:
[U]sing copyrighted works to train LLMs to generate new text was quintessentially transformative. Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.
Importantly, Bartz rejected the copyright holders’ attempts to claim that any model capable of generating new written material that might compete with existing works by emulating their “sweeping themes,” “substantive points,” or “grammar, composition, and style” was an infringement machine. As the court rightly recognized, building gen-AI models that create new works is beyond “anything that any copyright owner rightly could expect to control.”
There’s a lot more to like about the Bartz ruling, but just as we were digesting it Kadrey v. Meta Platforms came out. Sadly, this decision bungles the fair use analysis.
A fumble on fair use
Kadrey is another suit by authors against the developer of an AI model, in this case Meta’s Llama LLM. The authors in Kadrey asked the court to rule that fair use did not apply.
Much of the Kadrey ruling by Judge Vince Chhabria is dicta—meaning the opinion spends many paragraphs on what it thinks could justify ruling in favor of the author plaintiffs, if only they had presented different facts (rather than pure speculation). The court then rules in Meta’s favor because the plaintiffs offered only speculation.
But it makes a number of errors along the way to the right outcome. At the top, the ruling broadly proclaims that training AI without buying a license for each and every piece of copyrighted training material will be “illegal” in “most cases.” The court asserted that fair use usually won’t apply to AI training even though training is a “highly transformative” process, because of hypothetical “market dilution” scenarios in which competition from AI-generated works could reduce the value of the books used to train the AI model.
That theory, in turn, depends on three mistaken premises. First, that the most important factor for determining fair use is whether the use might cause market harm. That’s not correct. Since its seminal 1994 opinion in Campbell v. Acuff-Rose, the Supreme Court has been very clear that no single factor controls the fair use analysis.
Second, that an AI developer would typically seek to train a model entirely on a certain type of work, and then use that model to generate new works in the exact same genre, which would then compete with the works on which it was trained, such that the market for the original works is harmed. As the Kadrey ruling notes, there was no evidence that Llama was intended to do, or does, anything like that, nor will most LLMs, for the exact reasons discussed in Bartz.
Third, as a matter of law, copyright doesn't prevent “market dilution” unless the new works are otherwise infringing. In fact, the whole purpose of copyright is to be an engine for new expression. If that new expression competes with existing works, that’s a feature, not a bug.
Gen-AI is spurring the kind of tech panics we’ve seen before; then, as now, thoughtful fair use opinions helped ensure that copyright law served innovation and creativity. Gen-AI does raise a host of other serious concerns about fair labor practices and misinformation, but copyright wasn’t designed to address those problems. Trying to force copyright law to play those roles only hurts important and legal uses of this technology.
In keeping with that tradition, courts deciding fair use in other AI copyright cases should look to Bartz, not Kadrey.
Ahead of Budapest Pride, EFF and 46 Organizations Call on European Commission to Defend Fundamental Rights in Hungary
This week, EFF joined EDRi and nearly 50 civil society organizations urging the European Commission’s President Ursula von der Leyen, Executive Vice President Henna Virkkunen, and Commissioners Michael McGrath and Hadja Lahbib to take immediate action and defend human rights in Hungary.
With Budapest Pride just two days away, Hungary has criminalized Pride marches and is planning to deploy real-time facial recognition technology to identify those participating in the event. This is a flagrant violation of fundamental rights, particularly the rights to free expression and assembly.
On April 15, a new amendment package went into effect in Hungary which authorizes the use of real-time facial recognition to identify protesters at ‘banned protests’ like LGBTQ+ events, and includes harsh penalties like excessive fines and imprisonment. This is prohibited by the EU Artificial Intelligence (AI) Act, which does not permit the use of real-time face recognition for these purposes.
This came on the back of members of Hungary’s Parliament rushing through three amendments in March to ban and criminalize Pride marches and their organizers, and permit the use of real-time facial recognition technologies for the identification of protestors. These amendments were passed without public consultation and are in express violation of the EU AI Act and Charter of Fundamental Rights. In response, civil society organizations urged the European Commission to put interim measures in place to rectify the violation of fundamental rights and values. The Commission is yet to respond—a real cause of concern.
This is an attack on LGBTQ+ individuals, as well as an attack on the rights of all people in Hungary. The letter urges the European Commission to take the following actions:
- Open an infringement procedure against any new violations of EU law, in particular the violation of Article 5 of the AI Act
- Adopt interim measures in the ongoing infringement procedure against Hungary’s 2021 anti-LGBT law, which is used as a legal basis for the ban on LGBTQIA+ related public assemblies, including Budapest Pride.
There's no question that, when EU law is at stake, the European Commission has a responsibility to protect EU fundamental rights, including the rights of LGBTQ+ individuals in Hungary and across the Union. This includes ensuring that those organizing and marching at Pride in Budapest are safe and able to peacefully assemble and protest. If the EU Commission does not urgently act to ensure these rights, it risks hollowing out the values the EU is built on.
Read our full letter to the Commission here.
How Cops Can Get Your Private Online Data
Can the cops get your online data? In short, yes. A variety of US federal and state laws give law enforcement the power to obtain information that you provided to online services. But there are steps you, as a user and/or a service provider, can take to improve online privacy.
Law enforcement demands for access to private online data go back to the beginning of the internet. In fact, one of EFF’s first cases, Steve Jackson Games v. Secret Service, exemplified the now all-too-familiar story where unfounded claims about illegal behavior resulted in overbroad seizures of user messages. But it’s not the ’90s anymore; the internet has become an integral part of everyone’s life, and we all rely on organizations big and small to steward our data, from huge service providers like Google, Meta, or your ISP, to hobbyists hosting a blog or Mastodon server.
There is no “cloud,” just someone else's computer—and when the cops come knocking on their door, these hosts need to be willing to stand up for privacy, and know how to do so to the fullest extent under the law. These legal limits are also important for users to know, not only to mitigate risks in their security plan when choosing where to share data, but to understand whether these hosts are going to bat for them. Taking action together, service hosts and users can curb law enforcement getting more data than they’re allowed, protecting not just themselves but targeted populations, present and future.
This is distinct from law enforcement’s methods of collecting public data, such as the information now being collected on student visa applicants. Cops may use social media monitoring tools and sock puppet accounts to collect what you share publicly, or even within “private” communities. Police may also obtain the contents of communication in other ways that do not require court authorization, such as monitoring network traffic passively to catch metadata and possibly using advanced tools to partially reveal encrypted information. They can even outright buy information from online data brokers. Unfortunately there are few restrictions or oversight for these practices—something EFF is fighting to change.
Below, however, is a general breakdown of the legal processes US law enforcement uses to access private data, and what categories of private data those processes can disclose. Because this is a generalized summary, it is neither exhaustive nor legal advice. Please seek legal help if you have specific data privacy and security needs.
| Type of data | Process used | Challenge prior to disclosure? | Proof needed |
| --- | --- | --- | --- |
| Subscriber information | Subpoena | Yes | Relevant to an investigation |
| Non-content information, metadata | Court order; sometimes subpoena | Yes | Specific and articulable facts that info is relevant to an investigation |
| Stored content | Search warrant | No | Probable cause that info will provide evidence of a crime |
| Content in transit | Super warrant | No | Probable cause plus exhaustion and minimization |
Types of Data that Can be Collected
The laws protecting private data online generally follow a pattern: the more sensitive the personal data is, the greater the factual and legal burden police have to meet before they can obtain it. Although this is not exhaustive, here are a few categories of data you may be sharing with services, and why police might want to obtain it.
- Subscriber Data: Information you provide in order to use the service. Think about ID or payment information, IP address location, email, phone number, and other information you provided when signing up.
  - Law enforcement can learn who controls an anonymous account, and find other service providers to gather information from.
- Non-content data, or "metadata": This is saved information about your interactions on the service, like when you used the service, for how long, and with whom. It's analogous to what a postal worker can infer from a sealed letter with addressing information.
  - Law enforcement can use this information to infer a social graph, login history, and other information about a suspect’s behavior.
- Stored content: This is the actual content you are sending and receiving, like your direct message history or saved drafts. This can cover any private information your service provider can access.
  - Law enforcement seeks this most sensitive data to reveal criminal evidence. Overly broad requests also allow for retroactive searches, sweep in information about other users, and can take information out of its original context.
- Content in transit: This is the content of your communications as it is being communicated. This real-time access may also capture information that isn’t typically stored by a provider, like your voice during a phone call.
  - Law enforcement can compel providers to wiretap their own services for a particular user—which may also implicate the privacy of users they interact with.
When US law enforcement has identified a service that likely has this data, they have a few tools to legally compel that service to hand it over and prevent users from knowing information is being collected.
Subpoena
Subpoenas are demands from a prosecutor, law enforcement, or a grand jury that do not require the approval of a judge before being sent to a service. The only restriction is that the demand be relevant to an investigation. Often the only time a court reviews a subpoena is when a service or user challenges it in court.
Due to the lack of direct court oversight in most cases, subpoenas are prone to abuse and overreach. Providers should scrutinize such requests carefully with a lawyer and push back before disclosure, particularly when law enforcement tries to use subpoenas to obtain more private data, such as the contents of communications.
Court Order
This is a demand similar to a subpoena, but it usually pertains to a specific statute that requires a court to authorize the demand. Under the Stored Communications Act, for example, a court can issue an order for non-content information if police provide specific facts showing that the information being sought is relevant to an investigation.
Like subpoenas, providers can usually challenge court orders before disclosure and inform the user(s) of the request, subject to law enforcement obtaining a gag order (more on this below).
Search Warrant
A warrant is a demand issued by a judge to permit police to search specific places or persons. To obtain a warrant, police must submit an affidavit (a written statement made under oath) establishing that there is a fair probability (or “probable cause”) that evidence of a crime will be found at a particular place or on a particular person.
Typically services cannot challenge a warrant before disclosure, as these requests are already approved by a magistrate. Sometimes police request that judges also enter gag orders against the target of the warrant that prevent hosts from informing the public or the user that the warrant exists.
Super Warrant
Police seeking to intercept communications as they occur generally face the highest legal burden. Usually the affidavit needs to not only establish probable cause, but also make clear that other investigation methods are not viable (exhaustion) and that the collection avoids capturing irrelevant data (minimization).
Some laws also require high-level approval within law enforcement, such as from agency leadership, before the request can be made. Some laws also limit the types of crimes for which law enforcement may use wiretaps. The laws may also require law enforcement to periodically report back to the court about the wiretap, including whether they are minimizing collection of non-relevant communications.
Generally these demands cannot be challenged while wiretapping is occurring, and providers are prohibited from telling the targets about the wiretap. But some laws require disclosure to targets and those who were communicating with them after the wiretap has ended.
Gag orders
Many of the legal authorities described above also permit law enforcement to simultaneously prohibit the service from telling the target of the legal process, or the general public, that the surveillance is occurring. These non-disclosure orders are prone to abuse, and EFF has repeatedly fought them because they violate the First Amendment and prevent public understanding of the breadth of law enforcement surveillance.
How Services Can (and Should) Protect You
This process isn't always clean-cut, and service providers must ultimately comply with lawful demands for users’ data, even when they challenge them and courts uphold the government’s demands.
Service providers outside the US also aren’t totally in the clear, as they must often comply with US law enforcement demands. This is usually because they either have a legal presence in the US or because they can be compelled through mutual legal assistance treaties and other international legal mechanisms.
However, services can do a lot to defend user privacy by following a few best practices, limiting the impact of these requests and in some cases making their service a less appealing door for the cops to knock on.
Put Cops through the Process
Paramount is the service provider's willingness to stand up for their users. Carving out exceptions or volunteering information outside of the legal framework erodes everyone's right to privacy. Even in extenuating and urgent circumstances, the responsibility is not on you to decide what to share, but on the legal process.
Smaller hosts, like those of decentralized services, might be intimidated by these requests, but consulting legal counsel will ensure requests are challenged when necessary. Organizations like EFF can sometimes provide legal help directly or connect service providers with alternative counsel.
Challenge Bad Requests
It’s not uncommon for law enforcement to overreach or make burdensome requests. Before offering information, services can push back on an improper demand informally, and then continue to do so in court. If the demand is overly broad, violates a user's First or Fourth Amendment rights, or has other legal defects, a court may rule that it is invalid and prevent disclosure of the user’s information.
Even if a court doesn’t invalidate the legal demand entirely, pushing back informally or in court can limit how much personal information is disclosed and mitigate privacy impacts.
Provide Notice
Unless otherwise restricted, service providers should give notice about requests and disclosures as soon as they can. This notice is vital for users to seek legal support and prepare a defense.
Be Clear With Users
It is important for users to understand whether a host is committed to pushing back on data requests to the full extent permitted by law. Privacy policies with fuzzy thresholds like "when deemed appropriate" or “when requested” make it ambiguous whether a user’s right to privacy will be respected. Best practice for providers requires not only clarity and a willingness to push back on law enforcement demands, but also a commitment to be transparent with the public about those demands, for example through regular transparency reports breaking down the countries and states making data requests.
Social media services should also consider clear guidelines for finding and removing sock puppet accounts operated by law enforcement on the platform, as these serve as a backdoor to government surveillance.
Minimize Data Collection
You can't be compelled to disclose data you don’t have. If you collect lots of user data, law enforcement will eventually come demanding it. Operating a service typically requires some collection of user data, even if it’s just login information, but problems arise when information is collected beyond what is strictly necessary.
This excess collection can seem convenient or useful for running the service, or even potentially valuable, as with behavioral tracking used for advertising. However, the more that’s collected, the more the service becomes a target for both legal demands and illegal data breaches.
For data that enables desirable features for the user, design choices can make privacy the default and give users additional (preferably opt-in) sharing choices.
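As a rough illustration of that principle, here is a minimal sketch of what privacy-preserving, opt-in defaults might look like in a hypothetical service's settings model. The field names are invented for the example; a real service would map these to its own features:

```python
from dataclasses import dataclass

@dataclass
class UserPrivacySettings:
    """Hypothetical per-user settings: nothing is shared unless the user opts in."""
    public_profile: bool = False             # profile hidden from non-contacts by default
    share_activity_with_contacts: bool = False
    allow_usage_analytics: bool = False      # no behavioral analytics unless enabled
    allow_ad_personalization: bool = False

# A new account starts with every sharing option disabled.
defaults = UserPrivacySettings()
```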
Shorter Retention
As another minimization strategy, hosts should regularly and automatically delete information when it is no longer necessary. For example, deleting logs of user activity can limit the scope of law enforcement’s retrospective surveillance—maybe limiting a court order to the last 30 days instead of the lifetime of the account.
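Here is a minimal sketch of what such an automatic retention policy could look like, assuming a host keeps activity logs in a SQLite table. The database path, table, and column names are hypothetical, and a real deployment would run something like this on a schedule:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # keep only the most recent 30 days of activity logs

def purge_old_logs(db_path: str = "service.db") -> int:
    """Delete activity-log rows older than the retention window; returns rows removed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "DELETE FROM activity_log WHERE created_at < ?",  # hypothetical table/column
            (cutoff.isoformat(),),
        )
        conn.commit()
        return cur.rowcount

if __name__ == "__main__":
    print(f"Purged {purge_old_logs()} expired log entries")
```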
Again, design choices, like giving users the ability to send disappearing messages and deleting them from the server once they’re downloaded, can further limit the impact of future data requests. Furthermore, these design choices should have privacy-preserving defaults.
Avoid Data Sharing
Depending on the service being hosted, there may be some need to rely on another service to make everything work for users. Third-party login or ad services are common examples, with some amount of tracking built in. Information shared with these third parties should be minimized or avoided, as they may not have a strict commitment to user privacy. Most notoriously, data brokers who sell advertising data can provide another legal work-around for law enforcement by letting them simply buy data collected across many apps. This extends to decisions about what information is made public by default (and thus accessible to many third parties), and whether that is clear to users.
Now that HTTPS is actually everywhere, most traffic between a service and a user can be easily secured—for free. This limits what onlookers can collect on users of the service, since messages between the two travel in a secure “envelope.” However, this doesn’t change the fact that the service is opening this envelope before passing it along to other users, or returning it to the same user. Each opened message is more information the service has to defend.
Better still is end-to-end encryption (e2ee), which simply means providing users with secure envelopes that even the service provider cannot open. This is how a featureful messaging app like Signal can respond to requests with only three pieces of information: the account identifier (phone number), the date of creation, and the last date of access. Many services should follow suit and limit access through encryption.
Note that while e2ee has become a popular marketing term, it is simply inaccurate for describing any encryption use designed to be broken or circumvented. Implementing “encryption backdoors” to break encryption when desired, or simply collecting information before or after the envelope is sealed on a user’s device (“client-side scanning”) is antithetical to encryption. Finally, note that e2ee does not protect against law enforcement obtaining the contents of communications should they gain access to any device used in the conversation, or if message history is stored on the server unencrypted.
Protecting Yourself and Your Community
As outlined, the security of your personal data often depends on the service providers you choose to use. But as a user you still have some options. EFF’s Surveillance Self-Defense is a maintained resource with many detailed steps you can take. In short, you need to assess your risks, limit the services you use to those you can trust (as much as you can), improve your settings, and when all else fails, accessorize with tools that prevent data sharing in the first place—like EFF’s Privacy Badger browser extension.
Remember that privacy is a team sport. It’s not enough to make these changes as an individual; it’s just as important to share and educate others, and to fight for better digital privacy policy at all levels of governance. Learn, get organized, and take action.
California’s Corporate Cover-Up Act Is a Privacy Nightmare
California lawmakers are pushing one of the most dangerous privacy rollbacks we’ve seen in years. S.B. 690, what we’re calling the Corporate Cover-Up Act, is a brazen attempt to let corporations spy on us in secret, gutting long-standing protections without a shred of accountability.
The Corporate Cover-Up Act is a massive carve-out that would gut California’s Invasion of Privacy Act (CIPA) and give Big Tech and data brokers a green light to spy on us without consent for just about any reason. If passed, S.B. 690 would let companies secretly record your clicks, calls, and behavior online—then share or sell that data with whomever they’d like, all under the banner of a “commercial business purpose.”
Simply put, The Corporate Cover-Up Act (S.B. 690) is a blatant attack on digital privacy, and is written to eviscerate long-standing privacy laws and legal safeguards Californians rely on. If passed, it would:
- Gut California’s Invasion of Privacy Act (CIPA)—a law that protects us from being secretly recorded or monitored
- Legalize corporate wiretaps, allowing companies to intercept real-time clicks, calls, and communications
- Authorize pen registers and trap-and-trace tools, which track who you talk to, when, and how—without consent
- Let companies use all of this surveillance data for “commercial business purposes”—with zero notice and no legal consequences
This isn’t a small fix. It’s a sweeping rollback of hard-won privacy protections—the kind that helped expose serious abuses by companies like Facebook, Google, and Oracle.
You Can't Opt Out of Surveillance You Don't Know Is Happening
Proponents of The Corporate Cover-Up Act claim it’s just a “clarification” to align CIPA with the California Consumer Privacy Act (CCPA). That’s misleading. The truth is, CIPA and CCPA don’t conflict. CIPA stops secret surveillance. The CCPA governs how data is used after it’s collected, such as through the right to opt out of your data being shared.
You can't opt out of being spied on if you’re never told it’s happening in the first place. Once companies collect your data under S.B. 690, they can:
- Sell it to data brokers
- Share it with immigration enforcement or other government agencies
- Use it against abortion seekers, LGBTQ+ people, workers, and protesters, and
- Retain it indefinitely for profiling
…with no consent, no transparency, and no recourse.
The Communities Most at Risk
This bill isn’t just a tech policy misstep. It’s a civil rights disaster. If passed, S.B. 690 will put the most vulnerable people in California directly in harm’s way:
- Immigrants, who may be tracked and targeted by ICE
- LGBTQ+ individuals, who could be outed or monitored without their knowledge
- Abortion seekers, who could have location or communications data used against them
- Protesters and workers, who rely on private conversations to organize safely
The message this bill sends is clear: corporate profits come before your privacy.
We Must Act Now
S.B. 690 isn’t just a bad tech bill—it’s a dangerous precedent. It tells every corporation: Go ahead and spy on your consumers—we’ve got your back.
Californians deserve better.
If you live in California, now is the time to call your lawmakers and demand they vote NO on the Corporate Cover-Up Act.
Spread the word, amplify the message, and help stop this attack on privacy before it becomes law.
FBI Warning on IoT Devices: How to Tell If You Are Impacted
On June 5th, the FBI released a PSA titled “Home Internet Connected Devices Facilitate Criminal Activity.” This PSA largely references devices impacted by the latest generation of BADBOX malware (as named by HUMAN’s Satori Threat Intelligence and Research team) that EFF researchers also encountered primarily on Android TV set-top boxes. However, the malware has impacted tablets, digital projectors, aftermarket vehicle infotainment units, picture frames, and other types of IoT devices.
One goal of this malware is to create a network proxy on the devices of unsuspecting buyers, potentially making them hubs for various criminal activities and putting the owners of these devices at risk from authorities. The malware is particularly insidious because it comes pre-installed out of the box from major online retailers such as Amazon and AliExpress. If you search “Android TV Box” on Amazon right now, many of the same models that have been impacted are still being sold by sellers of opaque origin. The continued sale of these devices even led us to write an open letter to the FTC, urging them to take action on resellers.
The FBI listed some indicators of compromise (IoCs) in the PSA so consumers can tell if they were impacted. But the average person isn’t running network detection infrastructure at home and can’t be expected to know which IoCs can be used to determine whether their devices generate “unexplained or suspicious Internet traffic.” Here, we attempt to give more comprehensive background information about these IoCs. If you find any of them on devices you own, we encourage you to follow through by contacting the FBI's Internet Crime Complaint Center (IC3) at www.ic3.gov.
The FBI lists these IoCs:
- The presence of suspicious marketplaces where apps are downloaded.
- Requiring Google Play Protect settings to be disabled.
- Generic TV streaming devices advertised as unlocked or capable of accessing free content.
- IoT devices advertised from unrecognizable brands.
- Android devices that are not Play Protect certified.
- Unexplained or suspicious Internet traffic.
The following adds context to the list above, along with some additional IoCs we have seen in our own research.
Play Protect Certified
“Android devices that are not Play Protect certified” refers to any device brand or partner not listed here: https://www.android.com/certified/partners/. Google subjects devices to compatibility and security tests as criteria for inclusion in the Play Protect program, though those criteria are not made completely transparent outside of Google. The list does change, as we saw when the tablet brand we researched was de-listed, and it includes international brands and partners as well. This check also covers “IoT devices advertised from unrecognizable brands.”
Outdated Operating Systems
Another issue we saw was badly outdated Android versions. For reference, Android 16 just started rolling out, while Android 9-12 appeared to be the most common versions on these devices. This could be a result of “copied homework” from previous legitimate Android builds. These devices often come with their own update software, which can present a problem on its own by delivering second-stage payloads for device infection in addition to whatever it legitimately downloads and updates on the device.
You can check which version of Android you have by going to Settings and searching “Android version”.
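If you're comfortable with a command line, you can also query this (along with the device's brand and model, which is useful for the model list discussed below) over adb. This is a minimal sketch assuming the Android platform tools (adb) are installed on your computer and USB debugging is enabled on the device:

```python
import subprocess

# Standard Android system properties: OS version, brand, and model.
PROPS = {
    "Android version": "ro.build.version.release",
    "Brand": "ro.product.brand",
    "Model": "ro.product.model",
}

def getprop(prop: str) -> str:
    """Read one system property from the connected device via `adb shell getprop`."""
    result = subprocess.run(
        ["adb", "shell", "getprop", prop],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    for label, prop in PROPS.items():
        print(f"{label}: {getprop(prop)}")
```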
Android App Marketplaces
We’ve previously argued how the availability of different app marketplaces leads to greater consumer choice, where users can choose alternatives even more secure than the Google Play Store. While this is true, the FBI’s warning about suspicious marketplaces is also prudent. Avoiding “downloading apps from unofficial marketplaces advertising free streaming content” is sound (if somewhat vague) advice for set-top boxes, yet this recommendation comes without further guidelines on how to identify which marketplaces might be suspicious for other Android IoT platforms. Best practice is to investigate any app stores used on Android devices separately, but to be aware that if a suspicious Android device is purchased, it can contain preloaded app stores that mimic the functionality of legitimate ones but also contain unwanted or malicious code.
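One way to spot unfamiliar marketplaces or other apps you didn't install yourself is to list the device's packages over adb and review them manually. This is a minimal sketch under the same assumptions as above (adb installed, USB debugging enabled); package names it prints are only a starting point for your own research:

```python
import subprocess

def list_third_party_packages() -> list[str]:
    """Return third-party package names via `adb shell pm list packages -3`."""
    # Note: preloaded stores shipped as system apps won't appear here;
    # drop the "-3" flag to list every installed package instead.
    result = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-3"],
        capture_output=True, text=True, check=True,
    )
    # Output lines look like "package:com.example.app"; strip the prefix.
    return [line.removeprefix("package:") for line in result.stdout.splitlines() if line]

if __name__ == "__main__":
    for pkg in sorted(list_third_party_packages()):
        print(pkg)
```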
Models Listed from the Badbox Report
We also recommend looking up device names and models that were listed in the BADBOX 2.0 report. We investigated the T95 models along with other independent researchers who initially found this malware present. A lot of model names can be grouped into families with the same letters but different numbers. These operations are iterating fast, but the naming conventions are often lazy in this respect. If you're not sure which model you own, you can usually find it listed on a sticker somewhere on the device. If that fails, you may be able to find it by pulling up the original receipt or looking through your order history.
A Note from Satori Researchers:
“Below is a list of device models known to be targeted by the threat actors. Not all devices of a given model are necessarily infected, but Satori researchers are confident that infections are present on some devices of the below device models:”
List of Potentially Impacted Models
Broader Picture: The Digital Divide
Unfortunately, the only way to be sure that an Android device from an unknown brand is safe is not to buy it in the first place. Though initiatives like the U.S. Cyber Trust Mark are welcome developments intended to encourage demand-side trust in vetted products, recent shake-ups in federal regulatory bodies mean the future of this assurance mark is unknown. As a result, those who face budget constraints and have trouble affording top-tier digital products for streaming and other connected purposes may rely on cheaper imitation products that are not only rife with vulnerabilities, but can even come preloaded with malware out of the box. This puts these people disproportionately at legal risk when their devices and home internet connections are used as proxies for nefarious or illegal purposes.
Cybersecurity, and trust that the products we buy won’t be used against us, is essential: not just for those who can afford name-brand digital devices, but for everyone. While we welcome the IoCs that the FBI has listed in its PSA, more must be done to protect consumers from the myriad dangers their devices expose them to.
Why Are Hundreds of Data Brokers Not Registering with States?
Written in collaboration with Privacy Rights Clearinghouse
Hundreds of data brokers have not registered with state consumer protection agencies. These findings come as more states are passing data broker transparency laws that require brokers to provide information about their business and, in some cases, give consumers an easy way to opt out.
In recent years, California, Texas, Oregon, and Vermont have passed data broker registration laws that require brokers to identify themselves to state regulators and the public. A new analysis by Privacy Rights Clearinghouse (PRC) and the Electronic Frontier Foundation (EFF) reveals that many data brokers registered in one state aren’t registered in others.
Of the data brokers registered in at least one state, 291 were not registered in California, 524 were not registered in Texas, 475 were not registered in Oregon, and 309 were not registered in Vermont. These numbers come from registry data analyzed in early April 2025.
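For readers curious how this kind of comparison works, here is a minimal sketch of a cross-registry analysis, assuming each state's registry has been exported to a CSV with a "company_name" column. The filenames and the naive name normalization are hypothetical; a rigorous analysis requires far more careful matching of company names and corporate affiliates:

```python
import csv

# Hypothetical exports of each state's data broker registry.
STATE_FILES = {
    "California": "ca_registry.csv",
    "Texas": "tx_registry.csv",
    "Oregon": "or_registry.csv",
    "Vermont": "vt_registry.csv",
}

def load_names(path: str) -> set[str]:
    """Load and naively normalize company names from one registry export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["company_name"].strip().lower() for row in csv.DictReader(f)}

registries = {state: load_names(path) for state, path in STATE_FILES.items()}
all_brokers = set().union(*registries.values())

# For each state: brokers registered somewhere, but absent from that state's registry.
for state, names in registries.items():
    missing = all_brokers - names
    print(f"{state}: {len(missing)} brokers registered elsewhere but not here")
```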
PRC and EFF sent letters to state enforcement agencies urging them to investigate these findings. More investigation by states is needed to determine whether these registration discrepancies reflect widespread noncompliance, gaps and definitional differences in the various state laws, or some other explanation.
New data broker transparency laws are an essential first step to reining in the data broker industry. This is an ecosystem in which your personal data taken from apps and other web services can be bought and sold largely without your knowledge. The data can be highly sensitive like location information, and can be used to target you with ads, discriminate against you, and even enhance government surveillance. The widespread sharing of this data also makes it more susceptible to data breaches. And its easy availability allows personal data to be obtained by bad actors for phishing, harassment, or stalking.
Consumers need robust deletion mechanisms to remove their data stored and sold by these companies. But the potential registration gaps we identified threaten to undermine such tools. California’s Delete Act will soon provide consumers with an easy tool to delete their data held by brokers—but it can only work if brokers register. California has already brought a handful of enforcement actions against brokers who failed to register under that law, and such compliance efforts are becoming even more critical as deletion mechanisms come online.
It is important to understand the scope of our analysis.
This analysis only includes companies that registered in at least one state. It does not capture data brokers that completely disregard state laws by failing to register in any state. A total of 750 data brokers have registered in at least one state. While harder to find, shady data brokers who have failed to register anywhere should remain a primary enforcement target.
This analysis also does not claim or prove that any of the data brokers we found broke the law. While the definition of “data broker” is similar across states, there are variations that could require a company to register in one state and not another. To take one example, a data broker registered in Texas that only brokers the data of Texas residents would not be legally required to register in California. To take another, a data broker that registered with Vermont in 2020 but then changed its business model and is no longer a broker would not be required to register in 2025. More detail on variations in data broker laws is outlined in our letters to regulators.
States should investigate compliance with data broker registration requirements, enforce their laws, and plug any loopholes. Ultimately, consumers deserve protections regardless of where they reside, and Congress should also work to pass baseline federal data broker legislation that minimizes collection and includes strict use and disclosure limits, transparency obligations, and consumer rights.
Major Setback for Intermediary Liability in Brazil: Risks and Blind Spots
This is the third post of a series about internet intermediary liability in Brazil. Our first post gives an overview of Brazil's current internet intermediary liability regime, set out in a law known as "Marco Civil da Internet," the context of its approval in 2014, and the beginning of the Supreme Court's judgment of such regime in November 2024. Our second post provides a bigger picture of the Brazilian context underlying the court's analysis and its most likely final decision.
The court’s examination of Marco Civil’s Article 19 began with Justice Dias Toffoli in November of last year. We wrote here about the cases under trial, the reach of the Supreme Court’s decision, and Article 19’s background in the context of Marco Civil’s approval in 2014. We also highlighted some aspects and risks of the vote of Justice Dias Toffoli, who considered the intermediary liability regime established in Article 19 unconstitutional.
Most of the justices have agreed to find this regime at least partially unconstitutional, but differ on the specifics. Relevant elements of their votes include:
- Notice-and-takedown is likely to become the general rule for platforms' liability for third-party content (based on Article 21 of Marco Civil). Justices still have to settle whether this applies to internet applications in general or whether some distinctions matter, for example, applying only to those that curate or recommend content. Another open question is the type of content subject to liability under this rule: votes pointed to unlawful content or acts, manifestly criminal or clearly unlawful content, or opted to focus on crimes. Some justices didn’t explicitly qualify the nature of the restricted content under this rule.
- If Article 19 remains partially valid, the need for a previous judicial order to hold intermediaries liable for user posts remains in force for certain types of content (or certain types of internet applications). For some justices, Article 19 should be the liability regime for crimes against honor, such as defamation. Justice Luís Roberto Barroso also considered that this rule should apply to any unlawful acts under civil law. Justice Cristiano Zanin took a different approach: for him, Article 19 should prevail for internet applications that don’t curate, recommend, or boost content (what he called “neutral” applications), or when there’s reasonable doubt about whether the content is unlawful.
- Platforms are liable for ads and boosted content that they deliver to users. This was the position of most of the votes so far. Justices got there either by presuming platforms’ knowledge of the paid content they distribute, holding them strictly liable for paid posts, or by treating the delivery of paid content as the platform’s own act (rather than “third-party” conduct). Justice Dias Toffoli went further, including non-paid recommended content as well. Some justices extended this regime to content posted by inauthentic or fake accounts, or to cases where the lack of account identification prevents holding the content authors liable for their posts.
- A monitoring duty for specific types of harmful and/or criminal content. Most concerning, different votes establish some kind of active monitoring, and likely automated restriction, duty for a list of content types, subject to internet applications' liability. Justices have either recognized a “monitoring duty” or considered platforms liable for these types of content regardless of a previous notification. Justices Luís Roberto Barroso, Cristiano Zanin, and Flávio Dino adopt a less problematic systemic flaw approach, under which applications’ liability would not derive from each piece of content individually, but from an analysis of whether platforms employ proper means to tackle these types of content. The list of content types also varies. In most cases it is restricted to criminal offenses, such as crimes against the democratic state, racism, and crimes against children and adolescents; yet it may also include vaguer terms, like “any violence against women,” as in Justice Dias Toffoli’s vote.
- Complementary or procedural duties. Justices have also voted to establish complementary or procedural duties. These include providing a notification system that is easily accessible to users, a due process mechanism through which users can appeal content restrictions, and the release of periodic transparency reports. Justice Alexandre de Moraes also specifically mentioned algorithmic transparency measures.
- Oversight. Justices also discussed which entity or oversight model should monitor compliance while Congress doesn’t approve a specific regulation. They raised different possibilities, including the National Council of Justice, the General Attorney’s Office, the National Data Protection Authority, a self-regulatory body, or a multistakeholder entity with government, companies, and civil society participation.
Three other justices have yet to present their votes to complete the judgment. As we pointed out, the ruling will both decide the individual cases that entered the Supreme Court through appeals and the “general repercussion” issues underlying these individual cases. For addressing such general repercussion issues, the Supreme Court approves a thesis that orients lower court decisions in similar cases. The final thesis will reflect the majority of the court's agreements around the topics we outlined above.
Justice Alexandre de Moraes argued that the final thesis should equate the liability regime of social media and private messaging applications to the one applied to traditional media outlets. This disregards important differences between the two: even if social media platforms curate content, that curation involves a massive volume of third-party posts, mainly organized through algorithms. Although such curation reflects business choices, it is not equivalent to media outlets that directly create or individually purchase specific content from approved independent producers. This is even more complicated for messaging applications, where it seriously endangers privacy and end-to-end encryption.
Justice André Mendonça was the only one so far to preserve the full application of Article 19. His proposed thesis highlighted the necessity of safeguarding privacy, data protection, and the secrecy of communications in messaging applications, among other aspects. It also indicated that judicial takedown orders must provide specific reasoning and be made available to platforms, even if issued within a sealed proceeding. The platform must also have the ability to appeal the takedown order. These are all important points the final ruling should endorse.
Risks and Blind Spots
We have stressed the many problems entangled with broad notice-and-takedown mandates and expanded content monitoring obligations. Extensively relying on AI-based content moderation and tying it to intermediary liability for user content will likely exacerbate the detrimental effects of these systems’ limitations and flaws. The perils and concerns that grounded Article 19's approval remain valid and should have led the court to preserve its regime.
However, given the judgment’s current stage, there are still some minimum safeguards that justices should consider or reinforce to reduce harm.
It’s crucial to put guardrails in place against the abuse and weaponization of notification mechanisms. At a minimum, platforms shouldn’t be liable following an extrajudicial notification when there’s reasonable doubt about the content’s lawfulness. In addition, notification procedures should ensure that notices are sufficiently precise and properly substantiated, indicating the content’s specific location (e.g., its URL) and why the notifier considers it illegal. Internet applications must also provide reasoned justification and adequate appeal mechanisms for those who face content restrictions.
On the other hand, holding intermediaries liable for individual pieces of user content regardless of notification, by massively relying on AI-based content flagging, is a recipe for over-censorship. Adopting a systemic flaw approach could at least partially mitigate this problem. Moreover, justices should clearly set apart private messaging applications, as mandated content-based restrictions would erode secure, end-to-end encrypted implementations.
Finally, we should note that justices generally didn’t distinguish large internet applications from other providers when detailing liability regimes and duties in their votes. This is a major blind spot, as it could significantly impact the feasibility of decentralized alternatives to Big Tech’s business models, entrenching platform concentration. Similarly, despite criticism of platforms’ business interest in monetizing and capturing user attention, the court’s debates largely failed to address the pervasive surveillance infrastructure lying underneath Big Tech’s power and abuses.
Indeed, while justices have called out Big Tech’s enormous power over the online flow of information – over what’s heard and seen, and by whom – the consequences of this decision can actually deepen that powerful position.
It’s worth recalling a line from Aaron Swartz in the film “The Internet’s Own Boy,” comparing broadcasting and the internet: “[…] what you see now is not a question of who gets access to the airwaves, it’s a question of who gets control over the ways you find people.” As he put it, today’s challenge is less about who gets to speak and more about who gets to be heard.
There’s an undeniable source of power in operating the inner rules and structures by which the information flows within a platform with global reach and millions of users. The crucial interventions must aim at this source of power, putting a stop to behavioral surveillance ads, breaking Big Tech’s gatekeeper dominance, and redistributing the information flow.
That’s not to say that we shouldn’t care about how each platform organizes its online environment. We should, and we do. The EU Digital Services Act, for example, established rules in this sense, leaving the traditional liability regime largely intact. Rather than leveraging platforms as users’ speech watchdogs by potentially holding intermediaries liable for each piece of user content, platform accountability efforts should broadly look at platforms’ processes and business choices. Otherwise, we will end up focusing on monitoring users instead of targeting platforms’ abuses.
Major Setback for Intermediary Liability in Brazil: How Did We Get Here?
This is the second post of a series about intermediary liability in Brazil. Our first post gives an overview of Brazil's current intermediary liability regime, the context of its approval in 2014, and the beginning of the Supreme Court's analysis of such regime in November 2024. Our third post provides an outlook on justices' votes up until June 23, underscoring risks, mitigation measures, and blind spots of their potential decision.
The Brazilian Supreme Court has formed a majority to overturn the country’s current online intermediary liability regime. With eight out of eleven justices having presented their opinions, the court has reached enough votes to mostly remove the need for a previous judicial order demanding content takedown to hold digital platforms liable for user posts, which is currently the general rule.
The judgment relates to Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet,” Law n. 12.965/2014), wherein internet applications can only be held liable for third-party content if they fail to comply with a judicial decision ordering its removal. Article 19 aligns with the Manila Principles and reflects the important understanding that holding platforms liable for user content without a judicial analysis creates strong incentives for enforcement overreach and over censorship of protected speech.
Nonetheless, while Justice André Mendonça voted to preserve Article 19’s application, four other justices stated it should prevail only in specific cases, mainly for crimes against honor (such as defamation). The remaining three justices considered that Article 19 offers insufficient protection to constitutional guarantees, such as the integral protection of children and teenagers.
The judgment will resume on June 25th, when the three final justices complete the analysis by the plenary of the court. While Article 19’s partial unconstitutionality (or its interpretation “in accordance with” the Constitution) appears to be the position the majority of the court will take, the details of each vote vary, indicating important agreements still to be sewn up and critical tweaks still to be made.
As we previously noted, the outcome of this ruling can seriously undermine free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expands notice-and-takedown mandates. This trend could negatively shape developments globally, in other courts, parliaments, or with respect to executive powers. Sadly, the votes so far have aggravated these concerns.
But before we get to them, let's look at some circumstances underlying the Supreme Court's analysis.
2014 vs. 2025: The Brazilian Techlash After Marco Civil's Approval
How did Article 19 end up (mostly) overturned a decade after Marco Civil’s much-celebrated approval in Brazil back in 2014?
In addition to the broader techlash following the impacts of increasing concentration of power in the digital realm, developments in Brazil have fueled a harsher approach toward internet intermediaries. Marco Civil, and especially Article 19, became a scapegoat within regulatory approaches that largely diminished the importance of the free expression concerns that informed its approval. Rather than viewing the provision as a milestone to be complemented with new legislation, this context reinforced the view that Article 19 should be left behind.
The tougher approach to internet intermediaries gained steam after former President Jair Bolsonaro’s election in 2018 and throughout the legislative debates around draft bill 2630, also known as the “Fake News bill.”
Concerns around the spread of disinformation, online-fueled discrimination, and political violence, as well as threats to election integrity, constitute an important (though not exhaustive) piece of this scenario. This includes the use of social media by the far right amid the escalation of acts seeking to undermine the integrity of elections and, ultimately, to overthrow the legitimately elected President Luiz Inácio Lula da Silva in January 2023. Investigations later revealed that related plans included killing the new president, the vice president, and Justice Alexandre de Moraes.
Concerns over children and adolescents’ rights and safety are another part of the underlying context. Among other incidents, a wave of violent threats and actual attacks on schools in early 2023 was bolstered by online content, and social media challenges led to injuries and deaths of young people.
Finally, the political reactions to Big Tech’s alignment with far-right politicians and its feuds with Brazilian authorities complete this puzzle. They include reactions to Meta’s policy changes in January 2025 and to the Trump administration’s decision to restrict visas for foreign officials on the grounds of limiting free speech online, a decision viewed as an offensive against Brazil's Supreme Court by U.S. authorities in alliance with Bolsonaro’s supporters, including his son, who now lives in the U.S.
Changes in the tech landscape, including concerns about the attention-driven information flow, alongside geopolitical tensions, landed in the Brazilian Supreme Court’s examination of Article 19. Hurdles in the legislative debate over draft bill 2630 turned attention to the internet intermediary liability cases pending in the Supreme Court as the main vehicles for providing “some” response. Yet the scope of those cases (explained here) shaped the most likely outcome. Because they focus on platform liability for user content and on whether it entails a duty to monitor, those issues became the main vectors for analysis and potential change. Alternative approaches, such as improving transparency, ensuring due process, and fostering platform accountability through measures like risk assessments, were largely sidelined.
Read our third post in this series to learn more about the analysis of the Supreme Court so far and its risks and blind spots.
Copyright Cases Should Not Threaten Chatbot Users’ Privacy
Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.
The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding the other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy.
This isn’t a new concept. Putting users in control of their data is a fundamental piece of privacy protection. Nineteen states, the European Union, and numerous other countries already protect the right to delete under their privacy laws. These rules exist for good reasons: retained data can be sold or given away, breached by hackers, disclosed to law enforcement, or even used to manipulate a user’s choices through online behavioral advertising.
While appropriately tailored orders to preserve evidence are common in litigation, that’s not what happened here. The court disregarded the privacy rights of millions of ChatGPT users without any reasonable basis to believe it would yield evidence. The court granted the order based on unsupported assertions that users who delete their data are probably copyright infringers looking to “cover their tracks.” This is simply false, and it sets a dangerous precedent for cases against generative AI developers and other companies that have vast stores of user information. Unless courts limit orders to information that is actually relevant and useful, they will needlessly violate the privacy rights of millions of users.
OpenAI is challenging this order. EFF urges the court to lift the order and correct its mistakes.
The NO FAKES Act Has Changed – and It’s So Much Worse
A bill purporting to target the issue of misinformation and defamation caused by generative AI has mutated into something that could change the internet forever, harming speech and innovation from here on out.
The Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act aims to address understandable concerns about generative AI-created “replicas” by creating a broad new intellectual property right. That approach was the first mistake: rather than giving people targeted tools to protect against harmful misrepresentations—balanced against the need to protect legitimate speech such as parodies and satires—the original NO FAKES just federalized an image-licensing system.
Tell Congress to Say No to NO FAKES
The updated bill doubles down on that initial mistaken approach by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them, with few safeguards against abuse.
The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say-so of the person who was allegedly “replicated.”
This bill would be a disaster for internet speech and innovation.
Targeting Tools
The first version of NO FAKES focused on digital replicas. The new version goes further, targeting tools that can be used to produce images that aren’t authorized by the individual, by anyone who owns the rights in that individual’s image, or by law. Anyone who makes, markets, or hosts such tools is on the hook. There are some limits—the tools must be primarily designed for making unauthorized images, or have only limited commercial uses other than doing so—but those limits will offer cold comfort to developers, given that they can be targeted based on nothing more than a bare allegation. These provisions effectively give rights-holders the veto power over innovation they’ve long sought in the copyright wars, based on the same tech panics.
Takedown Notices and Filter Mandate
The first version of NO FAKES set up a notice-and-takedown system patterned on the DMCA, with even fewer safeguards. The new version expands it to cover more service providers and requires those providers not only to take down targeted materials (or tools) but to keep them from being uploaded in the future. In other words: adopt broad filters or lose the safe harbor.
Filters are already a huge problem when it comes to copyright, and in that context all a filter should be doing is flagging an upload for human review when it appears to be a complete copy of a work. The reality is that these systems often flag things that are similar but not the same (like two different people playing the same piece of public domain music). They also flag supposed infringement based on mere seconds of a match, and they frequently fail to account for context that would make the use lawful.
But copyright filters are not yet required by law. NO FAKES would create a legal mandate that will inevitably lead to hecklers’ vetoes and other forms of over-censorship.
The bill does contain carve outs for parody, satire, and commentary, but those will also be cold comfort for those who cannot afford to litigate the question.
Threats to Anonymous Speech
As currently written, NO FAKES also allows anyone to get a subpoena from a court clerk—not a judge, and without any form of proof—forcing a service to hand over identifying information about a user.
We've already seen abuse of a similar system in action. In copyright cases, those unhappy with criticism made against them get such subpoenas to silence critics. Often the criticism includes the complainant's own words as evidence, a textbook example of fair use. But the subpoena is issued anyway, and unless the service is incredibly on the ball, the user can be unmasked.
Not only does this chill further speech, the unmasking itself can harm users, whether reputationally or in their personal lives.
Threats to Innovation
Most of us are very unhappy with the state of Big Tech. It seems that not only are we increasingly forced to use the tech giants, but the quality of their services is actively degrading. By increasing the sheer amount of infrastructure a new service would need in order to comply with the law, NO FAKES makes it harder for any new service to challenge Big Tech. It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES.
Requiring removal of tools, apps, and services could likewise stymie innovation. For one, it would harm people using such services for otherwise lawful creativity. For another, it would discourage innovators from developing new tools. Who wants to invest in a tool or service that can be forced offline by nothing more than an allegation?
This bill is a solution in search of a problem. Just a few months ago, Congress passed Take It Down, which targeted images involving intimate or sexual content. That deeply flawed bill pressures platforms to actively monitor online speech, including speech that is presently encrypted. If Congress is really worried about privacy harms, it should at least wait to see the effects of that last piece of internet regulation before wading into a new one. Its failure to do so makes clear that this is not about protecting victims of harmful digital replicas.
NO FAKES is designed to consolidate control over the commercial exploitation of digital images, not prevent it. Along the way, it will cause collateral damage to all of us.
New Journalism Curriculum Module Teaches Digital Security for Border Journalists
SAN FRANCISCO – A new college journalism curriculum module teaches students how to protect themselves and their digital devices when working near and across the U.S.-Mexico border.
“Digital Security 101: Crossing the US-Mexico Border” was developed by Electronic Frontier Foundation (EFF) Director of Investigations Dave Maass and Dr. Martin Shelton, deputy director of digital security at Freedom of the Press Foundation (FPF), in collaboration with the University of Texas at El Paso (UTEP) Multimedia Journalism Program and Borderzine.
The module offers a step-by-step process for improving the digital security of journalists passing through U.S. Land Ports of Entry, focusing on threat modeling: thinking through what you want to protect, and what actions you can take to secure it.
This involves assessing risk according to the kind of work the journalist is doing, the journalist’s own immigration status, potential adversaries, and much more, as well as planning in advance for protecting oneself and one’s devices should the journalist face delay, detention, search, or device seizure. Such planning might include use of encrypted communications, disabling or enabling certain device settings, minimizing the data on devices, and mentally preparing oneself to interact with border authorities.
The module, in development since early 2023, is particularly timely given increasingly invasive questioning and searches at U.S. borders under the Trump Administration and the documented history of border authorities targeting journalists covering migrant caravans during the first Trump presidency.
"Today's journalism students are leaving school only to face complicated, new digital threats to press freedom that did not exist for previous generations. This is especially true for young reporters serving border communities," Shelton said. "Our curriculum is designed to equip emerging journalists with the skills to protect themselves and sources, while this new module is specifically tailored to empower students who must regularly traverse ports of entry at the U.S.-Mexico border while carrying their phones, laptops, and multimedia equipment."
The guidance was developed through field visits to six ports of entry across three border states, interviews with scores of journalists and students from both sides of the border, and a comprehensive review of CBP policies, and it also draws on EFF’s and FPF’s combined decades of experience researching constitutional rights and device security.
“While this training should be helpful to investigative journalists from anywhere in the country who are visiting the borderlands, we put journalism students based in and serving border communities at the center of our work,” Maass said. “Whether you’re reviewing the food scene in San Diego and Tijuana, covering El Paso and Ciudad Juarez’s soccer teams, reporting on family separation in the Rio Grande Valley, or uncovering cross-border corruption, you will need the tools to protect your work and sources."
The module includes a comprehensive slide deck that journalism lecturers can use and remix for their classes, as well as an interactive worksheet. With undergraduate students in mind, the module includes activities such as roleplaying a primary inspection interview and analyzing pop singer Olivia Rodrigo’s harrowing experience of mistaken identity while reentering the country. The module has already been delivered successfully in trainings with journalism students at UTEP and San Diego State University.
“UTEP’s Multimedia Journalism program is well-situated to help develop this digital security training module,” said UTEP Communication Department Chair Dr. Richard Pineda. “Our proximity to the U.S.-Mexico border has influenced our teaching models, and our student population – often daily border crossers – give us a unique perspective from which to train journalists on issues related to reporting safely on both sides of the border.”
For the “Digital Security 101: Crossing the US-Mexico Border” module: https://freedom.press/digisec/blog/border-security-module/
For more about the module: https://www.eff.org/deeplinks/2025/06/journalist-security-checklist-preparing-devices-travel-through-us-border
For EFF’s guide to digital security at the U.S. border: https://www.eff.org/press/releases/digital-privacy-us-border-new-how-guide-eff
For EFF’s student journalist Surveillance Self Defense guide: https://ssd.eff.org/playlist/journalism-student
Contact: Dave Maass, Director of Investigations, dm@eff.org
A Journalist Security Checklist: Preparing Devices for Travel Through a US Border
This post was originally published by the Freedom of the Press Foundation (FPF). This checklist complements the recent training module for journalism students in border communities that EFF and FPF developed in partnership with the University of Texas at El Paso Multimedia Journalism Program and Borderzine. We are cross-posting it under FPF's Creative Commons Attribution 4.0 International license. It has been slightly edited for style and consistency.
Before diving in: This space is changing quickly! Check FPF's website for updates and contact them with questions or suggestions. This is a joint project of Freedom of the Press Foundation (FPF) and the Electronic Frontier Foundation.
Those within the U.S. have Fourth Amendment protections against unreasonable searches and seizures — but there is an exception at the border. Customs and Border Protection (CBP) asserts broad authority to search travelers’ devices when crossing U.S. borders, whether traveling by land, sea, or air. And unfortunately, except for a dip at the start of the COVID-19 pandemic when international travel substantially decreased, CBP has generally searched more devices year over year since the George W. Bush administration. While the percentage of travelers affected by device searches remains small, in recent months we’ve heard growing concerns about apparent increased immigration scrutiny and enforcement at U.S. ports of entry, including seemingly unjustified device searches.
Regardless, it’s hard to say with certainty the likelihood that you will experience a search of your items, including your digital devices. But there’s a lot you can do to lower your risk in case you are detained in transit, or if your devices are searched. We wrote this checklist to help journalists prepare for transit through a U.S. port of entry while preserving the confidentiality of your most sensitive information, such as unpublished reporting materials or source contact information. It’s important to think about your strategy in advance, and begin planning which options in this checklist make sense for you.
First things first: What might CBP do?
U.S. CBP’s policy is that officers may conduct a “basic” search (manually looking through information on a device) for any reason or no reason at all. If they believe they have reasonable suspicion “of activity in violation of the laws enforced or administered by CBP,” or if there is a “national security concern,” they may conduct what they call an “advanced” search, which may include connecting external equipment to your device, such as a forensic analysis tool designed to make a copy of your data.
Your citizenship status matters as to whether you can refuse to comply with a request to unlock your device or provide the passcode. If you are a U.S. citizen entering the U.S., you have the most legal leverage to refuse to comply because U.S. citizens cannot be denied entry — they must be let back into the country. But note that if you are a U.S. citizen, you may be subject to escalated harassment and further delay at the port of entry, and your device may be seized for days, weeks, or months.
If CBP officers seek to search your locked device using forensic tools, there is a chance that some (if not all of the) information on the device will be compromised. But this probability depends on what tools are available to government agents at the port of entry, if they are motivated to seize your device and send it elsewhere for analysis, and what type of device, operating system, and security features your device has. Thus, it is also possible that strong encryption may substantially slow down or even thwart a government device search.
Lawful permanent residents (green-card holders) must generally also be let back into the country. However, the current administration seems more willing to question LPR status, so refusing to comply with a request to unlock a device or provide a passcode may be risky for LPRs. Finally, CBP has broad discretion to deny entry to foreign nationals arriving on a visa or via the visa waiver program.
At present, traveling domestically within the United States, particularly if you are a U.S. citizen, is lower risk than traveling internationally. Your luggage and the physical aspects of digital devices may be searched — e.g., manually inspected or x-rayed to ensure a device is not a bomb. CBP is often present at airports, but for domestic travel within the U.S. you should only be interacting with the Transportation Security Administration (TSA). TSA does not assert authority to search the data on your device — that is CBP’s role.
At an international airport or other port of entry, you have to decide whether you will comply with a request to access your device, but this might not feel like much of a choice if you are a non-U.S. citizen entering the country! Plan accordingly.
Your border digital security checklist
Preparing for travel
☐ Make a backup of each of your devices before traveling.
☐ Use long, unpredictable, alphanumeric passcodes for your devices and commit those passwords to memory.
☐ If bringing a laptop, ensure it is encrypted using BitLocker on Windows or FileVault on macOS (Chromebooks are encrypted by default); a password-protected screen lock alone is usually insufficient. When going through security, devices should be turned all the way off. (A quick way to check encryption status is sketched after this checklist.)
☐ Fully update your device and apps.
☐ Optional: Use a password manager to help create and store randomized passcodes. 1Password users can create temporary travel vaults.
☐ Bring as few sensitive devices as possible — only what you need.
☐ Regardless of which country you are visiting, think carefully about what you are willing to post publicly on social media about that country, to avoid scrutiny.
☐ For land ports of entry in the U.S., check CBP’s border wait times and plan accordingly.
☐ If possible, print out travel documents in advance so you don’t need to unlock your phone during boarding, including boarding passes for your departure and return, rental car information, and anything about your itinerary you would like on hand if questioned (e.g., hotel bookings, visa paperwork, employment information if applicable, conference information). Use a printer you trust at home or at the office, just in case.
☐ Avoid bringing sensitive physical documents you wouldn’t want searched. If you need them, consider digitizing them (e.g., by taking a photo) and storing them remotely on a cloud service or backup device.
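If you want to verify the encryption item above before you leave, both major desktop operating systems ship status commands you can run yourself. This is a minimal sketch that wraps them, assuming macOS’s fdesetup or Windows’ manage-bde is available and, on Windows, that you run it from an administrator shell:

```python
import platform
import subprocess

def encryption_status() -> str:
    """Ask the OS whether full-disk encryption is enabled (macOS or Windows)."""
    system = platform.system()
    if system == "Darwin":
        # FileVault status on macOS; prints e.g. "FileVault is On."
        cmd = ["fdesetup", "status"]
    elif system == "Windows":
        # BitLocker status for the system drive; typically requires an admin shell.
        cmd = ["manage-bde", "-status", "C:"]
    else:
        return "Check your distribution's documentation (e.g., LUKS status on Linux)."
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(encryption_status())
```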
Decide in advance whether you will unlock your device or provide the passcode for a search. Your overall likelihood of experiencing a device search is low (e.g., less than .01% of international travelers are selected), but depending on what information you carry, the impact of a search may be quite high. If you plan to unlock your device for a search or provide the passcode, ensure your devices are prepared:
☐ In advance, upload any information you would like stored remotely to a cloud provider (e.g., iCloud) instead of keeping it locally on your device.
☐ Remove any apps, files, chat histories, browsing histories, and sensitive contacts you would not want exposed during a search.
☐ If you delete photos or files, delete them a second time in the “Recently Deleted” or “Trash” sections of your Files and Photos apps.
☐ Remove messages from the device that you believe would draw unwanted scrutiny. Remove yourself — even if temporarily — from chat groups on platforms like Signal.
☐ If you use Signal and plan to keep it on your device, use disappearing messages to minimize how much information you keep within the app.
☐ Optional: Bring a travel device instead of your usual device. Ensure it is populated with the apps you need while traveling, as well as login credentials (e.g., stored in a password manager), and necessary files. If you do this, ensure your trusted contacts know how to reach you on this device.
☐ Optional: Rather than manually removing all sensitive files from your computer, if you are primarily accessing web services during your travels, a Chromebook may be an affordable alternative to your regular computer.
☐ Optional: After backing up your everyday device, factory reset it and add back only the information you need for the trip.
☐ Optional: If you intend to work during your travel, plan in advance with a colleague who can remotely assist you in accessing and/or rotating necessary credentials.
☐ If you don’t plan to work, consider discussing with your IT department whether temporarily suspending your work accounts could mitigate risks at border crossings.
☐ Log out of accounts you do not want accessible to border officials. Note that border officers do not have authority to access live cloud content — they must put devices in airplane mode or otherwise disconnect them from the internet.
☐ Power down your phone and laptop entirely before going through security. This will enable disk encryption, and make it harder for someone to analyze your device.
☐ Immediately before travel, if you have a practicing attorney with expertise in immigration and border issues, particularly as they relate to members of the media, make sure you have their contact information written down.
☐ Immediately before travel, ensure that a friend, relative, or colleague is aware of your whereabouts when passing through a port of entry, and provide them with an update as soon as possible afterward.
☐ Be polite and try not to emotionally escalate the situation.
☐ Do not lie to border officials, but don’t offer any information they do not explicitly request.
☐ Politely request officers’ names and badge numbers.
☐ If you choose to unlock your device, rather than telling border officials your passcode, ask to type it in yourself.
☐ Ask to be present for a search of your device. But note officers are likely to take your device out of your line of sight.
☐ You may decline the request to search your device, but this may result in your device being seized and held for days, weeks, or months. If you are not a U.S. citizen, refusal to comply with a search request may lead to denial of entry, or scrutiny of lawful permanent resident status.
☐ If your device is seized, ask for a custody receipt (Form 6051D). This should also list the name and contact information for a supervising officer.
☐ If an officer has plugged your unlocked phone or computer into another electronic device, they may have obtained a forensic copy of your device. You will want to remember anything you can about this event if it happens.
☐ Immediately afterward, write down as many details as you can about the encounter: e.g., names, badge numbers, descriptions of equipment that may have been used to analyze the device, changes to the device or corrupted data, etc.
Reporting is not a crime. Be confident knowing you haven’t done anything wrong.
More resources
- https://hselaw.com/news-and-information/legalcurrents/preparing-for-electronic-device-searches-at-united-states-borders/
- https://www.eff.org/wp/digital-privacy-us-border-2017#main-content
- https://www.aclu.org/news/privacy-technology/can-border-agents-search-your-electronic
- https://www.theverge.com/policy/634264/customs-border-protection-search-phone-airport-rights
- https://www.wired.com/2017/02/guide-getting-past-customs-digital-privacy-intact/
- https://www.washingtonpost.com/technology/2025/03/27/cbp-cell-phones-devices-traveling-us/
EFF to European Commission: Don’t Resurrect Illegal Data Retention Mandates
The mandatory retention of metadata is an evergreen of European digital policy. Despite a number of rulings by Europe’s highest court confirming, again and again, that general and indiscriminate data retention mandates are incompatible with European fundamental rights, the European Commission is taking major steps toward the re-introduction of EU-wide data retention mandates. Recently, the Commission launched a Call for Evidence on data retention for criminal investigations—the first formal step towards a legislative proposal.
The European Commission and EU Member States have been attempting to revive data retention for years. For this purpose, a secretive “High Level Group on Access to Data for Effective Law Enforcement” has been formed, usually referred to as the High Level Group (HLG) on “going dark.” “Going dark” refers to the false narrative that law enforcement authorities are left “in the dark” by a lack of accessible data, despite the ever-increasing collection of and access to data through companies, data brokers, and governments. It could just as well describe the HLG’s own opaque way of working: behind closed doors and without input from civil society.
The Group’s recommendations to the European Commission, published in 2024, read like a government surveillance wishlist. They include calls for backdoors in various technologies (reframed as “lawful access by design”), obligations on service providers to collect and retain more user data than they need to provide their services, and requirements to intercept and hand decrypted data to law enforcement in real time, all while supposedly not compromising the security of their systems. And of course, the HLG calls for a harmonized data retention regime, covering not only the retention of but also access to data, and extending data retention to any service provider that could provide access to data.
EFF joined other civil society organizations in addressing the dangerous proposals of the HLG, calling on the European Commission to safeguard fundamental rights and to ensure the security and confidentiality of communications.
In our response to the Commission's Call for Evidence, we reiterated the same principles.
- Any future legislative measures must prioritize the protection of fundamental rights and must be aligned with the extensive jurisprudence of the Court of Justice of the European Union.
- General and indiscriminate data retention mandates undermine anonymity and privacy, which are essential for democratic societies, and pose significant cybersecurity risks by creating centralized troves of sensitive metadata that are attractive targets for malicious actors.
- We highlighted the lack of empirical evidence to justify blanket data retention and warned against extending retention duties to number-independent interpersonal communication services, as doing so would violate CJEU doctrine, conflict with European data protection law, and compromise security.
The European Commission must once and for all abandon the ghost of data retention that has haunted EU policy discussions for decades, and shift its focus to rights-respecting alternatives.
Protect Yourself From Meta’s Latest Attack on Privacy
Researchers recently caught Meta using an egregious new tracking technique to spy on you. Exploiting a technical loophole, the company was able to have its apps snoop on users’ web browsing. This tracking technique stands out for its flagrant disregard of core security protections built into phones and browsers. The episode is yet another reason to distrust Meta, block web tracking, and end surveillance advertising.
Fortunately, there are steps that you, your browser, and your government can take to fight online tracking.
What Makes Meta’s New Tracking Technique So Problematic?
More than 10 years ago, Meta introduced a snippet of code called the “Meta pixel,” which has since been embedded on about 20% of the most trafficked websites. This pixel exists to spy on you, recording how visitors use a website and respond to ads, and siphoning potentially sensitive info like financial information from tax filing websites and medical information from hospital websites, all in service of the company’s creepy system of surveillance-based advertising.
While these pixels are well-known, and can be blocked by tools like EFF’s Privacy Badger, researchers discovered another way these pixels were being used to track you.
Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified
Meta’s tracking pixel was secretly communicating with Meta’s apps on Android devices. This violates a fundamental security feature of mobile operating systems (“sandboxing”) that prevents apps from communicating with each other. Meta got around this restriction by exploiting localhost, the device’s loopback network interface normally used for things like developer testing. This allowed Meta to create a hidden channel between mobile browser apps and its own apps. You can read more about the technical details here.
This workaround helped Meta bypass user privacy protections and attempts at anonymity. Typically, Meta tries to link data from “anonymous” website visitors to individual Meta accounts using signals like IP addresses and cookies. But Meta made re-identification trivial with this new tracking technique by sending information directly from its pixel to Meta's apps, where users are already logged in. Even users who blocked or cleared cookies, hid their IP address with a VPN, or browsed in incognito mode could be identified with this tracking technique.
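To see why a localhost channel undermines sandboxing, note that any app can listen on the loopback interface, and any other local process, including a script running on a web page, can send it requests. The sketch below is purely illustrative of that general mechanism and is not Meta’s actual implementation:

```python
# Illustrative only: a minimal localhost listener showing how two programs on
# the same device can exchange data over the loopback interface. This is not
# Meta's implementation; it just demonstrates the general channel at issue.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoopbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Any local process (including a script in a browser tab) that sends a
        # request to 127.0.0.1:8099 reaches this handler, even though sandboxing
        # keeps the two programs' files and memory separate.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        print("Received from another local process:", payload.decode(errors="replace"))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8099), LoopbackHandler).serve_forever()
```

Because browsers have historically treated requests to localhost as ordinary network requests, closing this channel ultimately falls to the browser and the operating system, which is why the fixes described below matter.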
Meta didn’t just hide this tracking technique from users. Developers who embedded Meta’s tracking pixels on their websites were also kept in the dark. Some developers noticed the pixel contacting localhost from their websites, but got no explanation when they raised concerns to Meta. Once publicly exposed, Meta immediately paused this tracking technique. They claimed they were in discussions with Google about “a potential miscommunication regarding the application of their policies.”
While the researchers only observed the practice on Android devices, similar exploits may be possible on iPhones as well.
This exploit underscores the unique privacy risks we face when Big Tech can leverage out-of-control online tracking to profit from our personal data.
How Can You Protect Yourself?
Meta seems to have stopped using this technique for now, but that doesn’t mean they’re done inventing new ways to track you. Here are a few steps you can take to protect yourself:
Use a Privacy-Focused Browser
Choose a browser with better default privacy protections than Chrome. For example, Brave and DuckDuckGo protected users from this tracking technique because they block Meta’s tracking pixel by default. Firefox only partially blocked the new tracking technique with its default settings, but fully blocked it for users with “Enhanced Tracking Protection” set to “Strict.”
It’s also a good idea to avoid using in-app browsers. When you open links inside the Facebook or Instagram apps, Meta can track you more easily than if you opened the same links in an external browser.
Delete Unnecessary Apps
Reduce the number of ways your information can leak by deleting apps you don’t trust or don’t regularly use. Try opting for websites over apps when possible. In this case, and many similar cases, using the Facebook and Instagram website instead of the apps would have limited data collection. Even though both can contain tracking code, apps can access information that websites generally can’t, like a persistent “advertising ID” that companies use to track you (follow EFF’s instructions to turn it off if you haven’t already).
Install Privacy Badger
EFF’s free browser extension blocks trackers to stop companies from spying on you online. Although Privacy Badger would’ve stopped Meta’s latest tracking technique by blocking their pixel, Firefox for Android is the only mobile browser it currently supports. You can install Privacy Badger on Chrome, Firefox, and Edge on your desktop computer.
Limit Meta’s Use of Your Data
Meta’s business model creates an incentive to collect as much information as possible about people to sell targeted ads. Short of deleting your accounts, you have a number of options to limit tracking and how the company uses your data.
How Should Google Chrome Respond?
After learning about Meta’s latest tracking technique, Chrome and Firefox released fixes for the technical loopholes that Meta exploited. That’s an important step, but Meta’s deliberate attempt to bypass browsers’ privacy protections shows why browsers should do more to protect users from online trackers.
Unfortunately, the most popular browser, Google Chrome, is also the worst for your privacy. Privacy Badger can help by blocking trackers on desktop Chrome, but Chrome for Android doesn’t support browser extensions. That seems to be Google’s choice, rather than a technical limitation. Given the lack of privacy protections they offer, Chrome should support extensions on Android to let users protect themselves.
Although Chrome addressed the latest Meta exploit after it was exposed, their refusal to block third-party cookies or known trackers leaves the door wide open for Meta’s other creepy tracking techniques. Even when browsers block third-party cookies, allowing trackers to load at all gives them other ways to harvest and de-anonymize users’ data. Chrome should protect its users by blocking known trackers (including Google’s). Tracker-blocking features in Safari and Firefox show that similar protections are possible and long overdue in Chrome. It has yet to be approved to ship in Chrome, but a Google proposal to block fingerprinting scripts in Incognito Mode is a promising start.
Yet Another Reason to Ban Online Behavioral Advertising
Meta’s business model relies on collecting as much information as possible about people in order to sell highly targeted ads. Even though this particular method has been paused, as long as that incentive remains, Meta will keep finding ways to bypass your privacy protections.
The best way to stop this cycle of invasive tracking techniques and patchwork fixes is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. We need strong federal privacy laws to ensure that you, not Meta, control what information you share online.
A Token of Appreciation for Sustaining Donors 💞
You'll get a custom EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.
Start a convenient recurring donation today!
But here's a little more background for all of you detail-oriented digital rights fans. EFF's 35th anniversary celebration has begun, and we're commemorating three and a half decades of fighting for your privacy, security, and free expression rights online. These values are hallmarks of freedom and necessities for true democracy, and you can help protect them. It's only possible with the kindness and steadfast support of EFF members, over 30% of whom are Sustaining Donors: people who spread out their support with a monthly or annual automatic recurring donation.
We're saying thanks to new and upgrading Sustaining Donors by offering brand-new EFF35 Challenge Coins as a literal token of our appreciation. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and we owe our strength to tech creators and users like you. EFF challenge coins are individually numbered for each supporter and only available while supplies last.
Become a Sustaining Donor
Just start an automated recurring donation of at least $5 per month (Copper Level) or $25 per year (Silicon Level) by July 10, 2025. We'll automatically send a special-edition EFF challenge coin to the shipping address you provide during your transaction.
Already a Monthly or Annual Sustaining Donor?
First of all—THANKS! Second, you can get an EFF35 Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift by any amount and let us know by emailing upgrade@eff.org.
Get started with your upgrade at eff.org/recurring. If you used PayPal, just cancel your current recurring donation and then go to eff.org to start a new upgraded recurring donation.
Digital Rights Every Day
EFF's mission is sustained by thousands of people from every imaginable background giving modest donations when they can. Every cent counts. We like to show our gratitude and give you something to start conversations about civil liberties and human rights, whether you're a one-time donor or a recurring Sustaining Donor.
Check out the freshly-baked member gifts made for EFF's anniversary year, including the new EFF35 Cityscape T-Shirt, Motherboard Hooded Sweatshirt, and new stickers. With your help, EFF is here to stay.
Strategies for Resisting Tech-Enabled Violence Facing Transgender People
The Supreme Court's ruling today in U.S. v. Skrmetti, upholding bans on gender-affirming care for youth, makes it clear: trans people are under attack. Threats to trans rights and healthcare are coming from legislatures, anti-trans bigots (both organized and not), apathetic bystanders, and more. Living under the most sophisticated surveillance apparatus in human history only makes things worse. While the dangers are very much tangible and immediate, the risks posed by technology can amplify them in insidious ways. Here is a non-exhaustive overview of concerns, a broad threat model, and some recommended strategies you can use to keep yourself and your loved ones safe.
Dangers for Trans Youth
Trans kids experience an inhumane amount of cruelty and assault. Much of today's anti-trans legislation is aimed specifically at making life harder for transgender youth, across all aspects of their lives. For this reason, we have highlighted several of the unique threats facing them.
School Monitoring Software
Most school-issued devices are root-kitted with spyware known as student-monitoring software. The purveyors of these technologies have been widely criticized for posing significant risks to marginalized children, particularly LGBTQ+ students. We ran our own investigation into the dangers posed by these technologies with a project called Red Flag Machine. Our findings showed that a significant share of the activity flagged as “inappropriate” came from students researching LGBTQ+ topics such as queer history, sexual education, psychology, and medicine. When a device with this software flags such activity, it often puts students in direct contact with school administrators or even law enforcement. As I wrote three years ago, this creates a persistent and uniquely dangerous situation for students living in areas with regressive laws around LGBTQ+ life or in unsafe home environments.
The risks posed by technology can amplify threats in insidious ways
Unfortunately, because of the invasive nature of these school-issued devices, we can’t recommend a safe way to research LGBTQ+ topics on them without risking school administrators finding out. If possible, consider compartmentalizing those searches to different devices, ones owned by you or a trusted friend, or devices found in an environment you trust, such as a public library.
Family-Owned Devices
If you don't own your phone, laptop, or other devices (for example, if your parents or guardians can unlock them or control which app stores you can access), it's safest to treat them as you would a school-issued device. This means you should not trust those devices for the most sensitive activities or searches you want to keep especially private. While steps like deleting browser history and using hidden folders or photo albums can offer some safety, they aren't sure-fire protections against the adults in your life accessing your sensitive information. When possible, try using a public library computer (outside of school) or borrowing a trusted friend's device with fewer restrictions.
Dangers for Protestors
Pride demonstrations are once again returning to their roots as political protests. It's important to treat them as such by locking down your devices and making safety plans in advance. We recommend reading our entire Surveillance Self-Defense guide on attending a protest, taking special care to implement strategies like disabling biometric unlock on your phone and documenting the protest without putting others at risk. If you're attending the demonstration with others (which is strongly encouraged), consider setting up a Signal group chat and using the strategies laid out in this blog post by Micah Lee.
Counter-protestors
There is a significant push from anti-trans bigots to make Pride month more dangerous for our community. An independent source has been tracking and mapping organized anti-trans groups that are specifically targeting Pride events. While the list is non-exhaustive, it does provide some insight into who these groups are and where they are active. If one of these groups is organizing in your area, it's important to take extra precautions to keep yourself safe.
Data Brokers & Open-Source Intelligence
Data brokers pose a significant threat to everyone, and frankly, the entire industry deserves to be deleted out of existence. The dangers are even more pressing for people doing the vital work of advocating for the human rights of transgender people. If you're a doctor, an activist, or a supportive family member of a transgender person, you are at risk of having your personal information weaponized against you. Anti-trans bigots and their supporters online routinely mine open-source intelligence and data broker records to cause harm.
You can reduce some of these risks by opting out from data brokers. It’s not a cure-all (the entire dissolution of the data broker industry is the only solution), but it’s a meaningful step. The DIY method has been found most effective, though there are services to automate the process if you would rather save yourself the time and energy. For the DIY approach, we recommend using Yael Grauer’s Big Ass Data-Broker Opt Out List.
Legality is likely to continue to shift
It’s also important to look into other publicly accessible information that may be out there, including voter registration records, medical licensing information, property sales records, and more. Some of these can be obfuscated through mechanisms like “address confidentiality programs.” These protections vary state-by-state, so we recommend checking your local laws and protections.
Medical Data
In recent years, legislatures across the country have moved to restrict access to and ban transgender healthcare. Legality is likely to continue to shift, especially after the Supreme Court's green light today in Skrmetti. Many of the concerns around criminalization of transgender healthcare overlap with those surrounding abortion access; the issues are deeply connected, not mutually exclusive. The Surveillance Self-Defense playlist for the abortion access movement is a great place to start when thinking through these risks, particularly the guides on mobile phone location tracking, making a security plan, and communicating with others. Some of this overlaps with the protest safety guides linked above, but that redundancy only underscores their importance.
Unfortunately, much of the data about your medical history and care is out of your hands. While some medical practitioners may have some flexibility over how your records reflect your trans identity, certain aspects like diagnostic codes and pharmaceutical data for hormone therapy or surgery are often more rigid and difficult to obscure. As a patient, it’s important to consult with your medical provider about this information. Consider opening up a dialogue with them about what information needs to be documented, versus what could be obfuscated, and how you can plan ahead in the event that this type of care is further outlawed or deemed criminal.
Account Safety
Locking Down Social Media Accounts
It's a good idea for everyone to review the privacy and security settings on their social media accounts. But given the extreme amount of anti-trans hate online (sometimes emboldened by the platforms themselves), this is a necessary step for trans people. To start, check out the Surveillance Self-Defense guide on social media account safety.
We can’t let the threats posed by technology diminish our humanity and our liberation.
In addition to reviewing your account settings, think carefully about what information you choose to share online. While visibility of queerness and humanity is a powerful tool for destigmatizing our existence, only you can decide whether the risks of sharing your face, your name, and your life outweigh the benefit of showing others that, no matter what happens, trans people exist. There's no single right answer, only what's right for you.
Keep in mind also that LGBTQ+ expression is at significantly greater risk of censorship on these platforms. There is little individuals can do to fully evade or prevent this, which underscores the importance of advocacy and platform accountability.
Dating Apps
Dating apps pose their own set of risks for transgender people. Transgender people experience intimate partner violence at staggeringly higher rates than cisgender people, so we must take special care to protect ourselves. This guide on LGBTQ dating app safety is worth reading in full, but here's the TL;DR: designate a friend as your safety contact before and after meeting anyone new, meet in public first, and be mindful of how you share photos with others on dating apps.
Safety and Liberation Are Collective Efforts
While bodily autonomy is under attack on multiple fronts, it's crucial that we band together and share strategies of resistance. Digital privacy and security are an essential part of holistic safety. Don't let technology become a tool that enables violence or restricts the self-determination we all deserve.
Trans people have always existed. Trans people will continue to exist despite the state’s efforts to eradicate us. Digital privacy and security are just one aspect of our collective safety. We can’t let the threats posed by technology diminish our humanity and our liberation. Stay informed. Fight back. We keep each other safe.
Apple to Australians: You’re Too Stupid to Choose Your Own Apps
Apple has released a scaremongering, self-serving warning aimed at the Australian government, claiming that Australians will be overrun by a parade of digital horribles if Australia follows the European Union’s lead and regulates Apple’s “walled garden.”
The EU’s Digital Markets Act is a big, complex, ambitious law that takes aim squarely at the source of Big Tech’s power: lock-in. For users, the DMA offers interoperability rules that let Europeans escape US tech giants’ walled gardens without giving up their relationships and digital memories.
For small businesses, the DMA offers something just as valuable: the right to process their own payments. That may sound boring, but here's the thing: Apple takes a 30 percent commission on most payments made through iPhone and iPad apps, and it bans app makers from including alternative payment methods or even mentioning that Apple customers can pay on the web.
All this means that every euro a European Patreon user sends to a performer or artist takes a round-trip through Cupertino, California, and comes back 30 cents lighter. Same goes for other money sent to major newspapers, big games, or large service providers. Meanwhile, the actual cost of processing a payment in the EU is less than one percent, meaning that Apple is taking in a 3,000 percent margin on its EU payments.
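To make the arithmetic behind that figure concrete, here is a rough back-of-the-envelope sketch, assuming the processing cost sits right at the one-percent ceiling cited above (about one cent per euro):

\[
\frac{30\ \text{cents collected per euro}}{\approx 1\ \text{cent actual processing cost}} \;\approx\; 30\times \;\approx\; 3{,}000\%\ \text{of cost}
\]

If the true cost is even lower than one percent, the effective margin is higher still.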
To make things worse, Apple uses “digital rights management” to lock iPhones and iPads to its official App Store. That means that Europeans can’t escape Apple’s 30 percent “app tax” by installing apps from a store with fairer payment policies.
Here, too, the DMA offers relief, with a rule that requires Apple to permit “sideloading” of apps (that is, installing apps without using an app store). The same rule requires Apple to allow its customers to choose to use independent app stores.
With the DMA, the EU is leading the world in smart, administrable tech policies that strike at the power of tech companies. This is a welcome break from the dominant approach to tech policy over the first two decades of this century, in which regulators focused on demanding that tech companies use their power wisely – by surveilling and controlling their users to prevent bad behavior – rather than taking that power away.
Which is why Australia is so interested. A late 2024 report from the Australian Treasury took a serious look at transposing DMA-style rules to Australia. It’s a sound policy, as the European experience has shown.
But you wouldn’t know it by listening to Apple. According to Apple, Australians aren’t competent to have the final say over which apps they use and how they pay for them, and only Apple can make those determinations safely. It’s true that Apple sometimes takes bold, admirable steps to protect its customers’ privacy – but it’s also true that sometimes Apple invades its customers’ privacy (and lies about it). It’s true that sometimes Apple defends its customers from government spying – but it’s also true that sometimes Apple serves its customers up on a platter to government spies, delivering population-scale surveillance for autocratic regimes (and Apple has even been known to change its apps to help autocrats cling to power).
Apple sometimes has its customers’ backs, but often, it sides with its shareholders (or repressive governments) over those customers. There’s no such thing as a benevolent dictator: letting Apple veto your decisions about how you use your devices will not make you safer.
Apple’s claims about the chaos and dangers that Europeans face thanks to the DMA are even more (grimly) funny when you consider that Apple has flouted EU law with breathtaking acts of malicious compliance. Apparently, the European iPhone carnage has been triggered by the words on the European law books, without Apple even having to follow those laws!
The world is in the midst of a global anti-monopoly wave that keeps on growing. This decade has seen big, muscular antitrust action in the US, the UK, the EU, Canada, South Korea, Japan, Germany, Spain, France, and even China.
It’s been a century since the last wave of trustbusting swept the globe, and while today's monopolists are orders of magnitude larger than their early 20th-century forebears, they also have a unique vulnerability.
Broadly speaking, today's tech giants cheat in the same way everywhere. They do the same spying, the same price-gouging, and employ the same lock-in tactics in every country where they operate, which is practically every country. That means that when a large bloc like the EU makes a good tech regulation, it has the power to ripple out across the planet, benefiting all of us – like when the EU forced Apple to switch to standard USB-C charging cables and we all got iPhones with USB-C ports.
It makes perfect sense for Australia to import the DMA – after all, Apple and other American tech companies run the same scams on Australians as they do on Europeans.
Around the world, antitrust enforcers have figured out that they can copy one another's homework, to the benefit of the people they defend. For example, in 2022, the UK's Digital Markets Unit published a landmark study on the abuses of the mobile duopoly. The EU Commission relied on the UK report when it crafted the DMA, as did an American Congressman who introduced a similar bill that year. The same report's findings became the basis for new enforcement efforts in Japan and South Korea.
As Benjamin Franklin wrote, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening mine.” It’s wonderful to see Australian regulators picking up best practices from the EU, and we look forward to seeing what ideas Australia has for the rest of the world to copy.