EFF: Updates
What Europe’s New Gig Work Law Means for Unions and Technology
At EFF, we believe that tech rights are workers’ rights. Since the pandemic, workers of all kinds have been subjected to increasingly invasive forms of bossware. These are the “algorithmic management” tools that surveil workers on and off the job, often running on devices that (nominally) belong to workers, hijacking our phones and laptops. On the job, digital technology can become both a system of ubiquitous surveillance and a means of total control.
Enter the EU’s Platform Work Directive (PWD). The PWD was finalized in 2024, and every EU member state will have to implement (“transpose”) it by 2026. The PWD contains far-reaching measures to protect workers from abuse, wage theft, and other unfair working conditions.
But the PWD isn’t self-enforcing! Over the decades that EFF has fought for user rights, we’ve proved that having a legal right on paper isn’t the same as having that right in the real world. And workers are rarely positioned to take on their bosses in court or at a regulatory body. To do that, they need advocates.
That’s where unions come in. Unions are well-positioned to defend their members – and all workers (EFF employees are proudly organized under the International Federation of Professional and Technical Engineers).
The European Trade Union Confederation has just published “Negotiating the Algorithm,” a visionary – but detailed and down-to-earth – manual for unions seeking to leverage the PWD to protect and advance workers’ interests in Europe.
The report notes the alarming growth of algorithmic management, with 79% of European firms employing some form of bossware. Report author Ben Wray enumerates many of the harms of algorithmic management, such as “algorithmic wage discrimination,” where each worker is offered a different payscale based on surveillance data that is used to infer how economically desperate they are.
Algorithmic management tools can also be used for wage theft, for example, by systematically undercounting the distances traveled by delivery drivers or riders. These tools can also subject workers to danger by penalizing workers who deviate from prescribed tasks (for example, when riders are downranked for taking an alternate route to avoid a traffic accident).
Gig workers live under the constant threat of being “deactivated” (kicked off the app) and feel pressure to do unpaid work for clients who can threaten their livelihoods with one-star reviews. Workers also face automated deactivation: a whole host of “anti-fraud” tripwires can see workers deactivated without appeal. These risks do not befall all workers equally: Black and brown workers face a disproportionate risk of deactivation when they fail facial recognition checks meant to prevent workers from sharing an account (facial recognition systems make more errors when dealing with darker skin tones).
Algorithmic management is typically accompanied by a raft of cost-cutting measures, and workers under algorithmic management often find that their employer’s human resources department has been replaced with chatbots, web-forms, and seemingly unattended email boxes. When algorithmic management goes wrong, workers struggle to reach a human being who can hear their appeal.
For these reasons and more, the ETUC believes that unions need to invest in technical capacity to protect workers’ interests in the age of algorithmic management.
The report sets out many technological activities that unions can get involved with. At the most basic level, unions can invest in developing analytical capabilities, so that when they request logs from algorithmic management systems as part of a labor dispute, they can independently analyze those files.
But that’s just table stakes. Unions should also consider investing in “counter apps” that help workers. There are apps that act as an external check on employers’ automation, like the UberCheats app, which double-checked the mileage that Uber drivers were paid for (a sketch of that kind of cross-check appears below). There are apps that enable gig workers to collectively refuse lowball offers, raising the prevailing wage for all the workers in a region, such as the Brazilian StopClub app. Indonesian gig riders have a wide range of “tuyul” apps that let them modify the functionality of their dispatch apps. We love this kind of “adversarial interoperability.” Any time the users of technology get to decide how it works, we celebrate. And in the US, this sort of tech-enabled collective action by workers is likely to be shielded from antitrust liability even if the workers involved are classified as independent contractors.
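To make that concrete, here is a minimal sketch of the kind of mileage cross-check a tool like UberCheats performed. The trip-log field names are hypothetical, and a real tool would compare against the full GPS trace rather than just the endpoints:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def flag_underpaid_trips(trips, tolerance=0.9):
    # Flag trips where the distance the app paid for falls well below even
    # the straight-line distance between the trip's own endpoints.
    # (Field names here are hypothetical.)
    flagged = []
    for t in trips:
        floor_km = haversine_km(t["start_lat"], t["start_lon"],
                                t["end_lat"], t["end_lon"])
        if t["paid_km"] < tolerance * floor_km:
            flagged.append((t["trip_id"], t["paid_km"], round(floor_km, 2)))
    return flagged
```

Even this crude check gives a union analyst a defensible floor: if the app paid for fewer kilometers than the shortest possible path between pickup and drop-off, something is wrong.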
Developing in-house tech teams also gives unions the know-how to develop the tools for organizers and workers to coordinate their efforts to protect workers. The report acknowledges that this is a lot of tech work to ask individual unions to fund, and it moots the possibility of unions forming cooperative ventures to do this work for the unions in the co-op. At EFF, we regularly hear from skilled people who want to become public interest technologists, and we bet there’d be plenty of people who’d jump at the chance to do this work.
The new Platform Work Directive gives workers and their representatives the right to challenge automated decision-making, to peer inside the algorithms used to dispatch and pay workers, to speak to a responsible human about disputes, and to have their privacy and other fundamental rights protected on the job. It represents a big step forward for workers’ rights in the digital age.
But as the European Trade Union Confederation’s report reminds us, these rights are only as good as workers’ ability to claim them. After 35 years of standing up for people’s digital rights, we couldn’t agree more.
Tile’s Lack of Encryption Is a Danger for Users Everywhere
In research shared with Wired this week, security researchers detailed a series of vulnerabilities and design flaws with Life360’s Tile Bluetooth trackers that make it easy for stalkers and the company itself to track the location of Tile devices.
Tile trackers are small Bluetooth trackers, similar to Apple’s AirTags, but they work on their own network, not Apple’s. We’ve been raising concerns about these types of trackers since they were first introduced, and we provide guidance for finding them if you think someone is using them to track you without your knowledge.
EFF has worked on improving the Detecting Unwanted Location Trackers standard that Apple, Google, and Samsung use, and these companies have at least made incremental improvements. But Tile has done little to mitigate the concerns we’ve raised about stalkers using its devices to track people.
One of the core fundamentals of that standard is that Bluetooth trackers should rotate their MAC address, making them harder for a third party to track, and that they should encrypt the information they send. According to the researchers, Tile does neither.
This has a direct impact on the privacy of legitimate users and opens the device up to potentially even more dangerous stalking. Tile devices do have a rotating ID, but since the MAC address is static and unencrypted, anyone in the vicinity could pick up and track that Bluetooth device.
Other Bluetooth trackers don’t broadcast their MAC address, and instead use only a rotating ID, which makes it much harder for someone to record and track the movement of that tag. Apple, Google, and Samsung also all use end-to-end encryption when data about the location is sent to the companies’ servers, meaning the companies themselves cannot access that information.
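For illustration, here is a minimal sketch of one way a tracker can derive a broadcast identifier that rotates over time. This is not the scheme Apple, Google, or Samsung actually use (those rely on rotating cryptographic keys), just a demonstration of the principle:

```python
import hashlib, hmac, time

def broadcast_id(device_secret: bytes, period_s: int = 900) -> bytes:
    # Derive a short-lived ID from a per-device secret and the current
    # 15-minute window. A passive observer who lacks the secret cannot
    # link the IDs seen in one window to those seen in the next.
    window = int(time.time()) // period_s
    return hmac.new(device_secret, window.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:6]
```

Rotating the payload does little good, though, if the radio still advertises a fixed MAC address underneath it; that static layer is the permanent identifier the researchers found.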
In its privacy policy, Life360 states that, “You are the only one with the ability to see your Tile location and your device location.” But if the information from a tracker is sent to and stored by Tile in cleartext (i.e., unencrypted), as the researchers believe, then the company itself can see the location of the tags and their owners, turning them from item trackers into surveillance tools.
There are also issues with the “anti-theft mode” that Tile offers. The anti-theft setting hides the tracker from Tile’s “Scan and Secure” detection feature, so it can’t be easily found using the app. Ostensibly this is a feature meant to make it harder for a thief to just use the app to locate a tracker. In exchange for enabling the anti-theft feature, a user has to submit a photo ID and agree to pay a $1 million fine if they’re convicted of misusing the tracker.
But that’s only helpful if the stalker gets caught, which is a lot less likely when the person being tracked can’t use the anti-stalking protection feature in the app to find the tracker following them. As we’ve said before, it is impossible to make an anti-theft device that secretly notifies only the owner without also making a perfect tool for stalking.
Life360, the company that owns Tile, told Wired it “made a number of improvements” after the researchers reported them, but did not detail what those improvements are.
Many of these issues would be mitigated by doing what its competition is already doing: encrypting the broadcasts from its Bluetooth trackers and randomizing MAC addresses. Every company in the location tracker business has a responsibility to safeguard people, not just their lost keys.
Hey, San Francisco, There Should be Consequences When Police Spy Illegally
A San Francisco supervisor has proposed that police and other city agencies should face no financial consequences for breaking a landmark surveillance oversight law. In 2019, organizations from across the city worked together to help pass that law, which required law enforcement to get the approval of democratically elected officials before they bought and used new spying technologies. Bit by bit, the San Francisco Police Department and the Board of Supervisors have weakened that law—but one important feature of the law remained: if city officials are caught breaking this law, residents can sue to enforce it, and if they prevail they are entitled to attorney fees.
Now Supervisor Matt Dorsey believes that this important accountability feature is “incentivizing baseless but costly lawsuits that have already squandered hundreds of thousands of taxpayer dollars over bogus alleged violations of a law that has been an onerous mess since it was first enacted.”
Between 2010 and 2023, San Francisco had to spend roughly $70 million to settle civil suits brought against the SFPD for alleged misconduct ranging from shooting city residents to wrongfully firing whistleblowers. This is not “squandered” money; it is compensating people for injury. We are all governed by laws and are all expected to act accordingly—police are not exempt from consequences for using their power wrongfully. In the 21st century, this accountability must extend to using powerful surveillance technology responsibly.
The ability to sue a police department when they violate the law is called a “private right of action” and it is absolutely essential to enforcing the law. Government officials tasked with making other government officials turn square corners will rarely have sufficient resources to do the job alone, and often they will not want to blow the whistle on peers. But city residents empowered to bring a private right of action typically cannot do the job alone, either—they need a lawyer to represent them. So private rights of action provide for an attorney fee award to people who win these cases. This is a routine part of scores of public interest laws involving civil rights, labor safeguards, environmental protection, and more.
Without an enforcement mechanism to hold police accountable, many will just ignore the law. They’ve done it before. AB 481 is a California state law that requires police to get approval from elected officials before attempting to acquire military equipment, including drones. The SFPD knowingly ignored this law. If it had an enforcement mechanism, more police would follow the rules.
President Trump recently included San Francisco in a list of cities he would like the military to occupy. Law enforcement agencies across the country, either willingly or by compulsion, have been collaborating with federal agencies operating at the behest of the White House. So it would be best for cities to keep their co-optable surveillance infrastructure small, transparent, and accountable. With authoritarianism looming, now is not the time to make police harder to control—especially considering SFPD has already disclosed surveillance data to Immigration and Customs Enforcement (ICE) in violation of California state law.
We’re calling on the Board of Supervisors to reject Supervisor Dorsey’s proposal. If police want to avoid being sued and forced to pay the prevailing party’s attorney fees, they should avoid breaking the laws that govern police surveillance in the city.
Related Cases: Williams v. San Francisco

#StopCensoringAbortion: What We Learned and Where We Go From Here
This is the tenth and final installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
When we launched Stop Censoring Abortion, our goals were to understand how social media platforms were silencing abortion-related content, gather data and lift up stories of censorship, and hold social media companies accountable for the harm they have caused to the reproductive rights movement.
Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and individuals around the world, we confirmed what many already suspected: this speech is being removed, restricted, and silenced by platforms at an alarming rate. Together, our findings paint a clear picture of censorship in action: platforms’ moderation systems are not only broken, but are actively harming those seeking and sharing vital reproductive health information.
Here are the key lessons from this campaign: what we uncovered, how platforms can do better, and why pushing back against this censorship matters more now than ever.
Lessons Learned
Across our submissions, we saw systemic over-enforcement, vague and convoluted policies, arbitrary takedowns, sudden account bans, and ignored appeals. And in almost every case we reviewed, the posts and accounts in question did not violate any of the platform’s stated rules.
The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But most of the content submitted simply provided factual, educational information that clearly did not violate those rules. As we saw in the M+A Hotline’s case, this kind of misclassification deprives patients, advocates, and researchers of reliable information, and chills those trying to provide accurate and life-saving reproductive health resources.
In one submission, we even saw posts sharing educational abortion resources get flagged under the “Dangerous Organizations and Individuals” policy, a rule intended to prevent terrorism and criminal activity. We’ve seen this policy cause problems in the past, but in the reproductive health space, treating legal and accurate information as violent or unlawful only adds needless stigma and confusion.
Meta’s convoluted advertising policies add another layer of harm. There are specific, additional rules users must navigate to post paid content about abortion. While many of these rules still contain exceptions for purely educational content, Meta is vague about how and when those exceptions apply. And ads that seem like they should have been allowed were frequently flagged under rules about “prescription drugs” or “social issues.” This patchwork of unclear policies forces users to second-guess what content they can post or promote for fear of losing access to their networks.
In another troubling trend, many of our submitters reported experiencing shadowbanning and de-ranking, where posts weren’t removed but were instead quietly suppressed by the algorithm. This kind of suppression leaves advocates without any notice, explanation, or recourse—and severely limits their ability to reach people who need the information most.
Many users also faced sudden account bans without warning or clear justification. Though Meta’s policies dictate that an account should only be disabled or removed after “repeated” violations, organizations like Women Help Women received no warning before seeing their critical connections cut off overnight.
Finally, we learned that Meta’s enforcement outcomes were deeply inconsistent. Users often had their appeals denied and accounts suspended until someone with insider access to Meta could intervene. For example, the Red River Women’s Clinic, RISE at Emory, and Aid Access each had their accounts restored only after press attention or personal contacts stepped in. This reliance on backchannels underscores the inequity in Meta’s moderation processes: without connections, users are left unfairly silenced.
It’s Not Just Meta
Most of our submissions detailed suppression that took place on one of Meta’s platforms (Facebook, Instagram, WhatsApp, and Threads), so we decided to focus our analysis on Meta’s moderation policies and practices. But we should note that this problem is by no means confined to Meta.
On LinkedIn, for example, Stephanie Tillman told us about how she had her entire account permanently taken down, with nothing more than a vague notice that she had violated LinkedIn’s User Agreement. When Stephanie reached out to ask what violation she committed, LinkedIn responded that “due to our Privacy Policy we are unable to release our findings,” leaving her with no clarity or recourse. Stephanie suspects that the ban was related to her work with Repro TLC, an advocacy and clinical health care organization, and/or her posts relating to her personal business, Feminist Midwife LLC. But LinkedIn’s opaque enforcement meant she had no way to confirm these suspicions, and no path to restoring her account.
Screenshot submitted by Stephanie Tillman to EFF (with personal information redacted by EFF)
And over on TikTok, Brenna Miller, a creator who works in health care and frequently posts about abortion, posted a video of her “unboxing” an abortion pill care package from Carafem. Though Brenna’s video was factual and straightforward, TikTok removed it, saying that she had violated TikTok’s Community Guidelines.
Screenshot submitted by Brenna Miller to EFF
Brenna appealed the removal successfully at first, but a few weeks later the video was permanently deleted—this time, without any explanation or chance to appeal again.
Brenna’s far from the only one experiencing censorship on TikTok. Even Jessica Valenti, award-winning writer, activist, and author of the Abortion Every Day newsletter, recently had a video taken down from TikTok for violating its community guidelines, with no further explanation. The video she posted was about the Trump administration calling IUDs and the Pill ‘abortifacients.’ Jessica wrote:
Which rule did I break? Well, they didn’t say: but I wasn’t trying to sell anything, the video didn’t feature nudity, and I didn’t publish any violence. By process of elimination, that means the video was likely taken down as "misinformation." Which is…ironic.
These are not isolated incidents. In the Center for Intimacy Justice’s survey of reproductive rights advocates, health organizations, sex educators, and businesses, 63% reported having content removed on Meta platforms, 55% reported the same on TikTok, and 66% reported having ads rejected from Google platforms (including YouTube). Clearly, censorship of abortion-related content is a systemic problem across platforms.
How Platforms Can Do Better on Abortion-Related Speech
Based on our findings, we're calling on platforms to take these concrete steps to improve moderation of abortion-related speech:
- Publish clear policies. Users should not have to guess whether their speech is allowed or not.
- Enforce rules consistently. If a post does not violate a written standard, it should not be removed.
- Provide real transparency. Enforcement decisions must come with clear, detailed explanations and meaningful opportunities to appeal.
- Guarantee functional appeals. Users must be able to challenge wrongful takedowns without relying on insider contacts.
- Expand human review. Reproductive rights is a nuanced issue and can be too complex to be left entirely to error-prone automated moderation systems.
Don’t get it twisted: Users should not have to worry about their posts being deleted or their accounts getting banned when they share factual information that doesn’t violate platform policies. The onus is on platforms to get it together and uphold their commitments to users. But while platforms continue to fail, we’ve provided some practical tips to reduce the risk of takedowns, including:
- Consider limiting commonly flagged words and images. Posts with pill images or certain keyword combinations (like “abortion,” “pill,” and “mail”) were often flagged.
- Be as clear as possible. Vague phrases like “we can help you get what you need” might look like drug sales to an algorithm.
- Be careful with links. Direct links to pill providers were often flagged. Spell out the links instead.
- Expect stricter rules for ads. Boosted posts face harsher scrutiny than regular posts.
- Appeal wrongful enforcement decisions. Requesting an appeal might get you a human moderator or, even better, review from Meta’s independent Oversight Board.
- Document everything and back up your content. Screenshot all communications and enforcement decisions so you can share them with the press or advocacy groups, and export your data regularly in case your account vanishes overnight.
Abortion information saves lives, and social media is the primary—and sometimes only—way for advocates and providers to get accurate information out to the masses. But now we have evidence that this censorship is widespread, unjustified, and harming communities who need access to this information most.
Platforms must be held accountable for these harms, and advocates must continue to speak out. The more we push back—through campaigns, reporting, policy advocacy, and user action—the harder it will be for platforms to look away.
So keep speaking out, and keep demanding accountability. Platforms need to know we're paying attention—and we won't stop fighting until everyone can share information about abortion freely, safely, and without fear of being silenced.
This is the tenth and final post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion.
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Tips to Protect Your Posts About Reproductive Health From Being Removed
This is the ninth installment in a blog series documenting EFF’s findings from the Stop Censoring Abortion campaign. You can read additional posts here.
Meta has been getting content moderation wrong for years, like most platforms that host user-generated content. Sometimes it’s a result of deliberate design choices—privacy rollbacks, opaque policies, features that prioritize growth over safety—made even when the company knows that those choices could negatively impact users. Other times, it’s simply the inevitable outcome of trying to govern billions of posts with a mix of algorithms and overstretched human reviewers. Importantly, users shouldn’t have to worry about their posts being deleted or their accounts getting banned when they share factual health information that doesn’t violate the platforms' policies. But knowing more about what the algorithmic moderation is likely to flag can help you to avoid its mistakes.
We analyzed the roughly one hundred survey submissions we received from social media users in response to our Stop Censoring Abortion campaign. Their stories revealed some clear patterns: certain words, images, and phrases seemed to trigger takedowns, even when posts didn’t come close to violating Meta’s rules.
For example, your post linking to information on how people are accessing abortion pills online clearly is not an offer to buy or sell pills, but an algorithm, or a human content reviewer who doesn’t know for sure, might wrongly flag it for violating Meta’s policies on promoting or selling “restricted goods.”
That doesn’t mean you’re powerless. For years, people have used “algospeak”—creative spelling, euphemisms, or indirection—to sidestep platform filters. Abortion rights advocates are now forced into similar strategies, even when their speech is perfectly legal. It’s not fair, but it might help you keep your content online. Here are some things we learned from our survey:
Practical Tips to Reduce the Risk of Takedowns
While traditional social media platforms can help people reach larger audiences, using them also generally means you have to hand over control of what you and others are able to see to the people who run the company. This is the deal that large platforms offer—and while most of us want platforms to moderate some content (even if that moderation is imperfect), current systems of moderation often reflect existing societal power imbalances and impact marginalized voices the most.
There are ways companies and governments could better balance the power between users and platforms. In the meantime, there are steps you can take right now to break the hold these platforms have:
- Images and keywords matter. Posts with pill images, or accounts with “pill” in their names, were flagged often—even when the posts weren’t offering to sell medication. Before posting, consider whether you need to include an image of a pill or the word “pill,” or whether there’s another way to communicate your message.
- Clarity beats vagueness. Saying “we can help you find what you need” or “contact me for more info” might sound innocuous, but to an algorithm, it can look like an offer to sell drugs. Spell out what kind of support you do and don’t provide—for example: “We can talk through options and point you toward trusted resources. We don’t provide medical services or medication.”
- Be careful with links. Direct links to organizations or services that provide abortion pills were often flagged, even if the organizations operate legally. Instead of linking, try spelling out the name of the site or account.
- Certain word combos are red flags. Posts that included words like “mifepristone,” “abortion,” and “mail” together were frequently removed. You may still want to use them—they’re accurate and important—but know they make your post more likely to be flagged (see the sketch after these tips).
- Ads are even stricter. Meta requires pharmaceutical advertisers to prove they’re licensed in the countries they target. If you boost posts, assume the more stringent advertising standards will be applied.
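To see why those last two tips matter, here is a toy sketch of the kind of naive keyword-combination rule an automated filter might apply. It is purely illustrative (Meta’s real classifiers are machine-learned and not public), but it shows how a factual, educational post can trip the same wire as a sales pitch:

```python
# Purely illustrative; not Meta's actual policy or classifier.
RED_FLAG_COMBOS = [
    {"abortion", "pill", "mail"},
    {"mifepristone", "mail"},
]

def naively_flagged(post_text: str) -> bool:
    # Strip basic punctuation, then check whether any flagged word
    # combination appears in the post.
    words = set(post_text.lower().replace(",", " ").replace(".", " ").split())
    return any(combo <= words for combo in RED_FLAG_COMBOS)

# An educational sentence trips the rule just as easily as a drug ad:
print(naively_flagged("Research shows the abortion pill is safe to receive by mail."))  # True
```

A rule like this has no way to distinguish education from commerce, which is exactly the failure mode our submissions kept documenting.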
Big platforms give you reach, but they also set the rules—and those rules usually favor corporate interests over human rights. You don’t have to accept that as the only way forward:
- Keep a backup. Export your data regularly so you’re not left empty-handed if your account disappears overnight.
- Build your own space. Hosting a website isn’t free, but it puts you in control.
- Explore other platforms. Newsletters, Discord, and other community tools offer more control than Facebook or Instagram. Decentralized platforms like Mastodon and Bluesky aren’t perfect, but they show what’s possible when moderation isn’t dictated from the top down. (Learn more about the differences between Mastodon, Bluesky, and Threads, and how these kinds of platforms help us build a better internet.)
- Push for interoperability. Imagine being able to take your audience with you when you leave a platform. That’s the future we should be fighting for. (For more on interoperability and Meta, check out this video where Cory Doctorow explains what an interoperable Facebook would look like.)
If you’re working in abortion access—whether as a provider, activist, or volunteer—your privacy and security matter. The same is true for patients. Check out EFF’s Surveillance Self-Defense for tailored guides. Look at resources from groups like Digital Defense Fund and learn how location tracking tools can endanger abortion access. If you run an organization, consider some of the ways you can minimize what information you collect about patients, clients, or customers, in our guide to Online Privacy for Nonprofits.
Platforms like Meta insist they want to balance free expression and safety, but their blunt systems consistently end up reinforcing existing inequalities—silencing the very people who most need to be heard. Until they do better, it’s on us to protect ourselves, share our stories, and keep building the kind of internet that respects our rights.
This is the ninth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Flock’s Gunshot Detection Microphones Will Start Listening for Human Voices
Flock Safety, the police technology company most notable for its extensive network of automated license plate readers spread throughout the United States, is rolling out a new and troubling product: detection of “human distress” via audio. As part of its suite of technologies, Flock has been pushing Raven, its version of acoustic gunshot detection. These devices capture sounds in public places and use machine learning to try to identify gunshots and then alert police—but EFF has long warned that they are also high-powered microphones parked above densely populated city streets. Cities now have one more reason to follow the lead of many other municipalities and cancel their Flock contracts before this new feature causes civil liberties harms to residents and headaches for city governments.
In marketing materials, Flock has been touting new features for its Raven product—including the ability of the device to alert police based on sounds, including “distress.” The online ad for the product, which allows cities to apply for early access to the technology, shows the image of police getting an alert for “screaming.”
It’s unclear how this technology works. For acoustic gunshot detection, generally the microphones are looking for sounds that would signify gunshots (though in practice they often mistake car backfires or fireworks for gunshots). Flock needs to come forward now with an explanation of exactly how their new technology functions. It is unclear how these devices will interact with state “eavesdropping” laws that limit listening to or recording the private conversations that often take place in public.
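As a rough illustration of how acoustic event detection generally works (Flock has not disclosed its design, so this is only a generic sketch), a detector might flag audio windows whose energy spikes far above the background noise floor, then pass those windows to a machine-learning classifier:

```python
# Generic sketch of acoustic impulse detection; NOT Flock's implementation.
import numpy as np

def impulse_times(samples: np.ndarray, rate: int,
                  window_s: float = 0.05, factor: float = 8.0) -> np.ndarray:
    # Split the audio into short windows and compute each window's energy.
    win = int(rate * window_s)
    frames = samples[: len(samples) // win * win].astype(float).reshape(-1, win)
    energy = (frames ** 2).mean(axis=1)
    # Use the median as a robust estimate of the background noise floor.
    background = np.median(energy) + 1e-12
    # Return the start times (in seconds) of windows that spike above it.
    return np.nonzero(energy > factor * background)[0] * window_s
```

The hard part is the classification step that follows: telling gunshots apart from fireworks and car backfires is where these systems rack up false alarms, and a classifier listening for “distress” in human voices would face an even murkier version of the same problem.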
Flock is no stranger to creating legal challenges for the cities and states that adopt its products. In Illinois, Flock was accused of violating state law by allowing Immigration and Customs Enforcement (ICE), a federal agency, access to license plate reader data taken within the state. That’s not all. In 2023, a North Carolina judge halted the installation of Flock cameras statewide because the company was operating in the state without a license. When the city of Evanston, Illinois recently canceled its contract with Flock, it ordered the company to take down its license plate readers—only for Flock to mysteriously reinstall them a few days later. The city has now sent Flock a cease-and-desist order and, in the meantime, has put black tape over the cameras. For some, the technology isn’t worth its mounting downsides. As one Illinois village trustee wrote while explaining his vote to cancel the city’s contract with Flock, “According to our own Civilian Police Oversight Commission, over 99% of Flock alerts do not result in any police action.”
Gunshot detection technology is dangerous enough as it is—police showing up to alerts they think are gunfire, only to find children playing with fireworks, is a recipe for innocent people getting hurt. This isn’t hypothetical: in Chicago, a child really was shot at by police who, thanks to a ShotSpotter alert, thought they were responding to a shooting. Introducing a new feature that lets these pre-installed Raven microphones all over cities begin listening for human voices in distress is likely to open up a whole new can of unforeseen legal, civil liberties, and even bodily safety consequences.
Privacy Harm Is Harm
Every day, corporations track our movements through license plate scanners, building detailed profiles of where we go, when we go there, and who we visit. When they do this to us in violation of data privacy laws, we’ve suffered a real harm—period. We shouldn’t need to prove we’ve suffered additional damage, such as physical injury or monetary loss, to have our day in court.
That's why EFF is proud to join an amicus brief in Mata v. Digital Recognition Network, a lawsuit by drivers against a corporation that allegedly violated a California statute that regulates Automatic License Plate Readers (ALPRs). The state trial court erroneously dismissed the case by misinterpreting this data privacy law to require proof of extra harm beyond privacy harm. The brief was written by the ACLU of Northern California, Stanford’s Juelsgaard Clinic, and UC Law SF’s Center for Constitutional Democracy.
The amicus brief explains:
This case implicates critical questions about whether a California privacy law, enacted to protect people from harmful surveillance, is not just words on paper, but can be an effective tool for people to protect their rights and safety.
California’s Constitution and laws empower people to challenge harmful surveillance at its inception without waiting for its repercussions to manifest through additional harms. A foundation for these protections is article I, section 1, which grants Californians an inalienable right to privacy.
People in the state have long used this constitutional right to challenge the privacy-invading collection of information by private and governmental parties, not only harms that are financial, mental, or physical. Indeed, widely understood notions of privacy harm, as well as references to harm in the California Code, also demonstrate that term’s expansive meaning.
What’s At Stake
The defendant, Digital Recognition Network, also known as DRN Data, is a subsidiary of Motorola Solutions that provides access to a massive searchable database of ALPR data collected by private contractors. Its customers include law enforcement agencies and private companies, such as insurers, lenders, and repossession firms. DRN is the sister company to the infamous surveillance vendor Vigilant Solutions (now Motorola Solutions), and together they have provided data to ICE through a contract with Thomson Reuters.
The consequences of weak privacy protections are already playing out across the country. This year alone, authorities in multiple states have used license plate readers to hunt for people seeking reproductive healthcare. Police officers have used these systems to stalk romantic partners and monitor political activists. ICE has tapped into these networks to track down immigrants and their families for deportation.
Strong Privacy Laws
This case could determine whether privacy laws have real teeth or are just words on paper. If corporations can collect your personal information with impunity—knowing that unless you can prove bodily injury or economic loss, you can’t fight back—then privacy laws lose value.
We need strong data privacy laws. We need a private right of action so when a company violates our data privacy rights, we can sue them. We need a broad definition of “harm,” so we can sue over our lost privacy rights, without having to prove collateral injury. EFF wages this battle when writing privacy laws, when interpreting those laws, and when asserting “standing” in federal and state courts.
The fight for privacy isn’t just about legal technicalities. It’s about preserving your right to move through the world without being constantly tracked, catalogued, and profiled by corporations looking to profit from your personal information.
You can read the amicus brief here.
The UK Is Still Trying to Backdoor Encryption for Apple Users
The Financial Times reports that the U.K. is once again demanding that Apple create a backdoor into its encrypted backup services. The only change since the last demand is that the order allegedly applies only to British users. That doesn’t make it any better.
The demand uses a power called a “Technical Capability Notice” (TCN) in the U.K.’s Investigatory Powers Act. At the time of its signing, we noted this law would likely be used to demand that Apple spy on its users.
After the U.K. government first issued the TCN in January, Apple was forced to either create a backdoor or block its Advanced Data Protection feature—which turns on end-to-end encryption for iCloud—for all U.K. users. The company decided to remove the feature in the U.K. instead of creating the backdoor.
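To see what is at stake in that design, here is a generic sketch of client-side encryption (not Apple’s actual Advanced Data Protection protocol): the backup is encrypted with a key that never leaves the user’s devices, so the server holds only ciphertext it cannot read.

```python
# Generic sketch of end-to-end encrypted backup; not Apple's ADP protocol.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()   # exists only on the user's devices
ciphertext = Fernet(device_key).encrypt(b"private notes and photos")

# Only `ciphertext` is uploaded. Without device_key, the provider has
# nothing readable to hand over, no matter what a TCN demands.
assert Fernet(device_key).decrypt(ciphertext) == b"private notes and photos"
```

A “backdoor” in this design means either giving the provider a copy of the key or quietly turning the client-side encryption off, and either change weakens security for every user, not just the order’s targets.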
The initial order from January targeted the data of all Apple users. In August, the U.S. claimed the U.K. withdrew the demand, but Apple did not re-enable Advanced Data Protection. The new order provides insight into why: the U.K. was just rewriting it to apply only to British users.
This is still an unsettling overreach that makes U.K. users less safe and less free. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud. It sets a dangerous precedent to demand similar data from other companies, and provides a runway for other authoritarian governments to issue comparable orders. The news of continued server-side access to users’ data comes just days after the U.K. government announced an intrusive mandatory digital ID scheme, framed as a measure against illegal migration.
A tribunal hearing was initially set to take place in January 2026, though it’s currently unclear if that will proceed or if the new order changes the legal process. Apple must continue to refuse these types of backdoors. Breaking end-to-end encryption for one country breaks it for everyone. These repeated attempts to weaken encryption violate fundamental human rights and destroy our right to private spaces.
❌ How Meta Is Censoring Abortion | EFFector 37.13
It's spooky season—but while jump scares may get your heart racing, catching up on digital rights news shouldn't! Our EFFector newsletter has got you covered with easy, bite-sized updates to keep you up-to-date.
In this issue, we spotlight new ALPR-enhanced police drones and how local communities can push back; unpack the ongoing TikTok “ban,” which we’ve consistently said violates the First Amendment; and celebrate a privacy win—abandoning a phone doesn't mean you've also abandoned your privacy rights.
Prefer to listen in? Check out our audio companion, where we interview EFF Staff Attorney Lisa Femia, who explains the findings from our investigation into abortion censorship on social media. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.13 - ❌ HOW META IS CENSORING ABORTION
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
EFF Is Standing Up for Federal Employees—Here’s How You Can Stand With Us
Federal employees play a key role in safeguarding the civil liberties of millions of Americans. Our rights to privacy and free expression can only survive when we stand together to push back against overreach and ensure that technology serves all people—not just the powerful.
That’s why EFF jumped to action earlier this year, when the U.S. Office of Personnel Management (OPM) handed over sensitive employee data—Social Security numbers, benefits data, work histories, and more—to Elon Musk’s Department of Government Efficiency (DOGE). This was a blatant violation of the Privacy Act of 1974, and it put federal workers directly at risk.
We didn’t let it stand. Alongside federal employee unions, EFF sued OPM and DOGE in February. In June, we secured a victory when a judge ruled we were entitled to a preliminary injunction and ordered OPM to provide an accounting of DOGE’s access to employee records. Your support makes this possible.
Now the fight continues—and your support matters more than ever. The Office of Personnel Management is planting the seeds to undermine and potentially remove the Combined Federal Campaign (CFC), the main program federal employees and retirees have long used to support charities—including EFF. For now, you can still give to EFF through the CFC this year (use our ID: 10437) and we’d appreciate your support! But with the program’s uncertain future, direct support is the best way to keep our work going strong for years to come.
SUPPORT EFF'S WORK DIRECTLY, BECOME A MEMBER!
When you donate directly, you join a movement of lawyers, activists, and technologists who defend privacy, call out censorship, and push back against abuses of power—everywhere from the courts to Congress to the streets. As a member, you’ll also receive insider updates, invitations to exclusive events, and conversation-starting EFF gear.
Plus, you can sustain our mission long-term with a monthly or annual donation!
Stand with EFF. Protect privacy. Defend free expression. Support our work today.
Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management

Opt Out October: Daily Tips to Protect Your Privacy and Security
Trying to take control of your online privacy can feel like a full-time job. But if you break it up into small tasks and take on one project at a time, the process of protecting your privacy becomes much easier. This month we’re going to do just that. For the month of October, we’ll update this post with new tips every weekday that show various ways you can opt yourself out of the ways tech giants surveil you.
Online privacy isn’t dead. But the tech giants make it a pain in the butt to achieve. With these incremental tweaks to the services we use, we can throw sand in the gears of the surveillance machine and opt out of the ways tech companies attempt to optimize us into advertisement and content viewing machines. We’re also pushing companies to make more privacy-protective defaults the norm, but until that happens, the onus is on all of us to dig into the settings.
All month long we’ll share tips, including some with help from our friends at Consumer Reports’ Security Planner tool. Use the Table of Contents here to jump straight to any tip.
Table of Contents
- Tip 1: Establish Good Digital Hygiene
- Tip 2: Coming October 2
- Tip 3: Coming October 3
- Tip 4: Coming October 6
- Tip 5: Coming October 7
- Tip 6: Coming October 8
- Tip 7: Coming October 9
- Tip 8: Coming October 10
- Tip 9: Coming October 14
- Tip 10: Coming October 15
- Tip 11: Coming October 16
- Tip 12: Coming October 17
- Tip 13: Coming October 20
- Tip 14: Coming October 21
- Tip 15: Coming October 22
- Tip 16: Coming October 23
- Tip 17: Coming October 24
- Tip 18: Coming October 27
- Tip 19: Coming October 28
- Tip 20: Coming October 29
- Tip 21: Coming October 30
- Tip 22: Coming October 31
Before we can get into the privacy weeds, we need to first establish strong basics. Namely, two security fundamentals: using strong passwords (a password manager helps simplify this) and two-factor authentication for your online accounts. Together, they can significantly improve your online privacy by making it much harder for your data to fall into the hands of a stranger.
Using unique passwords for every web login means that if your account information ends up in a data breach, it won’t give bad actors an easy way to unlock your other accounts. Since it’s impossible for all of us to remember a unique password for every login we have, most people will want to use a password manager, which generates and stores those passwords for you.
Two-factor authentication is the second lock on those same accounts. In order to log in to, say, Facebook for the first time on a particular computer, you’ll need to provide a password and a “second factor,” usually an always-changing numeric code generated in an app or sent to you on another device. This makes it much harder for someone else to get into your account because it’s less likely they’ll have both a password and the temporary code.
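For the curious, that always-changing code is typically a time-based one-time password (TOTP, standardized in RFC 6238). Here is a minimal sketch of how authenticator apps compute it:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    # Your authenticator app and the service share this secret; both
    # derive the same short-lived code from the current 30-second window.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a 6-digit code that rotates every 30 seconds
```

Because the code expires within seconds, a phished password alone isn’t enough to get into the account.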
Getting started can feel a little overwhelming if you’re new to online privacy! Aside from our guides on Surveillance Self-Defense, we recommend taking a look at Consumer Reports’ Security Planner for help setting up your first password manager and turning on two-factor authentication.
Come back tomorrow for another tip!
Platforms Have Failed Us on Abortion Content. Here's How They Can Fix It.
This is the eighth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
In our Stop Censoring Abortion series, we’ve documented the many ways that reproductive rights advocates have faced arbitrary censorship on Meta platforms. Since social media is the primary—and sometimes the only—way that providers, advocates, and communities can safely and effectively share timely and accurate information about abortion, it’s vitally important that platforms take steps to proactively protect this speech.
Yet, even though Meta says its moderation policies allow abortion-related speech, its enforcement of those policies tells a different story. Posts are being wrongfully flagged, accounts are disappearing without warning, and important information is being removed without clear justification.
So what explains the gap between Meta’s public commitments and its actions? And how can we push platforms to be better—to, dare we say, #StopCensoringAbortion?
After reviewing nearly one hundred submissions and speaking with Meta to clarify their moderation practices, here’s what we’ve learned.
Platforms’ Editorial Freedom to Moderate User Content
First, given the current landscape—with some states trying to criminalize speech about abortion—you may be wondering how much leeway platforms like Facebook and Instagram have to choose their own content moderation policies. In other words, can social media companies proactively commit to stop censoring abortion?
The answer is yes. Social media companies, including Meta, TikTok, and X, have the constitutionally protected First Amendment right to moderate user content however they see fit. They can take down posts, suspend accounts, or suppress content for virtually any reason.
The Supreme Court explicitly affirmed this right in 2024 in Moody v. NetChoice, holding that social media platforms, like newspapers, bookstores, and art galleries before them, have the First Amendment right to edit the user speech that they host and deliver to other users on their platforms. The Court also established that the government has a very limited role in dictating what social media platforms must (or must not) publish. This editorial discretion, whether granted to individuals, traditional press, or online platforms, is meant to protect these institutions from government interference and to safeguard the diversity of the public sphere—so that important conversations and movements like this one have the space to flourish.
Meta’s Broken Promises
Unfortunately, Meta is failing to meet even these basic standards. Again and again, its policies say one thing while its actual enforcement says another.
Meta has stated its intent to allow conversations about abortion to take place on its platforms. In fact, as we’ve written previously in this series, Meta has publicly insisted that posts with educational content about abortion access should not be censored, even admitting in several public statements to moderation mistakes and over-enforcement. One spokesperson told the New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space. . . . That’s why we allow posts and ads about, discussing and debating abortion.”
Meta’s platform policies largely reflect this intent. But as our campaign reveals, Meta’s enforcement of those policies is wildly inconsistent. Time and again, users—including advocacy organizations, healthcare providers, and individuals sharing personal stories—have had their content taken down even though it did not actually violate any of Meta’s stated guidelines. Worse, they are often left in the dark about what happened and how to fix it.
Arbitrary enforcement like this harms abortion activists and providers by cutting them off from their audiences, wasting the effort they spend creating resources and building community on these platforms, and silencing their vital reproductive rights advocacy. And it goes without saying that it hurts users, who need access to timely, accurate, and sometimes life-saving information. At a time when abortion rights are under attack, platforms with enormous resources—like Meta—have no excuse for silencing this important speech.
Our Call to Platforms
Our case studies have highlighted that when users can’t rely on platforms to apply their own rules fairly, the result is a widespread chilling effect on online speech. That’s why we are calling on Meta to adopt the following urgent changes.
1. Publish clear and understandable policies.
Too often, platforms’ vague rules force users to guess what content might be flagged in order to avoid shadowbanning or worse, leading to needless self-censorship. To prevent this chilling effect, platforms should strive to offer users the greatest possible transparency and clarity on their policies. The policies should be clear enough that users know exactly what is allowed and what isn’t so that, for example, no one is left wondering how exactly a clip of women sharing their abortion experiences could be mislabeled as violent extremism.
2. Enforce rules consistently and fairly.
If content doesn’t violate a platform’s stated policies, it should not be removed. And, per Meta’s own policies, an account should not be suspended for abortion-related content violations if it has not received any prior warnings or “strikes.” Yet as we’ve seen throughout this campaign, abortion advocates repeatedly face takedowns or even account suspensions over posts that fall entirely within Meta’s Community Standards. On such a massive scale, this selective enforcement erodes trust and chills entire communities from participating in critical conversations.
3. Provide meaningful transparency in enforcement actions.
When content is removed, Meta tends to give vague, boilerplate explanations—or none at all. Instead, users facing takedowns or suspensions deserve detailed and accurate explanations that state the policy violated, reflect the reasoning behind the actual enforcement decision, and explain how to appeal the decision. Clear explanations are key to preventing wrongful censorship and ensuring that platforms remain accountable to their commitments and to their users.
4. Guarantee functional appeals.
Every user deserves a real chance to challenge improper enforcement decisions and have them reversed. But based on our survey responses, it seems Meta’s appeals process is broken. Many users reported that they do not receive responses to appeals, even when the content did not violate Meta’s policies, and thus have no meaningful way to challenge takedowns. Alarmingly, we found that a user’s best (and sometimes only) chance at success is to rely on a personal connection at Meta to right wrongs and restore content. This is unacceptable. Users should have a reliable and efficient appeal process that does not depend on insider access.
5. Expand human review.
Finally, automated systems cannot always handle the nuance of sensitive issues like reproductive health and advocacy. They misinterpret words, miss important cultural or political context, and wrongly flag legitimate advocacy as “dangerous.” Therefore, we call upon platforms to expand the role that human moderators play in reviewing auto-flagged content violations—especially when posts involve sensitive healthcare information or political expression.
Users Deserve Better
Meta has already made the choice to allow speech about abortion on its platforms, and it has not hesitated to highlight that commitment whenever it has faced scrutiny. Now it’s time for Meta to put its money where its mouth is.
Users deserve better than a system where rules are applied at random, appeals go nowhere, and vital reproductive health information is needlessly (or negligently) silenced. If Meta truly values free speech, it must commit to moderating with fairness, transparency, and accountability.
This is the eighth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Gate Crashing: An Interview Series
There is a lot of bad on the internet and it seems to only be getting worse. But one of the things the internet did well, and is worth preserving, is nontraditional paths for creativity, journalism, and criticism. As governments and major corporations throw up more barriers to expression—and more and more gatekeepers try to control the internet—it’s important to learn how to crash through those gates.
In EFF's interview series, Gate Crashing, we talk to people who have used the internet to take nontraditional paths to the very traditional worlds of journalism, creativity, and criticism. We hope it's both inspiring to see these people and enlightening for anyone trying to find voices they like online.
Our mini-series will drop an episode each month, closing out 2025 in style.
- Episode 1: Fanfiction Becomes Mainstream – Launching October 1*
- Episode 2: From DIY to Publishing – Launching November 1
- Episode 3: A New Path for Journalism – Launching December 1
Be sure to mark your calendar or check our socials on drop dates. If you have a friend or colleague who might be interested in watching our series, please forward this link: eff.org/gatecrashing
For over 35 years, EFF members have empowered attorneys, activists, and technologists to defend civil liberties and human rights online for everyone.
Tech should be a tool for the people, and we need you in this fight.
* This interview was originally published in December 2024. No changes have been made.
Wave of Phony News Quotes Affects Everyone—Including EFF
Whether due to generative AI hallucinations or human sloppiness, the internet is increasingly rife with bogus news content—and you can count EFF among the victims.
WinBuzzer published a story June 26 with the headline, “Microsoft Is Getting Sued over Using Nearly 200,000 Pirated Books for AI Training,” containing this passage: [screenshot of the WinBuzzer passage quoting EFF’s Corynne McSherry]
That quotation from EFF’s Corynne McSherry was cited again in two subsequent, related stories by the same journalist—one published July 27, the other August 27.
But the link in that original June 26 post was fake. Corynne McSherry never wrote such an article, and the quote was bogus.
Interestingly, we noted a similar issue with a June 13 post by the same journalist, in which he cited work by EFF Director of Cybersecurity Eva Galperin; this quote included the phrase “get-out-of-jail-free card” too.
Again, the link he inserted leads nowhere because Eva Galperin never wrote such a blog or white paper.
When EFF reached out, the journalist—WinBuzzer founder and editor-in-chief Markus Kasanmascheff—acknowledged via email that the quotes were bogus.
“This indeed must be a case of AI slop. We are using AI tools for research/source analysis/citations. I sincerely apologize for that and this is not the content quality we are aiming for,” he wrote. “I myself have noticed that in the particular case of the EFF for whatever reason non-existing quotes are manufactured. This usually does not happen and I have taken the necessary measures to avoid this in the future. Every single citation and source mention must always be double checked. I have been doing this already but obviously not to the required level.
“I am actually manually editing each article and using AI for some helping tasks. I must have relied too much on it,” he added.
AI slop abounds
It’s not an isolated incident. Media companies large and small are using AI to generate news content because it’s cheaper than paying journalists’ salaries, but those savings can come at the cost of the outlets’ reputations.
The U.K.’s Press Gazette reported last month that Wired and Business Insider had to remove news features written by one freelance journalist over concerns that the articles were likely AI-generated works of fiction: “Most of the published stories contained case studies of named people whose details Press Gazette was unable to verify online, casting doubt on whether any of the quotes or facts contained in the articles are real.”
And back in May, the Chicago Sun-Times had to apologize after publishing an AI-generated list of books that would make good summer reads—with 10 of the 15 recommended book descriptions and titles found to be “false, or invented out of whole cloth.”
As journalist Peter Sterne wrote for Nieman Lab in 2022:
Another potential risk of relying on large language models to write news articles is the potential for the AI to insert fake quotes. Since the AI is not bound by the same ethical standards as a human journalist, it may include quotes from sources that do not actually exist, or even attribute fake quotes to real people. This could lead to false or misleading reporting, which could damage the credibility of the news organization. It will be important for journalists and newsrooms to carefully fact check any articles written with the help of AI, to ensure the accuracy and integrity of their reporting.
(Or did he write that? Sterne disclosed in that article that he used OpenAI’s ChatGPT-3 to generate that paragraph, ironically enough.)
The Radio Television Digital News Association issued guidelines a few years ago for the use of AI in journalism, and the Associated Press is among many outlets that have developed guidelines of their own. The Poynter Institute offers a template for developing such policies.
Nonetheless, some journalists or media outlets have been caught using AI to generate stories including fake quotes; for example, the Associated Press reported last year that a Wyoming newspaper reporter had filed at least seven stories that included AI-generated quotations from six people.
WinBuzzer wasn’t the only outlet to falsely quote EFF this year. An April 19 article in Wander contained another bogus quotation from Eva Galperin:
April 19 Wander clipping with fake quote from Eva Galperin
An email to the outlet demanding the article’s retraction went unanswered.
In another case, WebProNews published a July 24 article quoting Eva Galperin under the headline “Risika Data Breach Exposes 100M Swedish Records to Fraud Risks,” but Eva confirmed she’d never spoken with them or given that quotation to anyone. The article no longer seems to exist on the outlet’s own website, but it was captured by the Internet Archive’s Wayback Machine.
Screenshot of WebProNews’ July 24 article, as captured by the Wayback Machine
A request for comment made through WebProNews’ “Contact Us” page went unanswered, and then they did it again on September 2, this time misattributing a statement to Corynne McSherry:
Screenshot of WebProNews’ September 2 article misattributing a statement to Corynne McSherry
No such article in The Verge seems to exist, and the statement is not at all in line with EFF’s stance.
The top prize for audacious falsity goes to a June 18 article in the Arabian Post, since removed from the site after we flagged it to an editor. The Arabian Post is part of the Hyphen Digital Network, which describes itself as being “at the forefront of AI innovation” and offering “software solutions that streamline workflows to focus on what matters most: insightful storytelling.” The article in question included this passage:
Privacy advocate Linh Nguyen from the Electronic Frontier Foundation remarked that community monitoring tools are playing a civic role, though she warned of the potential for misinformation. “Crowdsourced neighbourhood policing walks a thin line—useful in forcing transparency, but also vulnerable to misidentification and fear-mongering,” she noted in a discussion on digital civil rights.
Screenshot of the Arabian Post passage, as captured via Muck Rack on June 19
Nobody at EFF recalls anyone named Linh Nguyen ever having worked here, nor have we been able to find anyone by that name who works in the digital privacy sector. So not only was the quotation fake, but apparently the purported source was, too.
Now, EFF is all about having our words spread far and wide. Per our copyright policy, any and all original material on the EFF website may be freely distributed at will under the Creative Commons Attribution 4.0 International License (CC-BY), unless otherwise noted.
But we don't want AI and/or disreputable media outlets making up words for us. False quotations that misstate our positions damage the trust that the public and more reputable media outlets have in us.
If you’re worried about this (and rightfully so), the best thing a news consumer can do is invest a little time and energy in learning to discern the real from the fake. It’s unfortunate that this burden falls on the public, but while we’re all adjusting to new tools and a new normal, a little effort now goes a long way.
As we’ve noted before in the context of election misinformation, the nonprofit journalism organization ProPublica has published a handy guide about how to tell if what you’re reading is accurate or “fake news.” And the International Federation of Library Associations and Institutions infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends.
Decoding Meta's Advertising Policies for Abortion Content
This is the seventh installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
For users hoping to promote or boost an abortion-related post on Meta platforms, the Community Standards are just step one. While the Community Standards apply to all posts, paid posts and advertisements must also comply with Meta's Advertising Standards. It’s easy to understand why Meta places extra requirements on paid content. In fact, its “advertising policy principles” outline several important and laudable goals, including promoting transparency and protecting users from scams, fraud, and unsafe and discriminatory practices.
But additional standards bring additional content moderation, and with that comes increased potential for user confusion and moderation errors. Meta’s ad policies, like its enforcement policies, are vague on a number of important questions. Because of this, it’s no surprise that Meta's ad policies repeatedly came up as we reviewed our Stop Censoring Abortion submissions.
There are two important things to understand about these ad policies. First, the ad policies do indeed impose stricter rules on content about abortion—and specifically medication abortion—than Meta’s Community Standards do. To help users better understand what is and isn’t allowed, we took a closer look at the policies and what Meta has said about them.
Second, despite these requirements, the ad policies do not categorically block abortion-related posts from being promoted as ads. In other words, while Meta’s ad policies introduce extra hurdles, they should not, in theory, be a complete barrier to promoting abortion-related posts as boosted content. Still, our analysis revealed that Meta is falling short in several areas.
What’s Allowed Under the Drugs and Pharmaceuticals Policy?
When EFF asked Meta about potential ad policy violations, the company first pointed to its Drugs and Pharmaceuticals policy. In the abortion care context, this policy applies to paid content specifically about medication abortion and use of abortion pills. Ads promoting these and other prescription drugs are permitted, but there are additional requirements:
- To reduce risks to consumers, Meta requires advertisers to prove they’re appropriately licensed and get prior authorization from Meta.
- Authorization is limited to online pharmacies, telehealth providers, and pharmaceutical manufacturers.
- The ads also must target only people 18 and older, and only in the countries in which the advertiser is licensed.
Understanding what counts as “promoting prescription drugs” is where things get murky. Crucially, the written policy states that advertisers do not need authorization to run ads that “educate, advocate or give public service announcements related to prescription drugs” or that “promote telehealth services generally.” This should, in theory, leave a critical opening for abortion advocates focused on education and advocacy rather than direct prescription drug sales.
But Meta told EFF that advertisers “must obtain authorization to post ads discussing medical efficacy, legality, accessibility, affordability, and scientific merits and restrict these ads to adults aged 18 or older.” Yet many of these topics—medical efficacy, legality, accessibility—are precisely what educational content and advocacy often address. Where’s the line? This vagueness makes it difficult for abortion pill advocates to understand what’s actually permitted.
What’s Allowed Under the Social Issues Policy?
Meta also told EFF that its Ads about Social Issues, Elections or Politics policy may apply to a range of abortion-related content. Under this policy, advertisers within certain countries—including the U.S.—must meet several requirements before running ads about certain “social issues.” Requirements include:
- Completing Meta’s social issues authorization process;
- Including a verified "Paid for by" disclaimer on the ad; and
- Complying with all applicable laws and regulations.
While certain news publishers are exempt from the policy, it otherwise applies to a wide range of accounts, including activists, brands, non-profit groups, and political organizations.
Meta defines “social issues” as “sensitive topics that are heavily debated, may influence the outcome of an election or result in/relate to existing or proposed legislation.” What falls under this definition differs by country, and Meta provides country-specific topics lists and examples. In the U.S. and several other countries, ads that include “discussion, debate, or advocacy for or against...abortion services and pro-choice/pro-life advocacy” qualify as social issues ads under the “Civil and Social Rights” category.
Confusingly, Meta differentiates this from ads that primarily sell a product or promote a service, which do not require authorization or disclaimers, even if the ad secondarily includes advocacy for an issue. For instance, according to Meta's examples, an ad that says, “How can we address systemic racism?” counts as a social issues ad and requires authorization and disclaimers. On the other hand, an ad that says, “We have over 100 newly-published books about systemic racism and Black History now on sale” primarily promotes a product, and would not require authorization and disclaimers. But even with Meta's examples, the line is still blurry. This vagueness invites confusion and content moderation errors.
Oddly, Meta never specifically identified its Health and Wellness ad policy to EFF, though the policy is directly relevant to abortion-related paid content. This policy addresses ads about reproductive health and family planning services, and requires ads regarding “abortion medical consultation and related services” to be targeted at users 18 and older. It also expressly states that for paid content involving “[r]eproductive health and wellness drugs or treatments that require prescription,” accounts must comply with both this policy and the Drugs and Pharmaceuticals policy.
This means abortion advocates must navigate the Drugs and Pharmaceuticals policy, the Social Issues policy, and the Health and Wellness policy—each with its own requirements and authorization processes. That Meta didn’t mention this highly relevant policy when asked about abortion advertising underscores how confusingly dispersed these rules are.
Like the Drugs policy, the Health and Wellness policy contains an important education exception for abortion advocates: The age-targeting requirements do not apply to “[e]ducational material or information about family planning services without any direct promotion or facilitation of the services.”
When Content Moderation Makes Mistakes
Meta's complex policies create fertile ground for automated moderation errors. Our Stop Censoring Abortion survey submissions revealed that Meta's systems repeatedly misidentified educational abortion content as Community Standards violations. The same over-moderation problems are also a risk in the advertising context.
On top of that, content moderation errors even on unpaid posts can trigger advertising restrictions and penalties. Meta's advertising restrictions policy states that Community Standards violations can result in restricted advertising features or complete advertising bans. This creates a compounding problem when educational content about abortion is wrongly flagged. Abortion advocates could face a double penalty: first their content is removed, then their ability to advertise is restricted.
This may be, in part, what happened to Red River Women's Clinic, a Minnesota abortion clinic we wrote about earlier in this series. When its account was incorrectly suspended for violating the “Community Standards on drugs,” the clinic appealed and eventually reached out to a contact at Meta. When Meta finally removed the incorrect flag and restored the account, Red River received a message informing them they were no longer out of compliance with the advertising restrictions policy.
Screenshot submitted by Red River Women's Clinic to EFF
How Meta Can Improve
Our review of the ad policies and survey submissions showed that there is room for improvement in how Meta handles abortion-related advertising.
First, Meta should clarify what is permitted without prior authorization under the Drugs and Pharmaceuticals policy. As noted above, the policies say advertisers do not need authorization to “educate, advocate or give public service announcements,” but Meta told EFF authorization is needed to promote posts discussing “medical efficacy, legality, accessibility, affordability, and scientific merits.” Users should be able to more easily determine what content falls on each side of that line.
Second, Meta should clarify when its Social Issues policy applies. Does discussing abortion at all trigger its application? Meta says the policy excludes posts primarily advertising a service, yet this is not what survey respondent Lynsey Bourke experienced. She runs the Instagram account Rouge Doulas, a global abortion support collective and doula training school. Rouge Doulas had a paid post removed under this very policy for advertising something that is clearly a service: its doula training program called “Rouge Abortion Doula School.” The policy’s current ambiguity makes it difficult for advocates to create compliant content with confidence.
Third, and as EFF has previously argued, Meta should ensure its automated system is not over-moderating. Meta must also provide a meaningful appeals process for when errors inevitably occur. Automated systems are blunt tools and are bound to make mistakes on complex topics like abortion. But simply using an image of a pill on an educational post shouldn’t automatically trigger takedowns. Improving automated moderation will help correct the cascading effect of incorrect Community Standards flags triggering advertising restrictions.
With clearer policies, better moderation, and a commitment to transparency, Meta can make it easier for accounts to share and boost vital reproductive health information.
This is the seventh post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Protecting Access to the Law—and Beneficial Uses of AI
As the first copyright cases concerning AI reach appeals courts, EFF wants to protect important, beneficial uses of this technology—including AI for legal research. That’s why we weighed in on the long-running case of Thomson Reuters v. ROSS Intelligence. This case raises at least two important issues: the use of (possibly) copyrighted material to train a machine learning AI system, and public access to legal texts.
ROSS Intelligence was a legal research startup that built an AI-based tool for locating judges’ written opinions based on natural language queries—a competitor to ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. To build its tool, ROSS hired another firm to read through thousands of the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. ROSS used those paraphrases to train its tool. Importantly, the ROSS tool didn’t output any West headnotes, or even the paraphrases of those headnotes—it simply directed the user to the original judges’ decisions. Still, Thomson sued ROSS for copyright infringement, arguing that using the headnotes without permission was illegal.
Early decisions in the suit were encouraging. EFF wrote about how the court allowed ROSS to bring an antitrust counterclaim against Thomson Reuters, letting it try to prove that Thomson was abusing monopoly power. And the trial judge initially ruled that ROSS’s use of the West headnotes was fair use under copyright law.
The case then took several turns for the worse. ROSS was unable to prove its antitrust claim. The trial judge issued a new opinion reversing his earlier decision and finding that ROSS’s use was not fair but rather infringed Thomson’s copyrights. And in the meantime, ROSS had gone out of business (though it continues to defend itself in court).
The court’s new decision on copyright was particularly worrisome. It ruled that West headnotes—a few lines of text copying or summarizing a single legal conclusion from a judge’s written opinion—could be copyrighted, and that using them to train the ROSS tool was not fair use, in part because ROSS was a competitor to Thomson Reuters. And the court rejected ROSS’s attempt to avoid any illegal copying by using a “clean room” procedure often used in software development. The decision also threatens to limit the public’s access to legal texts.
EFF weighed in with an amicus brief joined by the American Library Association, the Association of Research Libraries, the Internet Archive, Public Knowledge, and Public.Resource.Org. We argued that West headnotes are not copyrightable in the first place, since they simply restate individual points from judges’ opinions with no meaningful creative contributions. And even if copyright does attach to the headnotes, we argued, the source material is entirely factual statements about what the law is, and West’s contribution was minimal, so fair use should have tipped in ROSS’s favor. The trial judge had found that the factual nature of the headnotes favored ROSS, but dismissed this factor as unimportant, effectively writing it out of the law.
This case is one of the first to touch on copyright and AI, and is likely to influence many of the other cases that are already pending (with more being filed all the time). That’s why we’re trying to help the appeals court get this one right. The law should encourage the creation of AI tools to digest and identify facts for use by researchers, including facts about the law.
Towards the 10th Summit of the Americas: Concerns and Recommendations from Civil Society
This post is an adapted version of the article originally published at Silla Vacía.
Heads of state and governments of the Americas will gather this December at the Tenth Summit of the Americas in the Dominican Republic to discuss challenges and opportunities facing the region’s nations. As part of the Summit of the Americas process, which had its first meeting in 1994, the theme of this year’s summit is “Building a Secure and Sustainable Hemisphere with Shared Prosperity.”
More than twenty civil society organizations, including EFF, released a joint contribution ahead of the summit addressing the intersection between technology and human rights. Although the meeting's concept paper is silent about the role of digital technologies in the scope of this year's summit, the joint contribution stresses that the development and use of technologies is a cross-cutting issue and will likely be integrated into policies and actions agreed upon at the meeting.
Human Security, Its Core Dimensions, and Digital Technologies
The concept paper indicates that people in the Americas, like the rest of the world, are living in times of uncertainty and geopolitical, socioeconomic, and environmental challenges that require urgent actions to ensure human security in multiple dimensions. It identifies four key areas: citizen security, food security, energy security, and water security.
The potential of digital technologies cuts across these areas of concern and will very likely be considered in the measures, plans, and policies that states take up in the context of the summit, both at the national level and through regional cooperation. Yet harnessing the potential of emerging technologies also surfaces their challenges. For example, AI algorithms can help predict demand peaks and manage energy flows in real time on power grids, but the infrastructure required for the growing and massive operation of AI systems itself poses challenges to energy security.
In Latin America, the imperative of safeguarding rights in the face of already documented risks and harmful impacts stands out particularly in citizen security. The abuse of surveillance powers, enhanced by digital technologies, is a recurring and widespread problem in the region.
It is intertwined with a deep-rooted culture of secrecy and permissiveness that obstructs robust privacy safeguards, effective independent oversight, and adequate remedies for violations. The proposal in the concept paper for creating a Hemispheric Platform of Action for Citizen and Community Security cannot ignore—and above all, must not reinforce—these problems.
It is crucial that the notion of security embedded in the Tenth Summit’s focus on human security be based on human development, the protection of rights, and the promotion of social well-being, especially for groups that have historically faced discrimination. It is also essential that it move away from securitization and militarization, which have been used for social control, silencing dissent, harassing human rights defenders and community leaders, and restricting the rights and guarantees of migrants and people in situations of mobility.
Toward Regional Commitments Anchored in Human Rights
In light of these concerns, the joint contribution signed by EFF, Derechos Digitales, Wikimedia Foundation, CELE, ARTICLE 19 – Office for Mexico and Central America, among other civil society organizations, addresses the following:
- The importance of strengthening the digital civic space, which requires robust digital infrastructure and policies for connectivity and digital inclusion, as well as civic participation and transparency in the formulation of public policies.
- Challenges posed by the growing surveillance capabilities of states in the region through the increasing adoption of ever more intrusive technologies and practices without necessary safeguards.
- State obligations established under the Inter-American Human Rights System and key standards affirmed by the Inter-American Court in the case of Members of the Jose Alvear Restrepo Lawyers Collective (CAJAR) v. Colombia.
- A perspective on state digitalization and innovation centered on human rights, based on thorough analysis of current problems and gaps and their detrimental impacts on people. The insufficiency or absence of meaningful mechanisms for public participation, transparency, and evaluation are striking features of various experiences across countries in the Americas.
Finally, the contribution makes recommendations for regional cooperation, promoting shared solutions and joint efforts at the regional level anchored in human rights, justice, and inclusion.
We hope the joint contribution reinforces a human rights-based perspective across the debates and agreements at the summit. At a time when digital technologies facilitate widespread security-related abuses, regional cooperation towards shared prosperity must take these risks into account and put justice and people’s well-being at the center of any unfolding initiatives.
EFF Urges Virginia Court of Appeals to Require Search Warrants to Access ALPR Databases
This post was co-authored by EFF legal intern Olivia Miller.
For most Americans, driving is a part of everyday life. Practically speaking, many of us drive to work, school, play, and anywhere in between. Not only do we visit places that give insights into our personal lives, but we sometimes use our vehicles to display our political beliefs, socioeconomic status, and other intimate details.
All of this personal activity can be tracked and identified through Automatic License Plate Reader (ALPR) data—a popular surveillance tool used by law enforcement agencies across the country. That’s why, in an amicus brief filed with the Virginia Court of Appeals, EFF, the ACLU of Virginia, and NACDL urged the court to require police to seek a warrant before searching ALPR data.
In Commonwealth v. Church, a police officer in Norfolk, Virginia searched license plate data without a warrant—not to prove that defendant Ronnie Church was at the scene of the crime, but merely to try to show he had a “guilty mind.” The lower court, in a one-page ruling relying on Commonwealth v. Bell, held this warrantless search violated the Fourth Amendment and suppressed the ALPR evidence. We argued the appellate court should uphold this decision.
Like the cellphone location data the Supreme Court protected in Carpenter v. United States, ALPR data threatens people’s privacy because it is collected indiscriminately over time and can provide police with a detailed picture of a person’s movements. ALPR data includes photos of license plates, vehicle make and model, any distinctive features of the vehicle, and precise time and location information. Once an ALPR logs a car’s data, the information is uploaded to the cloud and made accessible to law enforcement agencies at the local, state, and federal level—creating a near real-time tracking tool that can follow individuals across vast distances.
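To make concrete how pooled plate reads become a tracking tool, consider a minimal, hypothetical sketch. The data layout below is invented for illustration and reflects no actual vendor’s schema, but it shows why scale is the danger: once reads from thousands of cameras land in one shared database, reconstructing any driver’s movements is a single filter-and-sort query.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical illustration only: field names are invented and do not
# reflect any particular ALPR vendor's actual schema.
@dataclass
class PlateRead:
    plate: str            # license plate as read by the camera
    camera_location: str  # where the camera is mounted
    seen_at: datetime     # when the car passed the camera

def movement_history(reads: list[PlateRead], plate: str) -> list[PlateRead]:
    # With reads from many cameras pooled in one database, a person's
    # travel pattern falls out of a trivial filter-and-sort: no warrant,
    # no individualized suspicion, just a query.
    return sorted(
        (r for r in reads if r.plate == plate),
        key=lambda r: r.seen_at,
    )
```

Nothing about this query is sophisticated; the privacy harm comes entirely from the indiscriminate collection that feeds it.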
Think police only use ALPRs to track suspected criminals? Think again. ALPRs are ubiquitous; every car traveling into a camera’s view generates a detailed dataset, regardless of any suspected criminal activity. In fact, a survey of 173 law enforcement agencies employing ALPRs nationwide revealed that 99.5% of scans belonged to people who had no association with crime.
Norfolk, Virginia, is home to over 170 ALPR cameras operated by Flock, a surveillance company that maintains over 83,000 ALPRs nationwide. The resulting surveillance network is so large that Norfolk’s police chief suggested “it would be difficult to drive any distance and not be recorded by one.”
Recent and near-horizon advancements in Flock’s products will continue to threaten our privacy and further the surveillance state. For example, Flock’s ALPR data has been used for immigration raids, to track individuals seeking abortion-related care, to conduct fishing expeditions, and to identify relationships between people who may be traveling together but in different cars. With the help of artificial intelligence, ALPR databases could be aggregated with other information from data breaches and data brokers, to create “people lookup tools.” Even public safety advocates and law enforcement, like the International Association of Chiefs of Police, have warned that ALPR tech creates a risk “that individuals will become more cautious in their exercise of their protected rights of expression, protest, association, political participation because they consider themselves under constant surveillance.”
This is why a warrant requirement for ALPR data is so important. As the Virginia trial court previously found in Bell, prolonged tracking of public movements with surveillance invades people’s reasonable expectation of privacy in the entirety of their movements. Recent Fourth Amendment jurisprudence, including Carpenter and Leaders of a Beautiful Struggle from the federal Fourth Circuit Court of Appeals, favors a warrant requirement as well. Like the technologies at issue in those cases, ALPRs give police the ability to chronicle movements in a “detailed, encyclopedic” record, akin to “attaching an ankle monitor to every person in the city.”
The Virginia Court of Appeals has a chance to draw a clear line on warrantless ALPR surveillance, and to tell Norfolk PD what the Fourth Amendment already says: come back with a warrant.
Chat Control Is Back on the Menu in the EU. It Still Must Be Stopped
The European Union Council is once again debating its controversial message-scanning proposal, aka “Chat Control,” which would lead to the scanning of the private conversations of billions of people.
Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.
Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which checks content on a device before it’s sent. In practice, Chat Control is chat surveillance: it works by indiscriminately monitoring everything on a device. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.
This is absurd.
We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
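To see why “scanning before sending” and end-to-end encryption cannot coexist, here is a deliberately simplified, hypothetical sketch of where client-side scanning sits in the send path. It is not any vendor’s or government’s actual design, and real proposals contemplate perceptual hashes or machine-learning classifiers rather than a plain hash set, but the structure is what matters:

```python
import hashlib

# Hypothetical sketch: an opaque list of content hashes supplied by a
# third party; the user cannot inspect or audit what is on it.
OPAQUE_BLOCKLIST: set[str] = set()

def send_message(plaintext: bytes, encrypt, transmit, report) -> None:
    # The check happens on the device, against the opaque list, while
    # the message is still plaintext.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in OPAQUE_BLOCKLIST:
        # The "end" of the end-to-end channel now answers to someone
        # other than the user: a match is reported before sending.
        report(digest)
    # Encryption happens only after the scan, so the mathematical
    # strength of the cipher is beside the point.
    transmit(encrypt(plaintext))
```

The encryption in this sketch can be flawless and the promise is still broken: whoever controls the blocklist and the report channel can see into the conversation.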
If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.
This doesn’t just affect people in the EU; it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian governments around the world to follow suit with their own demands for access to encrypted communication apps.
Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”
Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.
We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.
Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.
After Years Behind Bars, Alaa Is Free at Last
Alaa Abd El Fattah is finally free and at home with his family. On September 22, it was announced that Egyptian President Abdel Fattah al-Sisi had issued a pardon for Alaa’s release after six years in prison. One day later, the BBC shared video of Alaa dancing with his family in their Cairo home and hugging his mother Laila and sister Sanaa, as well as other visitors.
Alaa's sister, Mona Seif, posted on X: "An exceptionally kind day. Alaa is free."
Alaa has spent most of the last decade behind bars, punished for little more than his words. In June 2014, Egypt accused him of violating its protest law and attacking a police officer. He was convicted in absentia and sentenced to fifteen years in prison, after being prohibited from entering the courthouse. Following an appeal, Alaa was granted a retrial and was sentenced in February 2015 to five years in prison. In 2019, he was finally released, first into police custody and then to his family. As part of his parole, he was told he would have to spend every night of the next five years at a police station, but six months later—on September 29, 2019—Alaa was re-arrested in a massive sweep of activists and charged with spreading false news and belonging to a terrorist organization after sharing a Facebook post about torture in Egypt.
Despite that sentence effectively ending on September 29, 2024, one year ago today, Egyptian authorities continued his detention, stating that he would be released in January 2027—violating both international legal norms and Egypt’s own domestic law. As Amnesty International reported, Alaa faced inhumane conditions during his imprisonment, “including denial of access to lawyers, consular visits, fresh air, and sunlight,” and his family repeatedly spoke of concerns about his health, particularly during periods in which he engaged in hunger strike.
When Egyptian authorities failed to release Alaa last year, his mother, Laila Soueif, launched a hunger strike. Her action stretched to an astonishing 287 days, during which she was hospitalized twice in London and nearly lost her life. She continued until July of this year, when she finally ended the strike following direct commitments from UK officials that Alaa would be freed.
Throughout this time, a broad coalition, including EFF, rallied around Alaa: international human rights organizations, senior UK parliamentarians, former British Ambassador John Casson, and fellow former political prisoner Nazanin Zaghari-Ratcliffe all lent their voices. Celebrities joined the call, while the UN Working Group on Arbitrary Detention declared his imprisonment unlawful and demanded his release. This groundswell of solidarity was decisive in securing his release.
Alaa’s release is an extraordinary relief for his family and all who have campaigned on his behalf. EFF wholeheartedly celebrates Alaa’s freedom and reunification with his family.
But we must remain vigilant. Alaa must be allowed to travel to the UK to be reunited with his son Khaled, who currently lives with his mother and attends school there. Furthermore, we continue to press for the release of those who remain imprisoned for nothing more than exercising their right to speak.