EFF: Updates
Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards
On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”
But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.
Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Mangione. TikTok, for example, reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione’s astrological sign or a video montage of him set to music, was deleted from Threads, according to users.
The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range style silhouette. As Harr put it in his now-removed video, the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, 'Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?’”
A design for the Most Wanted CEO playing cards
Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition).
As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people's livelihoods.
Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, that used the hashtag #KillMoreCEOs—one of many examples of how moderation affects some people and not others.)
TikTok told Harr that he had violated the platform’s community guidelines but provided no additional information. The platform has a policy against "promoting (including any praise, celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity to appeal, and continued to remove additional accounts that Harr created solely to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines.
On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no information about why. Shopify informed Harr that his store was selling “offensive content,” and his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”
Harr’s situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we’ve long been critical of their over-moderation, uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created the Santa Clara Principles on Transparency and Accountability in Content Moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances aren’t even doing the basics—like offering clear notices and opportunities for appeal.
Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that these corporations—the platforms included—have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.
Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton
The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.
The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.
The plaintiff in this case is the Free Speech Coalition, the nonprofit non-partisan trade association for the adult industry, and the defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, on social media generally. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.
1. Adult Content Is Protected Speech, and It Violates the First Amendment for a State to Require Age Verification to Access It.
Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why:
While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof of age in physical spaces, there are practical differences that make those in-person disclosures less burdensome, or even nonexistent, compared to online requirements. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might be seventeen or under.
First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.
Second, while there are a variety of methods for verifying age online, the Texas law generally forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. The law doesn't set out a specific method for websites to verify ages, but submitting personal information such as a government ID is the most common method of online age verification today. Yet fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.
Less accurate methods, such as “age estimation,” which are usually based solely on an image or video of the user’s face, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway.
Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.
Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.
2. HB1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare.
Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service—either of which could be a fly-by-night company with no published privacy standards—will handle that data responsibly. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.
There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification company AU10TIX encountered a breach, and there’s no reason to suspect this issue won’t grow if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.
The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB1181 creates a perfect storm for data privacy.
3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.
More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography.
It’s also not just adult content that’s at risk. A ruling from the Court on HB1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws like the Kids Online Safety Act, which would force users to verify their ages before accessing social media.
4. The Supreme Court Has Rightly Struck Down Similar Laws Before.
In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case, the Court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.
The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web.
Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear.
5. There Is No Safe, Privacy-Protecting Age-Verification Technology.
The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of “more safe” and “less safe,” or “more accurate” and “less accurate.” Rather, they each fall on a spectrum from dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act.
The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law
Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach into every U.S. adult household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights.
For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review.
Meta’s New Content Policy Will Harm Vulnerable Users. If It Really Valued Free Speech, It Would Make These Changes
Earlier this week, when Meta announced changes to their content moderation processes, we were hopeful that some of those changes—which we will address in more detail in this post—would enable greater freedom of expression on the company’s platforms, something for which we have advocated for many years. While Meta’s initial announcement primarily addressed changes to its misinformation policies and included rolling back over-enforcement and automated tools that we have long criticized, we expressed hope that “Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ+ speech, political dissidence, and sex work.”
However, shortly after our initial statement was published, we became aware that rather than addressing those historically over-moderated subjects, Meta was taking the opposite tack and—as reported by the Independent—was making targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups.
It was our mistake to formulate our responses and expectations on what is essentially a marketing video for upcoming policy changes before any of those changes were reflected in their documentation. We prefer to focus on the actual impacts of online censorship felt by people, which tends to be further removed from the stated policies outlined in community guidelines and terms of service documents. Facebook has a clear and disturbing track record of silencing and further marginalizing already oppressed peoples, and then being less than forthright about their content moderation policy. These first changes to actually surface in Facebook's community standards document seem to be in the same vein.
Specifically, Meta’s hateful conduct policy now contains the following:
- People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech.
But the implementation of this policy shows that it is aimed at allowing more hateful speech against specific groups, with a particular focus on enabling more speech challenging the legitimacy of LGBTQ+ rights. For example:
- While allegations of mental illness against people based on their protected characteristics remain a tier 2 violation, the revised policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism [sic] and homosexuality.”
- The revised policy now specifies that Meta allows speech advocating gender-based and sexual-orientation-based exclusion from military, law enforcement, and teaching jobs, and from sports leagues and bathrooms.
- The revised policy also removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics.
These changes reveal that Meta seems less interested in freedom of expression as a principle and more focused on appeasing the incoming U.S. administration, a concern we mentioned in our initial statement with respect to the announced move of the content policy team from California to Texas to address “appearances of bias.” Meta said it would be making some changes to reflect that these topics are “the subject of frequent political discourse and debate” and can be said “on TV or the floor of Congress.” But if that is truly Meta’s new standard, we are struck by how selectively it is being rolled out, and particularly allowing more anti-LGBTQ+ speech.
We continue to stand firmly against hateful anti-trans content remaining on Meta’s platforms, and strongly condemn any policy change directly aimed at enabling hate toward vulnerable communities—both in the U.S. and internationally.
Real and Sincere Reforms to Content Moderation Can Both Promote Freedom of Expression and Protect Marginalized Users
In its initial announcement, Meta also said it would change how policies are enforced to reduce mistakes, stop reliance on automated systems to flag every piece of content, and add staff to review appeals. We believe that, in theory, these are positive measures that should result in less censorship of expression for which Meta has long been criticized by the global digital rights community, as well as by artists, sex worker advocacy groups, LGBTQ+ advocates, Palestine advocates, and political groups, among others.
But we are aware that these problems, at a corporation with a history of biased and harmful moderation like Meta, need a careful, well-thought-out, and sincere fix that will not undermine broader freedom of expression goals.
For more than a decade, EFF has been critical of the impact that content moderation at scale—and automated content moderation in particular—has on various groups. If Meta is truly interested in promoting freedom of expression across its platforms, we renew our calls to prioritize the following much-needed improvements instead of allowing more hateful speech.
Meta Must Invest in Its Global User Base and Cover More Languages
Meta has long failed to invest in providing cultural and linguistic competence in its moderation practices, often leading to inaccurate removal of content as well as a greater reliance on (faulty) automation tools. This has been apparent to us for a long time. In the wake of the 2011 Arab uprisings, we documented our concerns with Facebook’s reporting processes and their effect on activists in the Middle East and North Africa. More recently, the need for cultural competence in the industry generally was emphasized in the revised Santa Clara Principles.
Over the years, Meta’s global shortcomings became even more apparent as its platforms were used to promote hate and extremism in a number of locales. One key example is the platform’s failure to moderate anti-Rohingya sentiment in Myanmar—the direct result of having far too few Burmese-speaking moderators (in 2015, as extreme violence and violent sentiment toward the Rohingya were well underway, there were just two such moderators).
If Meta is indeed going to roll back its use of automation to flag and action most content, and to ensure that appeals systems work effectively—which would solve some of these problems—it must also invest globally in qualified content moderation personnel to make sure that content from countries outside of the United States and in languages other than English is fairly moderated.
Reliance on Automation to Flag Extremist Content Allows for Flawed Moderation
We have long been critical of Meta’s over-enforcement of terrorist and extremist speech, specifically of the impact it has on human rights content. Part of the problem is Meta’s over-reliance on automated moderation to flag extremist content. A 2020 document reviewing moderation across the Middle East and North Africa claimed that algorithms used to detect terrorist content in Arabic incorrectly flag posts 77 percent of the time.
More recently, we have seen this with Meta’s automated moderation to remove the phrase “from the river to the sea.” As we argued in a submission to the Oversight Board—with which the Board also agreed—moderation decisions must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.
Another example of this problem that has overlapped with Meta’s shortcomings with respect to linguistic competence is in relation to the term “shaheed,” which translates most closely to “martyr” and is used by Arabic speakers and many non-Arabic-speaking Muslims elsewhere in the world to refer primarily (though not exclusively) to individuals who have died in the pursuit of ideological causes. As we argued in our joint submission with ECNL to the Meta Oversight Board, use of the term is context-dependent, but Meta has used automated moderation to indiscriminately remove instances of the word. In their policy advisory opinion, the Oversight Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”
Marginalized communities that experience persecution offline often face disproportionate censorship online. It is imperative that Meta recognize the responsibilities it has to its global user base in upholding free expression, particularly of communities that may otherwise face censorship in their home countries.
Sexually-Themed Content Remains Subject to Discriminatory Over-censorship
Our critique of Meta’s removal of sexually-themed content goes back more than a decade. The company’s policies on adult sexual activity and nudity affect a wide range of people and communities, but most acutely impact LGBTQ+ individuals and sex workers. Typically aimed at keeping sites “family friendly” or “protecting the children,” these policies are unevenly enforced, often classifying LGBTQ+ content as “adult” or “harmful” when similar heterosexual content isn’t. These policies were often written and enforced discriminatorily and at the expense of gender-fluid and nonbinary speakers—we joined in the We the Nipple campaign aimed at remedying this discrimination.
Most nude content is legal, and engaging with such material online provides individuals with a safe and open framework to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With Meta intervening to become the arbiter of how people create and engage with nudity and sexuality—both offline and in the digital space—a crucial form of engagement for all kinds of users has been removed, and the voices of people with less power have regularly been shut down.
Over-removal of Abortion Content Stifles User Access to Essential Information
The removal of abortion-related posts on Meta platforms containing the word “kill” has failed to meet the criteria for restricting users’ right to freedom of expression. Meta has regularly over-removed abortion-related content, hamstringing its users’ ability to voice their political beliefs. The use of automated tools for content moderation leads to the biased removal of this language, as well as essential information. In 2022, Vice reported that a Facebook post stating "abortion pills can be mailed" was flagged within seconds of being posted.
At a time when bills are being tabled across the U.S. to restrict the exchange of abortion-related information online, reproductive justice and safe access to abortion—like so many other aspects of managing our healthcare—are fundamentally tied to our digital lives. And with corporations deciding what content is hosted online, the impact of this removal is exacerbated.
What was once benign data online is now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. Meta must adhere to its responsibility to respect international human rights law, and ensure that any abortion-related content removal be both necessary and proportionate.
Meta’s symbolic move of its content team from California to Texas, a state that is aiming to make the distribution of abortion information illegal, also raises serious concerns that Meta will backslide on this issue—in line with local Texan state law banning abortion—rather than make improvements.
Meta Must Do Better to Provide Users With Transparency
EFF has been critical of Facebook’s lack of transparency for a long time. When it comes to content moderation, the company’s transparency reports lack many of the basics: How many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content in which categories get removed, but does not tell us why or how these decisions are made.
Meta makes billions from its own exploitation of our data, too often choosing their profits over our privacy—opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of the corporation’s harms—that its core business model depends on collecting as much information about users as possible, then using that data to target ads, as well as target competitors.
That’s why EFF, with others, launched the Santa Clara Principles on how corporations like Meta can best obtain meaningful transparency and accountability around the increasingly aggressive moderation of user-generated content. And as platforms like Facebook, Instagram, and X continue to occupy an even bigger role in arbitrating our speech and controlling our data, there is an increased urgency to ensure that their reach is not only stifled, but reduced.
Flawed Approach to Moderating Misinformation with Censorship
Misinformation has thrived on social media platforms, including Meta’s. As we said in our initial statement, and have written before, Meta and other platforms should use the variety of fact-checking and verification tools available to them, including both community notes and professional fact-checkers, and should have robust systems in place to check any flagging that results.
Meta and other platforms should also employ media literacy tools, such as encouraging users to read articles before sharing them, and provide resources to help their users assess the reliability of information on the site. We have also called for Meta and others to stop privileging government officials by providing them with greater opportunities to lie than other users.
While we expressed some hope on Tuesday, the cynicism expressed by others seems warranted now. Over the years, EFF and many others have worked to push Meta to make improvements. We've had some success with its "Real Names" policy, for example, which disproportionately affected the LGBTQ community and political dissidents. We also fought for, and won improvements on, Meta's policy on allowing images of breastfeeding, rather than marking them as "sexual content." If Meta truly values freedom of expression, we urge it to redirect its focus to empowering historically marginalized speakers, rather than empowering only their detractors.
EFF Statement on Meta's Announcement of Revisions to Its Content Moderation Processes
In general, EFF supports moves that bring more freedom of expression and transparency to platforms—regardless of their political motivation. We’re encouraged by Meta's recognition that automated flagging and responses to flagged content have caused all sorts of mistakes in moderation. Just this week, it was reported that some of those "mistakes" were heavily censoring LGBTQ+ content. We sincerely hope that the lightened restrictions announced by Meta will apply uniformly, and not just to hot-button U.S. political topics.
Censorship, broadly, is not the answer to misinformation. We encourage social media companies to employ a variety of non-censorship tools to address problematic speech on their platforms, and fact-checking can be one of those tools. Community notes, essentially crowd-sourced fact-checking, can be a very valuable tool for addressing misinformation and potentially give greater control to users. But fact-checking by professional organizations with ready access to subject-matter expertise can be another. This has proved especially true in international contexts, where professional fact-checkers have been instrumental in refuting, for example, genocide denial.
So, even if Meta is changing how it uses and prioritizes fact-checking entities, we hope that Meta will continue to look to fact-checking entities as an available tool. Meta does not have to, and should not, choose one system to the exclusion of the other.
Importantly, misinformation is only one of many content moderation challenges facing Meta and other social media companies. We hope Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ speech, political dissidence, and sex work.
Meta’s decision to move its content teams from California to “help reduce the concern that biased employees are overly censoring content” seems more political than practical. There is of course no population that is inherently free from bias and by moving to Texas, the “concern” will likely not be reduced, but just relocated from perceived “California bias” to perceived “Texas bias.”
Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable political speech. On the other hand, Meta's previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn't illegal in the United States. We applaud Meta’s efforts to try to fix its over-censorship problem but will watch closely to make sure it is a good-faith effort and rolled out fairly and not merely a political maneuver to accommodate the upcoming U.S. administration change.
Sixth Circuit Rules Against Net Neutrality; EFF Will Continue to Fight
Last week, the Sixth U.S. Circuit Court of Appeals ruled against the FCC, rejecting its authority to classify broadband as a Title II “telecommunications service.” In doing so, the court removed net neutrality protections for all Americans and took away the FCC’s ability to meaningfully regulate internet service providers.
This ruling fundamentally gets wrong the reality of internet service we all live with every day. Nearly 80% of Americans view broadband access to be as important as water and electricity. It is no longer an extra, non-necessary “information service,” as it was seen 40 years ago, but a vital medium of communication in everyday life. Business, health services, education, entertainment, our social lives, and more have increasingly moved online. By ruling that broadband is an “information service” and not a “telecommunications service,” this court is saying that the ISPs that control your broadband access will continue to face little to no oversight for their actions.
This is intolerable.
Net neutrality is the principle that ISPs must treat all data that travels over their networks equally, without improper discrimination in favor of particular apps, sites, or services. At its core, net neutrality is a principle of equity and a protector of innovation—ensuring that, at least online, large monopolistic ISPs don’t get to determine winners and losers. Net neutrality ensures that users determine their online experience, not ISPs. As such, it is fundamental to user choice, access to information, and free expression online.
By removing protections against actions like blocking, throttling, and paid prioritization, the court gives those willing and able to pay ISPs an advantage over those who are not. It privileges large legacy corporations that have partnerships with the big ISPs, and it means that newer, smaller, or niche services will have trouble competing, even if they offer a superior service. It means that ISPs can throttle your service—or that of, say, a fire department fighting the largest wildfire in state history. They can block a service they don’t like. In addition to charging you for access to the internet, they can charge services and websites for access to you, artificially driving up costs. And where most Americans have little choice in home broadband providers, it means these ISPs will be able to exercise their monopoly power not just on the price you pay for access, but how you access and engage with information as well.
Moving forward, now more than ever it becomes important for individual states to pass their own net neutrality laws, or defend the ones they have on the books. California passed a gold standard net neutrality law in 2018 that has survived judicial scrutiny. It is up to us to ensure it remains in place.
Congress can also end this endless whiplash of reclassification and decide, once and for all, by passing a law classifying broadband internet services firmly under Title II. Such proposals have been introduced before; they ought to be introduced again.
This is a bad ruling for Team Internet, but we are resilient. EFF—standing with users, innovators, creators, public interest advocates, librarians, educators, and everyone else who relies on the open internet—will continue to champion the principles of net neutrality and work toward an equitable and open internet for all.
Last Call: The Combined Federal Campaign Pledge Period Closes on January 15!
The pledge period for the Combined Federal Campaign (CFC) closes on Wednesday, January 15! If you're a U.S. federal employee or retiree, now is the time to make your pledge and support EFF’s work to protect your rights online.
If you haven’t before, giving to EFF through the CFC is quick and easy! Just head on over to GiveCFC.org and click “DONATE.” Then you can search for EFF using our CFC ID 10437 and make a pledge via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can also choose to increase your support there!
The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year members of this community raised nearly $34,000 to support EFF’s initiatives advocating for privacy and free expression online. That support has helped us:
- Fight for the public's right to access police drone footage
- Encourage the Fifth Circuit Court of Appeals to rule that location-based geofence warrants are unconstitutional
- Push back against countless censorship laws, including the Kids Online Safety Act
- Continue to see more of the web encrypted thanks to Certbot and Let's Encrypt
Federal employees and retirees have a tremendous impact on our democracy and the future of civil liberties and human rights online. By making a pledge through the CFC, you can shape a future where your privacy and free speech rights are protected. Make your pledge today using EFF’s CFC ID 10437!
EFF Goes to Court to Uncover Police Surveillance Tech in California
Which surveillance technologies are California police using? Are they buying access to your location data? If so, how much are they paying? These are basic questions the Electronic Frontier Foundation is trying to answer in a new lawsuit called Pen-Link v. County of San Joaquin Sheriff’s Office.
EFF filed a motion in California Superior Court to join—or intervene in—an existing lawsuit to get access to documents we requested. The private company Pen-Link sued the San Joaquin Sheriff’s Office to block the agency from disclosing to EFF the unredacted contracts between them, claiming the information is a trade secret. We are going to court to make sure the public gets access to these records.
The public has a right to know the technology that law enforcement buys with taxpayer money. This information is not a trade secret, despite what private companies try to claim.
How did this case start?
As part of EFF’s transparency mission, we sent public records requests to California law enforcement agencies—including the San Joaquin Sheriff’s Office—seeking information about law enforcement’s use of technology sold by two companies: Pen-Link and its subsidiary, Cobwebs Technologies.
The Sheriff’s Office gave us 40 pages of redacted documents. But at the request of Pen-Link, the Sheriff’s Office redacted the descriptions and prices of the products, services, and subscriptions offered by Pen-Link and Cobwebs.
Pen-Link then filed a lawsuit to permanently block the Sheriff’s Office from making the information public, claiming its prices and descriptions are trade secrets. Among other things, Pen-Link requires its law enforcement customers to sign non-disclosure agreements to not reveal use of the technology without the company’s consent. In addition to thwarting transparency, this raises serious questions about defendants’ rights to obtain discovery in criminal cases.
“Customer and End Users are prohibited from disclosing use of the Deliverables, names of Cobwebs' tools and technologies, the existence of this agreement or the relationship between Customers and End Users and Cobwebs to any third party, without the prior written consent of Cobwebs,” according to Cobwebs’ Terms.
Unfortunately, these kinds of terms are not new.
EFF is entering the lawsuit to make sure the records get released to the public. Pen-Link’s lawsuit is known as a “reverse” public records lawsuit because it seeks to block, rather than grant, access to public records. It is a rare tool traditionally used only to protect a person’s constitutional right to privacy—not a business’ purported trade secrets. In addition to defending against the “reverse” public records lawsuit, we are asking the court to require the Sheriff’s Office to give us the un-redacted records.
Who Are Pen-Link and Cobwebs Technologies?
Pen-Link and its subsidiary Cobwebs Technologies are private companies that sell products and services to law enforcement. Pen-Link has been around for years and may be best known as a company that helps law enforcement execute wiretaps after a court grants approval. In 2023, Pen-Link acquired Cobwebs Technologies.
The redacted documents indicate that San Joaquin County was interested in Cobwebs’ “Web Intelligence Investigation Platform.” In other cases, this platform has included separate products like WebLoc, Tangles, or a “face processing subscription.” WebLoc is a platform that provides law enforcement with a vast amount of location data sourced from large data sets. Tangles uses AI to glean intelligence from the “open, deep and dark web.” Journalists at multiple news outlets have chronicled this technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. The company has also provided proxy social media accounts for undercover investigations, which led Meta to name it a surveillance-for-hire company and to delete hundreds of accounts associated with the platform. Cobwebs has had multiple high-value contracts with federal agencies like Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS) and state entities, like the Texas Department of Public Safety and the West Virginia Fusion Center. EFF classifies this type of product as a “Third Party Investigative Platform,” a category that we began documenting in the Atlas of Surveillance project earlier this year.
What’s next?
Before EFF officially joins the case, the court must grant our motion; then we can file our petition and brief the case. A favorable ruling would grant the public access to these documents and show law enforcement contractors that they can’t hide their surveillance tech behind claims of trade secrets.
For communities to have informed conversations and make reasonable decisions about powerful surveillance tools being used by their governments, our right to information under public records laws must be honored. The costs and descriptions of government purchases are common data points, regularly subject to disclosure under public records laws.
Allowing Pen-Link to keep this information secret would dangerously diminish the public’s right to government transparency and help facilitate surveillance of U.S. residents. In the past, our public records work has exposed similar surveillance technology. In 2022, EFF produced a large exposé on Fog Data Science, the secretive company selling mass surveillance to local police.
The case number is STK-CV-UWM-0016425. Read more here:
EFF's Motion to Intervene
EFF's Points and Authorities
Trujillo Declaration & EFF's Cross-Petition
Pen-Link's Original Complaint
Redacted documents produced by County of San Joaquin Sheriff’s Office
Online Behavioral Ads Fuel the Surveillance Industry—Here’s How
A global spy tool exposed the locations of billions of people to anyone willing to pay. A Catholic group bought location data about gay dating app users in an effort to out gay priests. A location data broker sold lists of people who attended political protests.
What do these privacy violations have in common? They share a source of data that’s shockingly pervasive and unregulated: the technology powering nearly every ad you see online.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of.
What is Real-Time Bidding?
RTB is the process used to select the targeted ads shown to you on nearly every website and app you visit. The ads you see are the winners of milliseconds-long auctions that expose your personal information to thousands of companies a day. Here’s how it works:
- The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company.
- The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people. (A simplified bid request is sketched just after this list.)
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on ad space.
- Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space.
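To make this concrete, below is a minimal sketch of what a bid request can look like. It loosely follows the ad industry's OpenRTB conventions, but every value is invented for illustration, and real requests typically carry far more fields:

```python
import json

# A simplified bid request, loosely following OpenRTB 2.x conventions.
# Every value is invented for illustration; real requests carry many more
# fields. Note how little describes the ad slot and how much describes you.
bid_request = {
    "id": "auction-8f3a91",                  # unique ID for this one auction
    "imp": [{                                # the ad slot ("impression") for sale
        "id": "1",
        "banner": {"w": 320, "h": 50},
        "bidfloor": 0.10,                    # minimum acceptable bid, in CPM
    }],
    "app": {"bundle": "com.example.weather"},   # the app you're using (hypothetical)
    "device": {
        "ifa": "4d2f0a1c-55aa-4bd3-9c60-0123456789ab",  # your advertising ID
        "ip": "203.0.113.7",                 # your IP address
        "os": "Android",
        "model": "Pixel 8",
        "geo": {"lat": 37.77, "lon": -122.41, "type": 1},  # type 1 = GPS-derived
    },
    "user": {
        "id": "broker-profile-123",          # an ID used to link you across auctions
        "data": [{                           # audience segments from a data broker
            "segment": [{"name": "expecting-parents"}],
        }],
    },
}

# The ad auction company broadcasts this JSON to thousands of bidders at once.
print(json.dumps(bid_request, indent=2))
```

The asymmetry is the point: most of the request describes you rather than the ad slot, which is exactly what makes bidstream data valuable to brokers.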
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive the data. Indeed, anyone posing as an ad buyer can access a stream of sensitive data about the billions of individuals using websites or apps with targeted ads. That’s a big way that RTB puts personal data into the hands of data brokers, who sell it to basically anyone willing to pay. Although some ad auction companies have policies against selling bidstream data, the practice remains widespread.
RTB doesn’t just allow companies to harvest your data—it also incentivizes it. Bid requests containing more personal data attract higher bids, so websites and apps are financially motivated to harvest as much of your data as possible. RTB further incentivizes data brokers to track your online activity because advertisers purchase data from data brokers to inform their bidding decisions.
Data brokers don’t need any direct relationship with the apps and websites they’re collecting bidstream data from. While some data collection methods require web or app developers to install code from a data broker, RTB is facilitated by ad companies that are already plugged into most websites and apps. This allows data brokers to collect data at a staggering scale. Hundreds of billions of RTB bid requests are broadcast every day. For each of those bids, thousands of real or fake ad buying platforms may receive data. As a result, entire businesses have emerged to harvest and sell data from online advertising auctions.
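To see why posing as an ad buyer is enough, here is a minimal sketch of a "bidder" that never actually bids. It is illustrative only: it assumes an exchange that delivers OpenRTB-style JSON over HTTP POST, and every name and field in it is hypothetical. Declining every auction with an empty HTTP 204 response, the conventional no-bid signal, still leaves the harvester holding the data:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoBidHarvester(BaseHTTPRequestHandler):
    """Illustrative sketch: a 'bidder' that records every bid request it
    receives and never buys an ad. Exchanges typically deliver OpenRTB-style
    JSON by HTTP POST; an empty 204 response is the standard way to decline."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            request = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            request = {}
        device = request.get("device", {})
        # Keep just the identifying fields: advertising ID, IP, and location.
        record = {
            "ifa": device.get("ifa"),
            "ip": device.get("ip"),
            "geo": device.get("geo"),
        }
        with open("bidstream.log", "a") as log:
            log.write(json.dumps(record) + "\n")
        self.send_response(204)  # decline to bid -- but keep the data anyway
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), NoBidHarvester).serve_forever()
```

Nothing in this sketch requires any relationship with the websites or apps involved; being accepted as a buyer by an ad exchange is the only barrier to entry.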
First FTC Action Against Abuse of Real-Time Bidding Data
A recent enforcement action by the Federal Trade Commission (FTC) shows that the dangers of RTB are not hypothetical—data brokers actively rely on RTB to collect and sell sensitive information. The FTC found that data broker Mobilewalla was collecting personal data—including precise location information—from RTB auctions without placing ads.
Mobilewalla collected data on over a billion people, with an estimated 60% sourced directly from RTB auctions. The company then sold this data for a range of invasive purposes, including tracking union organizers, tracking people at Black Lives Matter protests, and compiling home addresses of healthcare employees for recruitment by competing employers. It also categorized people into custom groups for advertisers, such as “pregnant women,” “Hispanic churchgoers,” and “members of the LGBTQ+ community.”
The FTC concluded that Mobilewalla's practice of collecting personal data from RTB auctions where they didn’t place ads violated the FTC Act’s prohibition of unfair conduct. The FTC’s proposed settlement order bans Mobilewalla from collecting consumer data from RTB auctions for any purposes other than participating in those auctions. This action marks the first time the FTC has targeted the abuse of bidstream data. While we celebrate this significant milestone, the dangers of RTB go far beyond one data broker.
Real-Time Bidding Enables Mass Surveillance
RTB is regularly exploited for government surveillance. As early as 2017, researchers demonstrated that $1,000 worth of ad targeting data could be used to track an individual's location and glean sensitive information like their religion and sexual orientation. Since then, data brokers have been caught selling bidstream data to government intelligence agencies. For example, the data broker Near Intelligence collected data about more than a billion devices from RTB auctions and sold it to the U.S. Defense Department. Mobilewalla sold bidstream data to another data broker, Gravy Analytics, whose subsidiary, Venntel, has likewise sold location data to the FBI, ICE, CBP, and other government agencies.
In addition to buying raw bidstream data, governments buy surveillance tools that rely on the same advertising auctions. The surveillance company Rayzone posed as an advertiser to acquire bidstream data, which it repurposed into tracking tools sold to governments around the world. Rayzone’s tools could identify phones that had been in specific locations and link them to people's names, addresses, and browsing histories. Patternz, another surveillance tool built on bidstream data, was advertised to security agencies worldwide as a way to track people's locations. The CEO of Patternz highlighted the connection between surveillance and advertising technology when he suggested his company could track people through “virtually any app that has ads.”
Beyond the privacy harms from RTB-fueled government surveillance, RTB also creates national security risks. Researchers have warned that RTB could allow foreign states and non-state actors to obtain compromising personal data about American defense personnel and political leaders. In fact, Google’s ad auctions sent sensitive data to a Russian ad company for months after it was sanctioned by the U.S. Treasury.
The privacy and security dangers of RTB are inherent to its design, and not just a matter of misuse by individual data brokers. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. This indiscriminate sharing of location data and other personal information is dangerous, regardless of whether the recipients are advertisers or surveillance companies in disguise. Sharing sensitive data with advertisers enables exploitative advertising, such as predatory loan companies targeting people in financial distress. RTB is a surveillance system at its core, presenting corporations and governments with limitless opportunities to use our data against us.
How You Can Protect Yourself
Privacy-invasive ad auctions occur on nearly every website and app, but there are steps you can take to protect yourself:
- For apps: Follow EFF’s instructions to disable your mobile advertising ID and audit app permissions. These steps will reduce the personal data available to the RTB process and make it harder for data brokers to create detailed profiles about you.
- For websites: Install Privacy Badger, a free browser extension built by EFF to block online trackers. Privacy Badger automatically blocks tracking-enabled advertisements, preventing the RTB process from beginning.
These measures will help protect your privacy, but advertisers are constantly finding new ways to collect and exploit your data. This is just one more reason why individuals shouldn’t bear the sole responsibility of defending their data every time they use the internet.
The Real Solution: Ban Online Behavioral Advertising
The best way to prevent online ads from fueling surveillance is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. It would also prevent your personal data from being broadcast to data brokers through RTB auctions. Ads could still be targeted contextually—based on the content of the page you’re currently viewing—without collecting or exposing sensitive information about you. This shift would not only protect individual privacy but also reduce the power of the surveillance industry. Seeing an ad shouldn’t mean surrendering your data to thousands of companies you’ve never heard of. It’s time to end online behavioral advertising and the mass surveillance it enables.