EFF: Updates
Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton
The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.
The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB 1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.
The plaintiff in this case is the Free Speech Coalition, the nonprofit, non-partisan trade association for the adult industry, and the defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.
1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It
Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why:
While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof of age in physical spaces, there are practical differences that make those disclosures less burdensome, or even nonexistent, compared to online requirements. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might be seventeen or under.
First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.
Second, while there are a variety of methods for verifying age online, the law doesn’t set out a specific method for websites to use, and it generally forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Submitting personal information such as a government ID is the most common method of online age verification today, but fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.
Less accurate methods, such as “age estimation,” which usually rely solely on an image or video of a person’s face, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway.
Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.
Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.
2. HB 1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare
Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service—both of which could be fly-by-night companies with no published privacy standards—will handle their data responsibly. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.
There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification company AU10TIX suffered a breach, and there’s no reason to expect this problem won’t grow if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has obtained a subpoena for it.
The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB 1181 creates a perfect storm for data privacy.
3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification
More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB 1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography.
It’s also not just adult content that’s at risk. A ruling from the Court on HB 1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws like the Kids Online Safety Act, which would force users to verify their ages before accessing social media.
4. The Supreme Court Has Rightly Struck Down Similar Laws Before
In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case, the Court ruled that many elements of the Communications Decency Act violated the First Amendment, including a provision making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.
The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web.
Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear.
5. There is No Safe, Privacy-Protecting Age-Verification Technology
The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act.
The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law
Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach into virtually every U.S. adult household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights.
For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review.
Meta’s New Content Policy Will Harm Vulnerable Users. If It Really Valued Free Speech, It Would Make These Changes
Earlier this week, when Meta announced changes to their content moderation processes, we were hopeful that some of those changes—which we will address in more detail in this post—would enable greater freedom of expression on the company’s platforms, something for which we have advocated for many years. While Meta’s initial announcement primarily addressed changes to its misinformation policies and included rolling back over-enforcement and automated tools that we have long criticized, we expressed hope that “Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ+ speech, political dissidence, and sex work.”
However, shortly after our initial statement was published, we became aware that rather than addressing those historically over-moderated subjects, Meta was taking the opposite tack and—as reported by the Independent—making targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups.
It was our mistake to formulate our responses and expectations based on what is essentially a marketing video for upcoming policy changes before any of those changes were reflected in their documentation. We prefer to focus on the actual impacts of online censorship felt by people, which tend to be further removed from the stated policies outlined in community guidelines and terms of service documents. Facebook has a clear and disturbing track record of silencing and further marginalizing already oppressed peoples, and then being less than forthright about its content moderation policy. These first changes to actually surface in Facebook's community standards document seem to be in the same vein.
Specifically, Meta’s hateful conduct policy now contains the following:
- People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech.
But the implementation of this policy shows that it is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging the legitimacy of LGBTQ+ rights. For example:
- While allegations of mental illness against people based on their protected characteristics remain a tier 2 violation, the revised policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism [sic] and homosexuality.”
- The revised policy now specifies that Meta allows speech advocating gender-based and sexual orientation-based exclusion from military, law enforcement, and teaching jobs, and from sports leagues and bathrooms.
- The revised policy also removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics.
These changes reveal that Meta seems less interested in freedom of expression as a principle and more focused on appeasing the incoming U.S. administration, a concern we mentioned in our initial statement with respect to the announced move of the content policy team from California to Texas to address “appearances of bias.” Meta said it would be making some changes to reflect that these topics are “the subject of frequent political discourse and debate” and can be said “on TV or the floor of Congress.” But if that is truly Meta’s new standard, we are struck by how selectively it is being rolled out, particularly in how it allows more anti-LGBTQ+ speech.
We continue to stand firmly against hateful anti-trans content remaining on Meta’s platforms, and strongly condemn any policy change directly aimed at enabling hate toward vulnerable communities—both in the U.S. and internationally.
Real and Sincere Reforms to Content Moderation Can Both Promote Freedom of Expression and Protect Marginalized Users
In its initial announcement, Meta also said it would change how policies are enforced to reduce mistakes, stop reliance on automated systems to flag every piece of content, and add staff to review appeals. We believe that, in theory, these are positive measures that should result in less censorship of expression for which Meta has long been criticized by the global digital rights community, as well as by artists, sex worker advocacy groups, LGBTQ+ advocates, Palestine advocates, and political groups, among others.
But we are aware that these problems, at a corporation with a history of biased and harmful moderation like Meta, need a careful, well-thought-out, and sincere fix that will not undermine broader freedom of expression goals.
For more than a decade, EFF has been critical of the impact that content moderation at scale—and automated content moderation in particular—has on various groups. If Meta is truly interested in promoting freedom of expression across its platforms, we renew our calls to prioritize the following much-needed improvements instead of allowing more hateful speech.
Meta Must Invest in Its Global User Base and Cover More Languages
Meta has long failed to invest in providing cultural and linguistic competence in its moderation practices, often leading to inaccurate removal of content as well as a greater reliance on (faulty) automation tools. This has been apparent to us for a long time. In the wake of the 2011 Arab uprisings, we documented our concerns with Facebook’s reporting processes and their effect on activists in the Middle East and North Africa. More recently, the need for cultural competence in the industry generally was emphasized in the revised Santa Clara Principles.
Over the years, Meta’s global shortcomings became even more apparent as its platforms were used to promote hate and extremism in a number of locales. One key example is the platform’s failure to moderate anti-Rohingya sentiment in Myanmar—the direct result of having far too few Burmese-speaking moderators (in 2015, as extreme violence and violent sentiment toward the Rohingya was well underway, there were just two such moderators).
If Meta is indeed going to roll back the use of automation to flag and action most content and ensure that appeals systems work effectively, which will solve some of these problems, it must also invest globally in qualified content moderation personnel to make sure that content from countries outside of the United States and in languages other than English is fairly moderated.
Reliance on Automation to Flag Extremist Content Allows for Flawed Moderation
We have long been critical of Meta’s over-enforcement of terrorist and extremist speech, specifically of the impact it has on human rights content. Part of the problem is Meta’s over-reliance on automation to flag extremist content. A 2020 document reviewing moderation across the Middle East and North Africa claimed that algorithms used to detect terrorist content in Arabic incorrectly flag posts 77 percent of the time.
More recently, we have seen this with Meta’s automated moderation to remove the phrase “from the river to the sea.” As we argued in a submission to the Oversight Board—with which the Board also agreed—moderation decisions must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.
Another example of this problem that has overlapped with Meta’s shortcomings with respect to linguistic competence is in relation to the term “shaheed,” which translates most closely to “martyr” and is used by Arabic speakers and many non-Arabic-speaking Muslims elsewhere in the world to refer primarily (though not exclusively) to individuals who have died in the pursuit of ideological causes. As we argued in our joint submission with ECNL to the Meta Oversight Board, use of the term is context-dependent, but Meta has used automated moderation to indiscriminately remove instances of the word. In their policy advisory opinion, the Oversight Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”
Marginalized communities that experience persecution offline often face disproportionate censorship online. It is imperative that Meta recognize the responsibilities it has to its global user base in upholding free expression, particularly of communities that may otherwise face censorship in their home countries.
Sexually-Themed Content Remains Subject to Discriminatory Over-censorship
Our critique of Meta’s removal of sexually-themed content goes back more than a decade. The company’s policies on adult sexual activity and nudity affect a wide range of people and communities, but most acutely impact LGBTQ+ individuals and sex workers. Typically aimed at keeping sites “family friendly” or “protecting the children,” these policies are unevenly enforced, often classifying LGBTQ+ content as “adult” or “harmful” when similar heterosexual content isn’t. These policies were often written and enforced discriminatorily and at the expense of gender-fluid and nonbinary speakers—we joined in the We the Nipple campaign aimed at remedying this discrimination.
In the midst of ongoing political divisions, issues like this have a serious impact on social media users.
Most nude content is legal, and engaging with such material online provides individuals with a safe and open framework to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With Meta intervening to become the arbiter of how people create and engage with nudity and sexuality—both offline and in the digital space—a crucial form of engagement for all kinds of users has been removed, and the voices of people with less power have regularly been shut down.
Over-removal of Abortion Content Stifles User Access to Essential Information
The removal of abortion-related posts on Meta platforms containing the word ‘kill’ has failed to meet the criteria for restricting users’ right to freedom of expression. Meta has regularly over-removed abortion-related content, hamstringing its users’ ability to voice their political beliefs. The use of automated tools for content moderation leads to the biased removal of this language, as well as essential information. In 2022, Vice reported that a Facebook post stating "abortion pills can be mailed" was flagged within seconds of it being posted.
At a time when bills are being introduced across the U.S. to restrict the exchange of abortion-related information online, reproductive justice and safe access to abortion, like so many other aspects of managing our healthcare, are fundamentally tied to our digital lives. And with corporations deciding what content is hosted online, the impact of this removal is exacerbated.
What was benign data online is effectively now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. Meta must adhere to its responsibility to respect international human rights law, and ensure that any abortion-related content removal be both necessary and proportionate.
Meta’s symbolic move of its content team from California to Texas, a state that is aiming to make the distribution of abortion information illegal, also raises serious concerns that Meta will backslide on this issue—in line with local Texan state law banning abortion—rather than make improvements.
Meta Must Do Better to Provide Users With Transparency
EFF has been critical of Facebook’s lack of transparency for a long time. When it comes to content moderation, the company’s transparency reports lack many of the basics: how many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content in each category are removed, but does not tell us why or how these decisions are made.
Meta makes billions from its own exploitation of our data, too often choosing their profits over our privacy—opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of the corporation’s harms—that its core business model depends on collecting as much information about users as possible, then using that data to target ads, as well as target competitors.
That’s why EFF, with others, launched the Santa Clara Principles on how corporations like Meta can best obtain meaningful transparency and accountability around the increasingly aggressive moderation of user-generated content. And as platforms like Facebook, Instagram, and X continue to occupy an even bigger role in arbitrating our speech and controlling our data, there is an increased urgency to ensure that their reach is not only stifled, but reduced.
Flawed Approach to Moderating Misinformation with Censorship
Misinformation has been thriving on social media platforms, including Meta’s. As we said in our initial statement, and have written before, Meta and other platforms should use the variety of fact-checking and verification tools available to them, including both community notes and professional fact-checkers, and should have robust systems in place to check against any flagging that results.
Meta and other platforms should also employ media literacy tools, such as encouraging users to read articles before sharing them, and provide resources to help their users assess the reliability of information on their sites. We have also called for Meta and others to stop privileging government officials by providing them with greater opportunities to lie than other users.
While we expressed some hope on Tuesday, the cynicism expressed by others seems warranted now. Over the years, EFF and many others have worked to push Meta to make improvements. We've had some success with its "Real Names" policy, for example, which disproportionately affected the LGBTQ community and political dissidents. We also fought for, and won improvements on, Meta's policy on allowing images of breastfeeding, rather than marking them as "sexual content." If Meta truly values freedom of expression, we urge it to redirect its focus to empowering historically marginalized speakers, rather than empowering only their detractors.
EFF Statement on Meta's Announcement of Revisions to Its Content Moderation Processes
In general, EFF supports moves that bring more freedom of expression and transparency to platforms—regardless of their political motivation. We’re encouraged by Meta's recognition that automated flagging and responses to flagged content have caused all sorts of mistakes in moderation. Just this week, it was reported that some of those "mistakes" were heavily censoring LGBTQ+ content. We sincerely hope that the lightened restrictions announced by Meta will apply uniformly, and not just to hot-button U.S. political topics.
Censorship, broadly, is not the answer to misinformation. We encourage social media companies to employ a variety of non-censorship tools to address problematic speech on their platforms, and fact-checking can be one of those tools. Community notes, essentially crowd-sourced fact-checking, can be a very valuable tool for addressing misinformation and potentially give greater control to users. But fact-checking by professional organizations with ready access to subject-matter expertise can be another. This has proved especially true in international contexts, where fact-checkers have been instrumental in refuting, for example, genocide denial.
So, even if Meta is changing how it uses and preferences fact-checking entities, we hope that Meta will continue to look to fact-checking entities as an available tool. Meta does not have to, and should not, choose one system to the exclusion of the other.
Importantly, misinformation is only one of many content moderation challenges facing Meta and other social media companies. We hope Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ speech, political dissidence, and sex work.
Meta’s decision to move its content teams from California to “help reduce the concern that biased employees are overly censoring content” seems more political than practical. There is of course no population that is inherently free from bias and by moving to Texas, the “concern” will likely not be reduced, but just relocated from perceived “California bias” to perceived “Texas bias.”
Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable political speech. On the other hand, Meta's previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn't illegal in the United States. We applaud Meta’s efforts to try to fix its over-censorship problem but will watch closely to make sure it is a good-faith effort and rolled out fairly and not merely a political maneuver to accommodate the upcoming U.S. administration change.
Sixth Circuit Rules Against Net Neutrality; EFF Will Continue to Fight
Last week, the Sixth U.S. Circuit Court of Appeals ruled against the FCC, rejecting its authority to classify broadband as a Title II “telecommunications service.” In doing so, the court removed net neutrality protections for all Americans and took away the FCC’s ability to meaningfully regulate internet service providers.
This ruling fundamentally gets wrong the reality of internet service we all live with every day. Nearly 80% of Americans view broadband access to be as important as water and electricity. It is no longer an extra, non-necessary “information service,” as it was seen 40 years ago, but a vital medium of communication in everyday life. Business, health services, education, entertainment, our social lives, and more have increasingly moved online. By ruling that broadband is an “information service” and not a “telecommunications service,” this court is saying that the ISPs that control your broadband access will continue to face little to no oversight for their actions.
This is intolerable.
Net neutrality is the principle that ISPs treat all data that travels over their network equally, without improper discrimination in favor of particular apps, sites, or services. At its core, net neutrality is a principle of equity and a protector of innovation—that, at least online, large monopolistic ISPs don’t get to determine winners and losers. Net neutrality ensures that users determine their online experience, not ISPs. As such, it is fundamental to user choice, access to information, and free expression online.
By removing protections against actions like blocking, throttling, and paid prioritization, the court gives those willing and able to pay ISPs an advantage over those who are not. It privileges large legacy corporations that have partnerships with the big ISPs, and it means that newer, smaller, or niche services will have trouble competing, even if they offer a superior service. It means that ISPs can throttle your service–or that of, say, a fire department fighting the largest wildfire in state history. They can block a service they don’t like. In addition to charging you for access to the internet, they can charge services and websites for access to you, artificially driving up costs. And where most Americans have little choice in home broadband providers, it means these ISPs will be able to exercise their monopoly power not just on the price you pay for access, but how you access and engage with information as well.
Moving forward, now more than ever it becomes important for individual states to pass their own net neutrality laws, or defend the ones they have on the books. California passed a gold standard net neutrality law in 2018 that has survived judicial scrutiny. It is up to us to ensure it remains in place.
Congress can also end this endless whiplash of reclassification once and for all by passing a law classifying broadband internet services firmly under Title II. Such proposals have been introduced before; they ought to be introduced again.
This is a bad ruling for Team Internet, but we are resilient. EFF–standing with users, innovators, creators, public interest advocates, librarians, educators, and everyone else who relies on the open internet–will continue to champion the principles of net neutrality and work toward an equitable and open internet for all.
Last Call: The Combined Federal Campaign Pledge Period Closes on January 15!
The pledge period for the Combined Federal Campaign (CFC) closes on Wednesday, January 15! If you're a U.S. federal employee or retiree, now is the time to make your pledge and support EFF’s work to protect your rights online.
If you haven’t before, giving to EFF through the CFC is quick and easy! Just head on over to GiveCFC.org and click “DONATE.” Then you can search for EFF using our CFC ID 10437 and make a pledge via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can also choose to increase your support there as well!
The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year members of this community raised nearly $34,000 to support EFF’s initiatives advocating for privacy and free expression online. That support has helped us:
- Fight for the public's right to access police drone footage
- Encourage the Fifth Circuit Court of Appeals to rule that location-based geofence warrants are unconstitutional
- Push back against countless censorship laws, including the Kids Online Safety Act
- Continue to see more of the web encrypted thanks to Certbot and Let's Encrypt
Federal employees and retirees have a tremendous impact on our democracy and the future of civil liberties and human rights online. By making a pledge through the CFC, you can shape a future where your privacy and free speech rights are protected. Make your pledge today using EFF’s CFC ID 10437!
EFF Goes to Court to Uncover Police Surveillance Tech in California
Which surveillance technologies are California police using? Are they buying access to your location data? If so, how much are they paying? These are basic questions the Electronic Frontier Foundation is trying to answer in a new lawsuit called Pen-Link v. County of San Joaquin Sheriff’s Office.
EFF filed a motion in California Superior Court to join—or intervene in—an existing lawsuit to get access to documents we requested. The private company Pen-Link sued the San Joaquin Sheriff’s Office to block the agency from disclosing to EFF the unredacted contracts between them, claiming the information is a trade secret. We are going to court to make sure the public gets access to these records.
The public has a right to know the technology that law enforcement buys with taxpayer money. This information is not a trade secret, despite what private companies try to claim.
How did this case start?
As part of EFF’s transparency mission, we sent public records requests to California law enforcement agencies—including the San Joaquin Sheriff’s Office—seeking information about law enforcement’s use of technology sold by two companies: Pen-Link and its subsidiary, Cobwebs Technologies.
The Sheriff’s Office gave us 40 pages of redacted documents. But at the request of Pen-Link, the Sheriff’s Office redacted the descriptions and prices of the products, services, and subscriptions offered by Pen-Link and Cobwebs.
Pen-Link then filed a lawsuit to permanently block the Sheriff’s Office from making the information public, claiming its prices and descriptions are trade secrets. Among other things, Pen-Link requires its law enforcement customers to sign non-disclosure agreements to not reveal use of the technology without the company’s consent. In addition to thwarting transparency, this raises serious questions about defendants’ rights to obtain discovery in criminal cases.
“Customer and End Users are prohibited from disclosing use of the Deliverables, names of Cobwebs' tools and technologies, the existence of this agreement or the relationship between Customers and End Users and Cobwebs to any third party, without the prior written consent of Cobwebs,” according to Cobwebs’ Terms.
Unfortunately, these kinds of terms are not new.
EFF is entering the lawsuit to make sure the records get released to the public. Pen-Link’s lawsuit is known as a “reverse” public records lawsuit because it seeks to block, rather than grant, access to public records. It is a rare tool traditionally used only to protect a person’s constitutional right to privacy—not a business’ purported trade secrets. In addition to defending against the “reverse” public records lawsuit, we are asking the court to require the Sheriff’s Office to give us the un-redacted records.
Who are Pen-Link and Cobwebs Technologies?
Pen-Link and its subsidiary Cobwebs Technologies are private companies that sell products and services to law enforcement. Pen-Link has been around for years and may be best known as a company that helps law enforcement execute wiretaps after a court grants approval. In 2023, Pen-Link acquired the company Cobwebs Technologies.
The redacted documents indicate that San Joaquin County was interested in Cobwebs’ “Web Intelligence Investigation Platform.” In other cases, this platform has included separate products like WebLoc, Tangles, or a “face processing subscription.” WebLoc is a platform that provides law enforcement with a vast amount of location data sourced from large data sets. Tangles uses AI to glean intelligence from the “open, deep and dark web.” Journalists at multiple news outlets have chronicled this technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. The company has also provided proxy social media accounts for undercover investigations, which led Meta to name it a surveillance-for-hire company and to delete hundreds of accounts associated with the platform. Cobwebs has had multiple high-value contracts with federal agencies like Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS) and state entities, like the Texas Department of Public Safety and the West Virginia Fusion Center. EFF classifies this type of product as a “Third Party Investigative Platform,” a category that we began documenting in the Atlas of Surveillance project earlier this year.
What’s next?
Before EFF officially joins the case, the court must grant our motion; then we can file our petition and brief the case. A favorable ruling would grant the public access to these documents and show law enforcement contractors that they can’t hide their surveillance tech behind claims of trade secrets.
For communities to have informed conversations and make reasonable decisions about powerful surveillance tools being used by their governments, our right to information under public records laws must be honored. The costs and descriptions of government purchases are common data points, regularly subject to disclosure under public records laws.
Allowing Pen-Link to keep this information secret would dangerously diminish the public’s right to government transparency and help facilitate surveillance of U.S. residents. In the past, our public records work has exposed similar surveillance technology. In 2022, EFF produced a large exposé on Fog Data Science, the secretive company selling mass surveillance to local police.
The case number is STK-CV-UWM-0016425. Read more here:
EFF's Motion to Intervene
EFF's Points and Authorities
Trujillo Declaration & EFF's Cross-Petition
Pen-Link's Original Complaint
Redacted documents produced by County of San Joaquin Sheriff’s Office
Online Behavioral Ads Fuel the Surveillance Industry—Here’s How
A global spy tool exposed the locations of billions of people to anyone willing to pay. A Catholic group bought location data about gay dating app users in an effort to out gay priests. A location data broker sold lists of people who attended political protests.
What do these privacy violations have in common? They share a source of data that’s shockingly pervasive and unregulated: the technology powering nearly every ad you see online.
Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of.
What is Real-Time Bidding?
RTB is the process used to select the targeted ads shown to you on nearly every website and app you visit. The ads you see are the winners of milliseconds-long auctions that expose your personal information to thousands of companies a day. Here’s how it works:
- The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company.
- The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people.
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on ad space.
- Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space.
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive the data. Indeed, anyone posing as an ad buyer can access a stream of sensitive data about the billions of individuals using websites or apps with targeted ads. That’s a big way that RTB puts personal data into the hands of data brokers, who sell it to basically anyone willing to pay. Although some ad auction companies have policies against selling bidstream data, the practice remains widespread.
RTB doesn’t just allow companies to harvest your data—it also incentivizes it. Bid requests containing more personal data attract higher bids, so websites and apps are financially motivated to harvest as much of your data as possible. RTB further incentivizes data brokers to track your online activity because advertisers purchase data from data brokers to inform their bidding decisions.
Data brokers don’t need any direct relationship with the apps and websites they’re collecting bidstream data from. While some data collection methods require web or app developers to install code from a data broker, RTB is facilitated by ad companies that are already plugged into most websites and apps. This allows data brokers to collect data at a staggering scale. Hundreds of billions of RTB bid requests are broadcast every day. For each of those bids, thousands of real or fake ad buying platforms may receive data. As a result, entire businesses have emerged to harvest and sell data from online advertising auctions.
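To make the mechanics above concrete, here is a rough, hypothetical sketch of the kind of "bid request" described in this section, loosely modeled on the OpenRTB-style JSON that many ad exchanges use. The field names, values, and bidder names are illustrative assumptions, not data captured from any real auction.

```python
# Hypothetical sketch of an RTB "bid request," loosely based on the OpenRTB
# JSON format many ad exchanges use. All field names and values here are
# illustrative assumptions, not data from any real auction.
import json

bid_request = {
    "id": "auction-7f3b9c",  # one auction, resolved in milliseconds
    "site": {"page": "https://example-news-site.com/health/article"},
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile advertising ID
        "ip": "203.0.113.42",
        "ua": "Mozilla/5.0 (Linux; Android 14) ...",
        "geo": {"lat": 37.7749, "lon": -122.4194},  # precise location
    },
    "user": {
        "id": "exchange-user-91d2",
        # interest/demographic segments a broker may have attached to you
        "data": [{"segment": ["parenting", "loan-seeker", "protest-attendee"]}],
    },
}

# The exchange broadcasts the same request to thousands of bidders. Only one
# wins the auction, but every recipient has now seen (and can keep) the data.
bidders = ["advertiser-a", "advertiser-b", "data-broker-posing-as-buyer"]
for bidder in bidders:
    print(f"sending bid request to {bidder}:")
    print(json.dumps(bid_request, indent=2)[:120], "...")
```

The sketch illustrates the core design problem: the data is sent out before any ad is chosen, so whoever is listed as a "bidder" receives it whether or not they ever buy an ad.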
First FTC Action Against Abuse of Real-Time Bidding Data
A recent enforcement action by the Federal Trade Commission (FTC) shows that the dangers of RTB are not hypothetical—data brokers actively rely on RTB to collect and sell sensitive information. The FTC found that data broker Mobilewalla was collecting personal data—including precise location information—from RTB auctions without placing ads.
Mobilewalla collected data on over a billion people, with an estimated 60% sourced directly from RTB auctions. The company then sold this data for a range of invasive purposes, including tracking union organizers, tracking people at Black Lives Matter protests, and compiling home addresses of healthcare employees for recruitment by competing employers. It also categorized people into custom groups for advertisers, such as “pregnant women,” “Hispanic churchgoers,” and “members of the LGBTQ+ community.”
The FTC concluded that Mobilewalla's practice of collecting personal data from RTB auctions where they didn’t place ads violated the FTC Act’s prohibition of unfair conduct. The FTC’s proposed settlement order bans Mobilewalla from collecting consumer data from RTB auctions for any purposes other than participating in those auctions. This action marks the first time the FTC has targeted the abuse of bidstream data. While we celebrate this significant milestone, the dangers of RTB go far beyond one data broker.
Real-Time Bidding Enables Mass Surveillance
RTB is regularly exploited for government surveillance. As early as 2017, researchers demonstrated that $1,000 worth of ad targeting data could be used to track an individual’s location and glean sensitive information like their religion and sexual orientation. Since then, data brokers have been caught selling bidstream data to government intelligence agencies. For example, the data broker Near Intelligence collected data about more than a billion devices from RTB auctions and sold it to the U.S. Defense Department. Mobilewalla sold bidstream data to another data broker, Gravy Analytics, whose subsidiary, Venntel, likewise has sold location data to the FBI, ICE, CBP, and other government agencies.
In addition to buying raw bidstream data, governments buy surveillance tools that rely on the same advertising auctions. The surveillance company Rayzone posed as an advertiser to acquire bidstream data, which it repurposed into tracking tools sold to governments around the world. Rayzone’s tools could identify phones that had been in specific locations and link them to people's names, addresses, and browsing histories. Patternz, another surveillance tool built on bidstream data, was advertised to security agencies worldwide as a way to track people's locations. The CEO of Patternz highlighted the connection between surveillance and advertising technology when he suggested his company could track people through “virtually any app that has ads.”
Beyond the privacy harms from RTB-fueled government surveillance, RTB also creates national security risks. Researchers have warned that RTB could allow foreign states and non-state actors to obtain compromising personal data about American defense personnel and political leaders. In fact, Google’s ad auctions sent sensitive data to a Russian ad company for months after it was sanctioned by the U.S. Treasury.
The privacy and security dangers of RTB are inherent to its design, and not just a matter of misuse by individual data brokers. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. This indiscriminate sharing of location data and other personal information is dangerous, regardless of whether the recipients are advertisers or surveillance companies in disguise. Sharing sensitive data with advertisers enables exploitative advertising, such as predatory loan companies targeting people in financial distress. RTB is a surveillance system at its core, presenting corporations and governments with limitless opportunities to use our data against us.
How You Can Protect Yourself
Privacy-invasive ad auctions occur on nearly every website and app, but there are steps you can take to protect yourself:
- For apps: Follow EFF’s instructions to disable your mobile advertising ID and audit app permissions. These steps will reduce the personal data available to the RTB process and make it harder for data brokers to create detailed profiles about you.
- For websites: Install Privacy Badger, a free browser extension built by EFF to block online trackers. Privacy Badger automatically blocks tracking-enabled advertisements, preventing the RTB process from beginning.
These measures will help protect your privacy, but advertisers are constantly finding new ways to collect and exploit your data. This is just one more reason why individuals shouldn’t bear the sole responsibility of defending their data every time they use the internet.
The Real Solution: Ban Online Behavioral Advertising
The best way to prevent online ads from fueling surveillance is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. It would also prevent your personal data from being broadcast to data brokers through RTB auctions. Ads could still be targeted contextually—based on the content of the page you’re currently viewing—without collecting or exposing sensitive information about you. This shift would not only protect individual privacy but also reduce the power of the surveillance industry. Seeing an ad shouldn’t mean surrendering your data to thousands of companies you’ve never heard of. It’s time to end online behavioral advertising and the mass surveillance it enables.
Decentralization Reaches a Turning Point: 2024 in review
The steady rise of decentralized networks this year is transforming social media. Platforms like Mastodon, Bluesky, and Threads are still in their infancy but have already shown that when users are given options, innovation thrives, resulting in better tools and protections for our rights online. By moving toward a digital landscape that can’t be monopolized by one big player, we also see broader improvements to network resiliency and user autonomy.
The Steady Rise of Decentralized Networks
Fediverse and Threads
The Fediverse, a wide variety of sites and services most associated with Mastodon, continued to evolve this year. Meta’s Threads began integrating with the network, marking a groundbreaking shift for the company. Only a few years ago EFF dreamed of the impact an embrace of interoperability would have for a company that is notorious for building walled gardens that trap users within its platforms. By allowing Threads users to share their posts with Mastodon and the broader fediverse (and therefore, Bluesky) without leaving their home platform, Meta is introducing millions to the benefits of interoperability. We look forward to this continued trajectory, and to a day when it is easy to move to or from Threads, and still follow and interact with the same federated community.
Threads’ enormous user base—100 million daily active users—now dwarfs both Mastodon and Bluesky. Its integration into more open networks is a potential turning point in popularizing the decentralized social web. However, Meta’s poor reputation on privacy, moderation, and censorship drove many Fediverse instances to preemptively block Threads, and may fragment the network.
We explored how Threads stacks up against Mastodon and Bluesky, across moderation, user autonomy, and privacy. This development highlights the promise of decentralization, but it also serves as a reminder that corporate giants may still wield outsized influence over ostensibly open systems.
Bluesky’s Explosive Growth
While Threads dominated in sheer numbers, Bluesky was this year’s breakout star. At the start of the year, Bluesky had fewer than 200,000 users and was still invite-only. In the last few months of 2024, however, the project experienced over 500% growth in a single month, ultimately reaching over 25 million users.
Unlike Mastodon, which integrates into the Fediverse, Bluesky took a different path, building its own decentralized protocol (AT Protocol) to ensure user data and identities remain portable and users retain a “credible exit.” This innovation allows users to carry their online communities across platforms seamlessly, sparing them the frustration of rebuilding their community. Unlike the Fediverse, Bluesky has prioritized building a drop-in replacement for Twitter, and is still mostly centralized. Bluesky has a growing arsenal of tools available to users, embracing community creativity and innovation.
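For a sense of how that portability works in practice, the sketch below resolves a handle to its underlying decentralized identifier (DID) over the AT Protocol's XRPC interface. Because followers and data attach to the DID rather than to a handle or a hosting server, a user can change either without losing their community. The host URL, endpoint behavior, and placeholder handle here are assumptions for illustration, not a definitive client implementation.

```python
# Minimal sketch: resolving a Bluesky handle to its DID (decentralized
# identifier) via the AT Protocol's com.atproto.identity.resolveHandle
# XRPC method. The host and handle below are illustrative assumptions;
# replace HANDLE with a real handle to try it.
import json
import urllib.parse
import urllib.request

HOST = "https://public.api.bsky.app"   # assumed public AppView host
HANDLE = "example.bsky.social"          # placeholder handle

url = (
    f"{HOST}/xrpc/com.atproto.identity.resolveHandle?"
    + urllib.parse.urlencode({"handle": HANDLE})
)

with urllib.request.urlopen(url) as resp:
    did = json.load(resp)["did"]        # e.g. "did:plc:..."

print(f"{HANDLE} -> {did}")
```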
While Bluesky will be mostly familiar to former Twitter users, we ran through some tips for managing your Bluesky feed, and answered some questions for people just joining the platform.
Competition Matters: Keeping the Internet Weird
The rise of decentralized platforms underscores the critical importance of competition in driving innovation. Platforms like Mastodon and Bluesky thrive because they fill gaps left by corporate giants, and encourage users to find experiences which work best for them. The traditional social media model puts up barriers so platforms can impose restrictive policies and prioritize profit over user experience. When the focus shifts to competition and a lack of central control, the internet flourishes.
Whether a user wants the community focus of Mastodon, the global megaphone of Bluesky, or something else entirely, smaller platforms let people build experiences independent of the motives of larger companies. Decentralized platforms are ultimately most accountable to their users, not advertisers or shareholders.
Making Tech Resilient
This year highlighted the dangers of concentrating too much power in the hands of a few dominant companies. A major global IT outage this summer starkly demonstrated the fragility of digital monocultures, where a single point of failure can disrupt entire industries. These failures underscore the importance of decentralization, where networks are designed to distribute risk, ensuring that no single system compromise can ripple across the globe.
Decentralized projects like Meshtastic, which uses radio waves to provide internet connectivity in disaster scenarios, exemplify the kind of resilient infrastructure we need. However, even these innovations face threats from private interests. This year, a proposal from NextNav to claim the 900 MHz band for its own use put Meshtastic’s experimentation—and by extension, the broader potential of decentralized communication—at risk. As we discussed in our FCC comments, such moves illustrate how monopolistic power not only stifles competition but also jeopardizes resilient tools that could safeguard people’s connectivity.
Looking Ahead
This year saw meaningful strides toward building a decentralized, creative, and resilient internet for 2025. Interoperability and decentralization will likely continue to expand. As they do, EFF will be vigilant, watching for threats to decentralized projects and obstacles to the growth of open ecosystems.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Deepening Government Use of AI and E-Government Transition in Latin America: 2024 in Review
Policies aimed at fostering digital government processes are gaining traction in Latin America, at local and regional levels. While these initiatives can streamline access to public services, they can also make those services less accessible, less clear, and put people's fundamental rights at risk. As we move forward, we must emphasize transparency and privacy guarantees during government digital transition processes.
In November, the Ninth Ministerial Conference on the Information Society in Latin America and the Caribbean approved the 2026 Digital Agenda for the region (eLAC 2026). This initiative unfolds within the UN Economic Commission for Latin America and the Caribbean (ECLAC), a regional cooperation forum focused on furthering the economic development of LAC countries.
One of the thematic pillars of eLAC 2026 is the digital transformation of the State, including the digitalization of government processes and services to improve efficiency, transparency, citizen participation, and accountability. The digital agenda also aims to improve digital identity systems to facilitate access to public services and promote cross-border digital services in a framework of regional integration. In this context, the agenda points out countries’ willingness to implement policies that foster information-sharing, ensuring privacy, security, and interoperability in government digital systems, with the goal of using and harnessing data for decision-making, policy design and governance.
This regional process reflects and feeds country-level initiatives that have also gained steam in Latin America in the last few years. Incentives for government digital transformation typically take shape against the backdrop of improving government efficiency, so it is critical to qualify what efficiency means in practice. Often “efficiency” has meant budget cuts or shrinking access to public processes and benefits at the expense of fundamental rights. The promotion of fundamental rights should guide a State’s metrics as to what is efficient and successful.
As such, while digitalization can play an important role in streamlining access to public services and facilitating the enjoyment of rights, it can also make it more complex for people to access these same services and to interact with the State at all. The most vulnerable are those who most need that interaction to work well and those whose circumstances are often not accommodated by the technology being used. They are also the people most likely to have limited access to digital technologies and limited digital skills.
In addition, whereas properly integrating digital technologies into government processes and routines carries the potential to enhance transparency and civic participation, this is not a guaranteed outcome. It requires government willingness and policies oriented to these goals. Otherwise, digitalization can turn into an additional layer of complexity and distance between citizens and the State. Improving transparency and participation involves conceiving people not only as users of government services, but as participants in the design and implementation of public policies, including those related to States’ digital transition.
Digital identity and data-interoperability systems are generally treated as a natural part of government digitalization plans. Yet they should be approached with care. As we have highlighted, effective and robust data privacy safeguards do not necessarily come along with states’ investments in implementing these systems, despite the fact that they can be expanded into a regime of unprecedented data tracking. Among other recommendations and redlines, it’s crucial to support each person’s right to choose to continue using physical documentation instead of going digital.
This set of concerns stresses the importance of having an underlying institutional and normative structure to uphold fundamental rights within digital transition processes. Such a structure involves solid transparency and data privacy guarantees backed by equipped and empowered oversight authorities. Still, States often neglect the crucial role of that combination. In 2024, Mexico provided a notorious example. Just as the new Mexican government took steps to advance the country’s digital transformation, it also moved to shut down key independent oversight authorities, like the National Institute for Transparency, Access to Information and Personal Data Protection (INAI).
AI strategies approved in different Latin American countries show how fostering government use of AI is an important lever in national AI plans and a component of government digitalization processes.
In October 2024, Costa Rica became the first Central American country to launch an AI strategy. One of its strategic axes, called "Smart Government", focuses on promoting the use of AI in the public sector. The document highlights that by incorporating emerging technologies in public administration, it will be possible to optimize decision-making and automate bureaucratic tasks. It also envisions the provision of personalized services to citizens, according to their specific needs. This process includes not only the automation of public services, but also the creation of smart platforms to allow more direct interaction between citizens and government.
Brazil, in turn, has updated its AI strategy and in July published its AI Plan 2024-2028. One of its axes focuses on the use of AI to improve public services. The Brazilian plan likewise envisions the personalization of public services, offering citizens content that is contextual, targeted, and proactive. It involves state data infrastructures and the implementation of data interoperability among government institutions. Some of the AI-based projects proposed in the plan include developing early detection of neurodegenerative diseases and a "predict and protect" system to assess the school or university trajectory of students.
Each of these actions may have potential benefits, but also come with major challenges and risks to human rights. These involve the massive amount of personal data, including sensitive data, that those systems may process and cross-reference to provide personalized services, potential biases and disproportionate data processing in risk assessment systems, as well as incentives towards a problematic assumption that automation can replace human-to-human interaction between governments and their population. Choices about how to collect data and which technologies to adopt are ultimately political, although they are generally treated as technical and distant from political discussion.
An important basic step relates to government transparency about the AI systems in use by public institutions or being piloted. At a minimum, that transparency should range from actively informing people that these systems exist, with critical details on their design and operation, to meaningful information and indicators about their results and impacts.
Despite the increasing adoption of algorithmic systems by public bodies in Latin America (for instance, 2023 research mapped 113 government ADM systems in use in Colombia), robust transparency initiatives are still in their infancy. Chile stands out in that regard with its repository of public algorithms, while Brazil launched the Brazilian AI Observatory (OBIA) in 2024. Similar to the regional ILIA (Latin American Artificial Intelligence Index), OBIA features meaningful data to measure the state of adoption and development of AI systems in Brazil, but it still doesn't contain detailed information about AI-based systems in use by government entities.
The most challenging and controversial application from a human-rights and accountability standpoint is government use of AI in security-related activities.
During 2024, Argentina's new administration, under President Javier Milei, passed a set of acts regulating its police forces' cyber and AI surveillance capabilities. One of them, issued in May, stipulates how police forces must conduct "cyberpatrolling," or Open-Source Intelligence (OSINT), to prevent crimes. OSINT activities do not necessarily entail the use of AI, but they have increasingly integrated AI models, which facilitate the analysis of huge amounts of data. While OSINT has important and legitimate uses, including for investigative journalism, its application for government surveillance purposes has raised many concerns and led to abuses.
Another regulation issued in July created the "Unit of Artificial Intelligence Applied to Security" (UIAAS). The powers of the new agency include “patrolling open social networks, applications and Internet sites” as well as “using machine learning algorithms to analyze historical crime data and thus predict future crimes”. Civil society organizations in Argentina, such as Observatorio de Derecho Informático Argentino, Fundación Vía Libre, and Access Now, have gone to court to enforce their right to access information about the newly created unit.
The persistent opacity and lack of effective remedies for abuses in government use of digital surveillance technologies in the region prompted action from the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR). The Office of the Special Rapporteur carried out a consultation to receive input about digitally powered surveillance abuses, the state of digital surveillance legislation, the reach of the private surveillance market in the region, transparency and accountability challenges, and gaps and best-practice recommendations. EFF joined expert interviews and submitted comments in the consultation process. The final report will be published next year with important analysis and recommendations.
Considering this broader context of challenges, we launched a comprehensive report on the application of Inter-American Human Rights Standards to government use of algorithmic systems for rights-based determinations. Delving into Inter-American Court decisions and IACHR reports, we provide guidance on what state institutions must consider when assessing whether and how to deploy AI and ADM systems for determinations potentially affecting people's rights.
We detailed what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explained why this adoption must meet necessary and proportionate principles, and what this entails. We highlighted what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment. We elaborated on human rights implications building off key rights enshrined in the American Convention on Human Rights and the Protocol of San Salvador, setting up an operational framework for their due application.
Based on the report, we have connected with oversight institutions, joining trainings for public prosecutors in Mexico and strengthening ties with the Public Defender's Office in the state of São Paulo, Brazil. Our goal is to provide input for their adequate adoption of AI/ADM systems and for fulfilling their role as public interest entities regarding government use of algorithmic systems more broadly.
Enhancing public oversight of state deployment of rights-affecting technologies in a context of marked government digitalization is essential for democratic policy making and human-rights aligned government action. Civil society also plays a critical role, and we will keep working to raise awareness about potential impacts, pushing for rights to be fortified, not eroded, along the way.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Kids Online Safety Act Continues to Threaten Our Rights Online: 2024 in Review
At times this year, it seemed that Congress was going to give up its duty to protect our rights online—particularly when the Senate passed the dangerous Kids Online Safety Act (KOSA) by a large majority in July. But this legislation, which would chill protected speech and almost certainly result in privacy-invasive age verification requirements for many users to access social media sites, did not pass the House this year, thanks to strong opposition from EFF supporters and others.
KOSA, first introduced in 2022, would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to content. Congress introduced a number of versions of the bill this year, and we analyzed each of them. Unfortunately, the threat of this legislation still looms over us as we head into 2025, especially now that the bill has passed the Senate. And just a few weeks ago, its authors introduced an amended version to respond to criticisms from some House members.
Despite its many amendments in 2024, we continue to oppose KOSA. No matter which version becomes final, the bill will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms it identifies.
Here’s how, and why, we worked to stop KOSA this year, and where the fight stands now.
New Versions, Same Problems
The biggest problem with KOSA is its vague “duty of care” requirements. Imposing a duty of care on a broad swath of online services, and requiring them to mitigate specific harms based on the content of online speech, will result in those services imposing age verification and content restrictions. We’ve been critical of KOSA for this reason since it was introduced in 2022.
In February, KOSA's authors in the Senate released an amended version of the bill, in part as a response to criticisms from EFF and other groups. The updates changed how KOSA regulates design elements of online services and removed some enforcement mechanisms, but didn’t significantly change the duty of care, or the bill’s main effects. The updated version of KOSA would still create a censorship regime that would harm a large number of minors who have First Amendment rights to access lawful speech online, and force users of all ages to verify their identities to access that same speech, as we wrote at the time. KOSA’s requirements are comparable to cases in which the government tried to prevent booksellers from disseminating certain books; those attempts were found unconstitutional.
Kids Speak Out
The young people who KOSA supporters claim they’re trying to help have spoken up about the bill. In March, we published the results of a survey of young people who gave detailed reasons for their opposition to the bill. Thousands told us how beneficial access to social media platforms has been for them, and why they feared KOSA’s censorship. Too often we’re not hearing from minors in these debates at all—but we should be, because they will be most heavily impacted if KOSA becomes law.
Young people told us that KOSA would negatively impact their artistic education, their ability to find community online, their opportunity for self-discovery, and the ways that they learn accurate news and other information. To sample just a few of the comments: Alan, a fifteen-year-old, wrote,
I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for!
More Recent Changes To KOSA Haven’t Made It Better
In May, the U.S. House introduced a companion version to the Senate bill. This House version modified the bill around the edges, but failed to resolve its fundamental censorship problems. The primary difference in the House version was to create tiers that change how the law would apply to a company, depending on its size.
These are insignificant changes, given that most online speech happens on just a handful of the biggest platforms. Those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care and would be held to the strictest knowledge standard.
The other major shift was to update the definition of “compulsive usage” by suggesting it be linked to the Diagnostic and Statistical Manual of Mental Disorders, or DSM. But simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders.
KOSA Passes the Senate
KOSA passed through the Senate in July, though legislators on both sides of the aisle remain critical of the bill.
A version of KOSA introduced in September tinkered with the bill again but did not change the censorship requirements. This version replaced language about anxiety and depression with a requirement that apps and websites prevent “serious emotional disturbance.”
In December, the Senate released yet another version of the bill—this one written with the assistance of X CEO Linda Yaccarino. This version includes a throwaway line about protecting the viewpoint of users as long as those viewpoints are “protected by the First Amendment to the Constitution of the United States.” But user viewpoints were never threatened by KOSA; rather, the bill has always been meant to threaten the hosts of user speech—and it still does.
KOSA would allow the FTC to exert control over online speech, and there’s no reason to think the incoming FTC won’t use that power. The nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has promised to protect free speech by “fighting back against the trans agenda,” among other things. KOSA would give the FTC under this or any future administration wide latitude to decide what sort of content should be restricted because the agency views it as harmful to kids. And even if it’s never enforced, just passing KOSA would likely result in platforms taking down protected speech.
If KOSA passes, we’re also concerned that it would lead to mandatory age verification on apps and websites. Such requirements have their own serious privacy problems; you can read more about our efforts this year to oppose mandatory online ID in the U.S. and internationally.
EFF thanks our supporters, who have sent nearly 50,000 messages to Congress on this topic, for helping us oppose KOSA this year. In 2025, we will continue to rally to protect privacy and free speech online.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
AI and Policing: 2024 in Review
There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. But, in the meantime, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.
Enter law enforcement.
When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.
AI in policing can take many forms that we can trace back decades, including various forms of face recognition, predictive policing, data analytics, and automated gunshot recognition. But this year has seen the rise of a new and troublesome development in the integration between policing and artificial intelligence: AI-generated police reports.
Egged on by companies like Truleo and Axon, a rapidly growing market has emerged for vendors that use large language models to write police reports for officers. In the case of Axon, this is done by using the audio from police body-worn cameras to create narrative reports with minimal officer input except for a prompt to add a few details here and there.
We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross-examination reveals lies in a police report, officers will now have the veneer of plausible deniability by saying, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading them carefully?
And there are so many more questions we have. Translation is an art, not a science, so how and why will this AI understand and depict things like physical conflict or important rhetorical tools of policing like the phrases “stop resisting” and “drop the weapon,” even if a person is unarmed or is not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? Even if the technology was not explicitly made to handle these situations, officers left to their own devices will use it for any and all reports.
Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.
Countless movies and TV shows have depicted police hating paperwork, and if these pop culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and will provide more information as we continue to learn how it’s being used.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Fighting Online ID Mandates: 2024 In Review
This year, nearly half of U.S. states passed laws imposing age verification requirements on online platforms. EFF has opposed these efforts, because they censor the internet and burden access to online speech. Though age verification mandates are often touted as “online safety” measures for kids, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security.
Age verification bills generally require online services to verify all users’ ages—often through invasive tools like ID checks, biometric scans, and other dubious “age estimation” methods—before granting them access to certain online content or services. Some state bills mandate the age verification explicitly, including Texas’s H.B. 1181, Florida’s H.B. 3, and Indiana’s S.B. 17. Other state bills claim not to require age verification, but still threaten platforms with liability for showing certain content or features to minor users. These bills—including Mississippi’s H.B. 1126, Ohio’s Parental Notification by Social Media Operators Act, and the federal Kids Online Safety Act—raise the question: how are platforms to know which users are minors without imposing age verification?
EFF’s answer: they can’t. We call these bills “implicit age verification mandates” because, though they might expressly deny requiring age verification, they still force platforms to either impose age verification measures or, worse, to censor whatever content or features deemed “harmful to minors” for all users—not just young people—in order to avoid liability.
Age verification requirements are the wrong approach to protecting young people online. No one should have to hand over their most sensitive personal information or submit to invasive biometric surveillance just to access lawful online speech.
EFF’s Work Opposing State Age Verification BillsLast year, we saw a slew of dangerous social media regulations for young people introduced across the country. This year, the flood of ill-advised bills grew larger. As of December 2024, nearly every U.S. state legislature has introduced at least one age verification bill, and nearly half the states have passed at least one of these proposals into law.
Courts agree with our position on age verification mandates. Across the country, courts have repeatedly and consistently held these so-called “child safety” bills unconstitutional, confirming that it is nearly impossible to impose online age-verification requirements without violating internet users’ First Amendment rights. In 2024, federal district courts in Ohio, Indiana, Utah, and Mississippi enjoined those states’ age verification mandates. The decisions underscore how these laws, in addition to being unconstitutional, are also bad policy. Instead of seeking to censor the internet or block young people from it, lawmakers seeking to help young people should focus on advancing legislation that solves the most pressing privacy and competition problems for all users—without restricting their speech.
Here’s a quick review of EFF’s work this year to fend off state age verification mandates and protect digital rights in the face of this legislative onslaught.
California
In January, we submitted public comments opposing an especially vague and poorly written proposal: California Ballot Initiative 23-0035, which would allow plaintiffs to sue online information providers for damages of up to $1 million if they violate their “responsibility of ordinary care and skill to a child.” We pointed out that this initiative’s vague standard, combined with extraordinarily large statutory damages, would severely limit access to important online discussions for both minors and adults, and cause platforms to censor user content and impose mandatory age verification in order to avoid this legal risk. Thankfully, this measure did not make it onto the 2024 ballot.
In February, we filed a friend-of-the-court brief arguing that California’s Age Appropriate Design Code (AADC) violated the First Amendment. Our brief asked the Ninth Circuit Court of Appeals to rule narrowly that the AADC’s age estimation scheme and vague description of “harmful content” render the entire law unconstitutional, even though the bill also contained several privacy provisions that, stripped of the unconstitutional censorship provisions, could otherwise survive. In its decision in August, the Ninth Circuit confirmed that parts of the AADC likely violate the First Amendment and provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the AADC’s age-verification provision specifically.
Later in the year, we also filed a letter to California lawmakers opposing A.B. 3080, a proposed state bill that would have required internet users to show their ID in order to look at sexually explicit content. Our letter explained that bills that allow politicians to define what “sexually explicit” content is and enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. We declared victory in September when the bill failed to get passed by the legislature.
New York
Similarly, after New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year, we filed comments urging the state attorney general (who is responsible for writing the rules to implement the bill) to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. We also noted that none of the many methods of age verification listed in the attorney general’s call for comments is both privacy-protective and entirely accurate, as various experts have reported.
Texas
We also took the fight to Texas, which passed a law requiring all Texas internet users, including adults, to submit to invasive age verification measures on every website deemed by the state to be at least one-third composed of sexual material. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect—creating a split among federal circuit courts on the constitutionality of age verification mandates. In May, we filed an amicus brief urging the U.S. Supreme Court to grant review of the Fifth Circuit’s decision and to ultimately overturn the Texas law on First Amendment grounds.
In September, after the Supreme Court accepted the Texas case, we filed another amicus brief on the merits. We pointed out that the Fifth Circuit’s flawed ruling diverged from decades of legal precedent recognizing, correctly, that online ID mandates impose greater burdens on our First Amendment rights than in-person age checks. We explained that there is nothing about this Texas law or advances in technology that would lessen the harms that online age verification mandates impose on adults wishing to exercise their constitutional rights. The Supreme Court has set this case, Free Speech Coalition v. Paxton, for oral argument in February 2025.
Mississippi
Finally, we supported the First Amendment challenge to Mississippi’s age verification mandate, H.B. 1126, by filing amicus briefs both in the federal district court and on appeal to the Fifth Circuit. Mississippi’s extraordinarily broad law requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from exposure to materials deemed “harmful” by state officials.
In our June brief for the district court, we once again explained that online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their IDs in physical spaces, and impose significant barriers on adults’ ability to access lawful speech online. The district court agreed with us, issuing a decision that enjoined the Mississippi law and heavily cited our amicus brief.
Upon Mississippi’s appeal to the Fifth Circuit, we filed another amicus brief—this time highlighting H.B. 1126’s dangerous impact on young people’s free expression. After all, minors enjoy the same First Amendment right as adults to access and engage in protected speech online, and online spaces are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics—and seek critical resources and support for the very same harms these bills claim to address. In our brief, we urged the court to recognize that age-verification regimes like Mississippi’s place unnecessary and unconstitutional barriers between young people and these online spaces that they rely on for vibrant self-expression and crucial support.
Looking Ahead
As 2024 comes to a close, the fight against online age verification is far from over. As the state laws continue to proliferate, so too do the legal challenges—several of which are already on file.
EFF’s work continues, too. As we move forward in state legislatures and courts, at the federal level here in the United States, and all over the world, we will continue to advocate for policies that protect the free speech, privacy, and security of all users—adults and young people alike. And, with your help, we will continue to fight for the future of the open internet, ensuring that all users—especially the youth—can access the digital world without fear of surveillance or unnecessary restrictions.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Federal Regulators Limit Location Brokers from Selling Your Whereabouts: 2024 in Review
The opening and closing months of 2024 saw federal enforcement against a number of location data brokers that track and sell users’ whereabouts through apps installed on their smartphones. In January, the Federal Trade Commission brought successful enforcement actions against X-Mode Social and InMarket, banning the companies from selling precise location data—a first prohibition of this kind for the FTC. And in December, the FTC widened its net to two additional companies—Gravy Analytics (Venntel) and Mobilewalla—barring them from selling or disclosing location data on users visiting sensitive areas such as reproductive health clinics or places of worship. In previous years, the FTC has sued location brokers such as Kochava, but the invasive practices of these companies have only gotten worse. Seeing the federal government ramp up enforcement is a welcome development for 2024.
As regulators have clearly stated, location information is sensitive personal information. Companies can glean location information from your smartphone in a number of ways. Some companies offer Software Development Kits (SDKs) that, once included in an app, instruct it to send back troves of sensitive information for analytical insights or debugging purposes. The data brokers may offer market insights or financial incentives for app developers to include their SDKs. Other companies don’t ask apps to include their SDKs directly, but instead participate in Real-Time Bidding (RTB) auctions, placing bids for ad space on devices in locations they specify. Even if they lose the auction, they can glean valuable device location information just by participating. Often, apps will ask for permissions such as location data for legitimate reasons aligned with the purpose of the app: for example, a price comparison app might use your whereabouts to show you the cheapest vendor of a product you’re interested in in your area. What you aren’t told is that your location is also shared with companies tracking you.
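To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of a broker participating in an RTB auction. The field names and functions are simplified stand-ins, not any real ad exchange’s API; the point is only that the device identifier and coordinates arrive in the bid request itself, so a participant can log them even while bidding nothing and never winning.

    import json

    def handle_bid_request(bid_request: dict) -> dict:
        """A hypothetical data broker acting as a bidder in an RTB auction."""
        device = bid_request.get("device", {})
        geo = device.get("geo", {})

        # The broker can record the advertising identifier and precise location
        # carried in the bid request itself, regardless of the auction outcome.
        log_entry = {
            "ad_id": device.get("ifa"),
            "lat": geo.get("lat"),
            "lon": geo.get("lon"),
            "timestamp": bid_request.get("timestamp"),
        }
        append_to_location_history(log_entry)

        # Bidding nothing (or losing) doesn't matter: the location was already harvested.
        return {"bid": 0.0}

    def append_to_location_history(entry: dict) -> None:
        # Stand-in for the broker's storage pipeline.
        print(json.dumps(entry))

    # Example bid request with simplified, made-up fields.
    example = {
        "timestamp": "2024-06-01T12:00:00Z",
        "device": {"ifa": "ab12-cd34", "geo": {"lat": 32.6791, "lon": -115.4989}},
    }
    handle_bid_request(example)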
A number of revelations this year gave us better insight into how the location data broker industry works, revealing the inner workings of powerful tools such as Locate X, which allows even those merely claiming that they might work with law enforcement at some point in the future to access troves of mobile location data across the planet. The mobile location tracking company FOG Data Science, which EFF revealed in 2022 to be selling troves of information to local police, was this year found to also be soliciting law enforcement for information on suspects’ doctors in order to track the suspects via their doctor visits.
A number of revelations this year gave us better insight into how the location data broker industry works
EFF detailed how these tools can be stymied via technical means, such as changing a few key settings on your mobile device to disallow data brokers from linking your location across space and time. We further outlined legislative avenues to ensure structural safeguards are put in place to protect us all from an out-of-control predatory data industry.
In addition to FTC action, the Consumer Financial Protection Bureau proposed a new rule meant to crack down on the data broker industry. As the CFPB mentioned, data brokers compile highly sensitive information—like information about a consumer's finances, the apps they use, and their location throughout the day. The rule would include stronger consent requirements and protections for personal data that has been purportedly de-identified. Given the abuses the announcement cites, including the distribution and sale of “detailed personal information about military service members, veterans, government employees, and other Americans,” we hope to see adoption and enforcement of this proposed rule in 2025.
This year has seen a strong regulatory appetite to protect consumers from harms which in bygone years would have seemed unimaginable: detailed records on the movements of nearly everyone, packaged and made available for pennies. We hope 2025 continues this appetite to address the dangers of location data brokers.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Exposing Surveillance at the U.S.-Mexico Border: 2024 Year in Review in Pictures
Some of the most picturesque landscapes in the United States can be found along the border with Mexico. Yet, from San Diego’s beaches to the Sonoran Desert, from Big Bend National Park to the Boca Chica wetlands, we see vistas marred by the sinister spread of surveillance technology, courtesy of the federal government.
EFF refuses to let this blight grow without documenting it, exposing it, and finding ways to fight back alongside the communities that live in the shadow of this technological threat to human rights.
Here’s a galley of images representing our work and the new developments we’ve discovered in border surveillance in 2024.
1. Mapping Border Surveillance
EFF’s stand-up display of surveillance at the US-Mexico border. Source: EFF
EFF published the first iteration of our map of surveillance towers at the U.S.-Mexico border in Spring 2023, having pinpointed the precise location of 290 towers, a fraction of what we knew might be out there. A year and a half later, with the help of local residents, researchers, and search-and-rescue groups, our map now includes more than 500 towers.
In many cases, the towers are brand new, with some going up as recently as this fall. We’ve also added the location of surveillance aerostats, checkpoint license plate readers, and face recognition at land ports of entry.
In addition to our online map, we also created a 10’ x 7’ display that we debuted at “Regardless of Frontiers: The First Amendment and the Exchange of Ideas Across Borders,” a symposium held by the Knight First Amendment Institute at Columbia University in October. If your institution would be interested in hosting it, please email us at aos@eff.org.
2. Infrastructures of Control
The Infrastructures of Control exhibit at University of Arizona. Source: EFF
Two University of Arizona geographers—Colter Thomas and Dugan Meyer—used our map to explore the border, driving on dirt roads and hiking in the desert, to document the infrastructure that comprises the so-called “virtual wall.” The result: “Infrastructures of Control,” a photography exhibit in April at the University of Arizona that also included a near-actual size replica of an “autonomous surveillance tower.”
You can read our interview with Thomas and Meyer here.
3. An Old Tower, a New Lease in Calexico
A remote video surveillance system in Calexico, Calif. Source: EFF
Way back in 2000, the Immigration and Naturalization Service—which oversaw border security prior to the creation of Customs and Border Protection (CBP) within the Department of Homeland Security (DHS)—leased a small square of land in a public park in Calexico, Calif., where it then installed one of the earliest border surveillance towers. The lease lapsed in 2020, and with plans for a massive surveillance upgrade looming, CBP rushed to try to renew the lease this year.
This was especially concerning because of CBP’s new strategy of combining artificial intelligence with border camera feeds. So EFF teamed up with the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition to try to convince the Calexico City Council to either reject the lease or demand that CBP enact better privacy protections for residents in the neighboring community and children playing in Nosotros Park. Unfortunately, local politics were not in our favor. However, resisting border surveillance is a long game, and EFF considers it a victory that this tower even got a public debate at all.
4. Aerostats Up in the Air
The Tactical Aerostat System at Santa Teresa Station. Source: Battalion Search and Rescue (CC BY)
CBP seems incapable of developing a coherent strategy when it comes to tactical aerostats—tethered blimps equipped with long-range, high-definition cameras. In 2021, the agency said it wanted to cancel the program, which involved four aerostats in the Rio Grande Valley, before reversing itself. Then in 2022, CBP launched new aerostats in Nogales, Ariz. and Columbus, N.M. and announced plans to launch 17 more within a year.
But by 2023, CBP had left the program out of its proposed budget, saying the aerostats would be decommissioned.
And yet, in fall 2024, CBP launched a new aerostat at the Santa Teresa Border Patrol Station in New Mexico. Our friends at Battalion Search & Rescue gathered photo evidence for us. Soon after, CBP issued a new solicitation for the aerostat program and a member of Congress told Border Report that the aerostats may be upgraded and as many as 12 new ones may be acquired by CBP via the Department of Defense.
Meanwhile, one of CBP’s larger Tethered Aerostats Radar Systems in Eagle Pass, Texas was down for most of the year after deflating in high winds. CBP has reportedly not been interested in paying hundreds of thousands of dollars to get it up again.
5. New Surveillance in Southern Arizona
A Buckeye Camera on a pole along the border fence near Sasabe, Ariz. Source: EFF
Buckeye Cameras are motion-triggered cameras that were originally designed for hunters and ranchers to spot wildlife, but border enforcement authorities—both federal and state/local—realized years ago that they could be used to photograph people crossing the border. These cameras are often camouflaged (e.g. hidden in trees, disguised as garbage, or coated in sand).
Now, CBP is expanding its use of Buckeye Cameras. During a trip to Sasabe, Ariz., we discovered that CBP is now placing Buckeye Cameras in checkpoints, welding them to the border fence, and installing metal poles, wrapped in concertina wire, with Buckeye Cameras at the top.
A surveillance tower along the highway west of Tucson. Source: EFF
On that same trip to Southern Arizona, EFF (along with the Infrastructures of Control geographers) passed through a checkpoint west of Tucson, where previously we had identified a relocatable surveillance tower. But this time it was gone. We wondered why. Our question was answered just a minute or two later, when we spotted a new surveillance tower on a nearby hilltop, a model that we had not previously seen deployed in the wild.
6. Artificial Intelligence
A graphic from a January 2024 “Industry Day” event. Source: Customs and Border Protection
CBP and other agencies regularly hold “Industry Days” to brief contractors on the new technology and capabilities the agency may want to buy in the near future. In January, EFF attended one such “Industry Day” designed to bring tech vendors up-to-speed on the government’s horrific vision of a border secured by artificial intelligence (see the graphic above for an example of that vision).
A graphic from a January 2024 “Industry Day” event. Source: Customs and Border Protection
At this event, CBP released the convoluted flow chart above as part of a slide show. Since it’s so difficult to parse, here’s the best sense we can make of it: when someone crosses the border, it triggers an unattended ground sensor (UGS), and then a camera autonomously detects, identifies, classifies, and tracks the person, handing them off from camera to camera, until the AI system eventually alerts Border Patrol to dispatch someone to intercept them for detention.
7. Congress in Virtual Reality
Rep. Scott Peters on our VR tour of the border. Source: Peters’ Instagram
We search for surveillance on the ground. We search for it in public records. We search for it in satellite imagery. But we’ve also learned we can use virtual reality in combination with Google Streetview not only to investigate surveillance, but also to introduce policymakers to the realities of the policies they pass. This year, we gave Rep. Scott Peters (D-San Diego) and his team a tour of surveillance at the border in VR, highlighting the impact on communities.
“[EFF] reminded me of the importance of considering cost-effectiveness and Americans’ privacy rights,” Peters wrote afterward in a social media post.
We also took members of Rep. Mark Amodei’s (R-Reno) district staff on a similar tour. Other Congressional staffers should contact us at aos@eff.org if you’d like to try it out.
Learn more about how EFF uses VR to research the border in this interview and this lightning talk.
8. Indexing Border Tech Companies
An HDT Global vehicle at the 2024 Border Security Expo. Source: Dugan Meyer (CC0 1.0 Universal)
In partnership with the Heinrich Böll Foundation, EFF and University of Nevada, Reno student journalist Andrew Zuker built a dataset of hundreds of vendors marketing technology to the U.S. Department of Homeland Security. As part of this research, Zuker journeyed to El Paso, Texas for the Border Security Expo, where he systematically gathered information from all the companies promoting their surveillance tools. You can read Zuker’s firsthand report here.
9. Plataforma Centinela Inches Skyward
An Escorpión unit, part of the state of Chihuahua’s Plataforma Centinela project. Source: EFF
In fall 2023, EFF released its report on the Plataforma Centinela, a massive surveillance network being built by the Mexican state of Chihuahua in Ciudad Juarez that will include 10,000+ cameras, face recognition, artificial intelligence, and tablets that police can use to access all this data from the field. At its center is the Torre Centinela, a 20-story headquarters that was supposed to be completed in 2024.
The site of the Torre Centinela in downtown Ciudad Juarez. Source: EFF
We visited Ciudad Juarez in May 2024 and saw that indeed, new cameras had been installed along roadways, and the government had begun using “Escorpión” mobile surveillance units, but the tower was far from being completed. A reporter who visited in November confirmed that not much more progress had been made, although officials claim that the system will be fully operational in 2025.
10. EFF’s Border Surveillance Zine
Do you want to review even more photos of surveillance that can be found at the border, whether they’re planted in the ground, installed by the side of the road, or floating in the air? Download EFF’s new zine in English or Spanish—or if you live or work in the border region, email us at aos@eff.org and we’ll mail you hard copies.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Fighting Automated Oppression: 2024 in Review
EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement, and in 2024, the topic has been more active than ever before, with landlords, employers, regulators, and police adopting new tools that have the potential to impact both personal freedom and access to necessities like medicine and housing.
This year, we wrote detailed reports and comments to US and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to issues of fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train such a system on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology that automates systemic, historical injustice. And because these technologies don’t (and typically can’t) explain their reasoning, challenging their outputs is very difficult.
If you train it on a biased dataset, you are creating a technology to automate systemic, historical injustice.
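As a toy illustration of that dynamic (not any specific vendor’s system), the short Python sketch below builds a synthetic set of “historical decisions” in which one group was approved less often even when equally qualified, then derives an automated approval rule from those records. The rule faithfully reproduces the disparity; all data and names are made up.

    import random

    random.seed(0)

    # Synthetic historical records: (group, qualified, approved).
    # Past decision-makers denied qualified applicants in group "B" half the time.
    def make_record():
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5
        if qualified and group == "A":
            approved = True
        elif qualified and group == "B":
            approved = random.random() < 0.5  # biased historical denials
        else:
            approved = False
        return group, qualified, approved

    history = [make_record() for _ in range(10_000)]

    # A deliberately simple "model": approve future applicants at each group's
    # historical approval rate. Any model fit to these records learns the same skew.
    approval_rate = {
        g: sum(approved for grp, _, approved in history if grp == g)
           / sum(1 for grp, _, _ in history if grp == g)
        for g in ("A", "B")
    }

    print(approval_rate)  # roughly {'A': 0.5, 'B': 0.25}: the automation inherits the bias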
It’s important to note that decision makers tend to defer to ADMs or use them as cover to justify their own biases. And even though they are implemented to change how decisions are made by government officials, the adoption of an ADM is often considered a mere ‘procurement’ decision like buying a new printer, without the kind of public involvement that a rule change would ordinarily entail. This, of course, increases the likelihood that vulnerable members of the public will be harmed and that technologies will be adopted without meaningful vetting. While there may be positive use cases for machine learning to analyze government processes and phenomena in the world, making decisions about people is one of the worst applications of this technology, one that entrenches existing injustice and creates new, hard-to-discover errors that can ruin lives.
Vendors of ADM have been riding a wave of AI hype, and police, border authorities, and spy agencies have gleefully thrown taxpayer money at products that make it harder to hold them accountable while being unproven at offering any other ‘benefit.’ We’ve written about the use of generative AI to write police reports based on the audio from bodycam footage, flagged how national security use of AI is a threat to transparency, and called for an end to AI Use in Immigration Decisions.
The hype around AI and the allure of ADMs has further incentivized the collection of more and more user data.
The private sector is also deploying ADM to make decisions about people’s access to employment, housing, medicine, and more. People have an intuitive understanding of some of the risks this poses, with most Americans expressing discomfort about the use of AI in these contexts. Companies can make a quick buck firing people and demanding the remaining workers figure out how to implement snake-oil ADM tools to make these decisions faster, though it’s becoming increasingly clear that this isn’t delivering the promised productivity gains.
ADM can, however, help a company avoid being caught making discriminatory decisions that violate civil rights laws—one reason why we support mechanisms to prevent unlawful private discrimination using ADM. Finally, the hype around AI and the allure of ADMs has further incentivized the collection and monetization of more and more user data and more invasions of privacy online, part of why we continue to push for a privacy-first approach to many of the harmful applications of these technologies.
In EFF’s podcast episode on AI, we discussed some of the challenges posed by AI and some of the positive applications this technology can have when it’s not used at the expense of people’s human rights, well-being, and the environment. Unless something dramatically changes, though, using AI to make decisions about human beings is unfortunately doing a lot more harm than good.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
State Legislatures Are The Frontline for Tech Policy: 2024 in Review
State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data privacy, police use of data, and artificial intelligence, state lawmakers are rapidly advancing their own ideas into state law. That’s why EFF fights for internet rights not only in Congress, but also in statehouses across the country.
This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.
Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights in California, you can check out our recap of this year’s session.
EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look forward to continuing the fight to push it across the finish line in 2025.
As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues.
States Continue to Experiment
Several states also introduced bills this year that raise similar issues as the federal Kids Online Safety Act, which attempts to address young people’s safety online but instead introduces considerable censorship and privacy concerns.
For example, in California, we were able to stop A.B. 3080, authored by Assemblymember Juan Alanis. We opposed this bill for many reasons, including that it was not clear what counted as “sexually explicit content” under its definition. This vagueness would have set up barriers for youth—particularly LGBTQ+ youth—to access legitimate content online.
We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers filed bills that have this requirement in more than a dozen states. As we said in comments to the New York Attorney General’s office on their recently passed “SAFE for Kids Act,” none of the requirements the state was considering are both privacy-protective and entirely accurate. Age-verification requirements harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.
We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely violate First Amendment rights to free expression. In fact, less than a month after California’s governor signed a deepfake bill into law, a federal judge put its enforcement on pause (via a preliminary injunction) on First Amendment grounds. We encourage lawmakers to explore ways to focus on the harms that deepfakes pose without endangering speech rights.
On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont have advanced bills that significantly improve on state privacy laws we’ve seen before. The Maryland Online Data Privacy Act (MODPA), authored by State Senator Dawn File and Delegate Sara Love (now State Senator Sara Love), contains strong data minimization requirements. Vermont’s privacy bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both houses, it was vetoed by Vermont Gov. Phil Scott. As private rights of action are among our top priorities in privacy laws, we look forward to seeing more bills this year that contain this important enforcement measure.
Looking Ahead to 2025
2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.
As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in 2024.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EFF’s 2023 Annual Report Highlights a Year of Victories: 2024 in Review
Every fall, EFF releases its annual report, and 2023 was the year of Privacy First. Our annual report dives into our groundbreaking whitepaper along with victories in freeing the law, right to repair, and more. It’s a great, easy-to-read summary of the year’s work, and it contains interesting tidbits about the impact we’ve made—for instance, did you know that, as of 2023, 394,000 people had downloaded an episode of EFF’s podcast, “How to Fix the Internet”? Or that EFF had donors in 88 countries?
As you can see in the report, EFF’s role as the oldest, largest, and most trusted digital rights organization became even more important when tech law and policy commanded the public’s attention in 2023. Major headlines pondered the future of internet freedom. Arguments around free speech, digital privacy, AI, and social media dominated Congress, state legislatures, the U.S. Supreme Court, and the European Union.
EFF intervened with logic and leadership to keep bad ideas from getting traction, and we articulated solutions to legitimate concerns with care and nuance in our whitepaper, Privacy First: A Better Way to Protect Against Online Harms. It demonstrated how seemingly disparate concerns are in fact linked to the dominance of tech giants and the surveillance business models used by most of them. We noted how these business models also feed law enforcement’s increasing hunger for our data. We pushed for a comprehensive approach to privacy instead and showed how this would protect us all more effectively than harmful censorship strategies.
The longest running fight we won in 2023 was to free the law: In our legal representation of PublicResource.org, we successfully ensured that copyright law does not block you from finding, reading, and sharing laws, regulations, and building codes online. We also won a major victory in helping to pass a law in California to increase tech users’ ability to control their information. In states across the nation, we helped boost the right to repair. Due to the efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years, as much as 95% of the web is now encrypted. And that’s just barely scratching the surface.
Obviously, we couldn’t do any of this without the support of our members, large and small. Thank you. Take a look at the report for more information about the work we’ve been able to do this year thanks to your help.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Aerial and Drone Surveillance: 2024 in Review
We’ve been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even if you’re within the confines of your backyard, you are exposed to eyes from above.
Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were a technology developed by the military before they were adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our neighbors. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how technology has advanced.
This year, we focused on fighting back against aerial surveillance facilitated by advances in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we argued that cases decided around the time people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts for recognizing that.
Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.
Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as “drone as first responder” programs, which ostensibly allow police to assess a situation (whether it’s dangerous, or requires a police response at all) before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.
Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy concerns, those assurances can be hollow. The NYPD promised that it would not surveil constitutionally protected backyards with drones, but Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.
Alarmingly, police departments and drone manufacturers are increasingly calling for remote-controlled drones to be armed. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a drone armed with a Taser for deployment in school shooting scenarios. We’re likely to see more proposals like this, including drones armed with pepper spray and other crowd-control weapons.
As drones carry ever more sophisticated technological payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional rights to privacy.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review
This was a historic year: a year in which elections took place in countries home to almost half the world’s population, a year of war, and a year of collapse or chaos within several governments. It was also a year of new technologies, policy shifts, and legislative changes. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.
Internet shutdowns
It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—restricted access to the internet at least partially during election periods. These restrictions not only inhibit people from sharing news of what’s happening on the ground, but also impede access to basic services, commerce, and communications.
But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.
Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation carried out by companies at the request of Israel’s Cyber Unit, submitted comment to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (a position the Oversight Board notably agreed with), and submitted comment to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact that restrictions imposed by governments and companies have on expression.
In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many others.
Restrictions on content, age, and identity
Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by requiring ID to get online, thus inhibiting people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK’s Online Safety Act, which EFF has opposed since its first introduction, would also require mandatory age verification and would place penalties on websites and apps that host otherwise-legal content that regulators deem “harmful” to minors. Similarly, in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.
While these governments’ efforts are ostensibly meant to protect children from harm, as we have repeatedly demonstrated, they can also harm young people by preventing them from accessing information that is not taught in schools or otherwise available in their communities.
One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.
Cybercrime
We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.
EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Cars (and Drivers): 2024 Year in Review
If you’ve purchased a car made in the last decade or so, it’s likely jam-packed with enough technology to make your brand new phone jealous. Modern cars have sensors, cameras, GPS for location tracking, and more, all collecting data—and it turns out in many cases, sharing it.
Cars Sure Are Sharing a Lot of Information
While we’ve been keeping an eye on the evolving state of car privacy for years, everything really took off after a New York Times report this past March found that the car maker G.M. was sharing information about drivers’ habits with insurance companies without consent.
It turned out a number of other car companies were doing the same, using deceptive design so people didn’t always realize they were opting into these programs. We walked through how to see for yourself what data your car collects and shares. That said, cars, infotainment systems, and car makers’ apps are so unstandardized that it’s often very difficult for drivers to even research data sharing, let alone opt out of it.
That’s why we were happy to see Senators Ron Wyden and Edward Markey send a letter to the Federal Trade Commission urging it to investigate these practices. The fact is: car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom.
Advocating for Better Bills to Protect Abuse Survivors
The amount of data modern cars collect is a serious privacy concern for all of us. But for people in an abusive relationship, tracking can be a nightmare.
This year, California considered three bills intended to help domestic abuse survivors endangered by vehicle tracking. Of those, we initially liked the approach behind two of them, S.B. 1394 and S.B. 1000. When introduced, both would have served the needs of survivors in a wide range of scenarios without inadvertently creating new avenues of stalking and harassment for the abuser to exploit. They both required car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor had to prove the vehicle was theirs to use, even if their name was not on the loan or title.
But the third bill, A.B. 3139, took a different approach. Rather than have people submit requests first and cut access later, this bill required car manufacturers to terminate access immediately and to require supporting documentation only afterward, up to seven days later. Likewise, S.B. 1394 and S.B. 1000 were amended to adopt this "act first, ask questions later" framework. This approach is helpful for survivors in one scenario: a survivor who has no documentation of their abuse and who needs to get away immediately in a car owned by their abuser. Unfortunately, this approach also opens up many new avenues of stalking, harassment, and abuse for survivors. These bills were ultimately combined into S.B. 1394, which retained some provisions we remain concerned about.
It’s Not Just the Car Itself
Because of everything else that comes with car ownership, a car is just one piece of the mobile privacy puzzle.
This year we fought against A.B. 3138 in California, which proposed adding GPS technology to digital license plates to make them easier to track. The bill passed, unfortunately, but location data privacy continues to be an important issue that we’ll fight for.
We wrote about a bulletin released by the U.S. Cybersecurity and Infrastructure Security Agency about infosec risks in one brand of automated license plate readers (ALPRs). Specifically, the bulletin outlined seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials. The sheer scale of the data at risk is alarming: EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates.
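To give a rough sense of how a total like that is tallied, here is a minimal, purely illustrative Python sketch of summing per-agency scan counts from a spreadsheet like the one linked above. The file name and column names ("Agency", "Detections") are hypothetical placeholders, not the actual structure of the published data.

```python
# A minimal, illustrative sketch (not EFF's actual analysis) of tallying ALPR
# scan volumes per agency from a CSV export. The file name and column names
# ("Agency", "Detections") are hypothetical placeholders.
import csv
from collections import defaultdict

totals = defaultdict(int)
with open("california_alpr_2022.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Assume each row reports one agency's detection count for the year.
        totals[row["Agency"]] += int(row["Detections"])

print(f"{len(totals)} agencies, {sum(totals.values()):,} total scans")
```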
Finally, in order to drive a car, you need a license, and increasingly states are offering digital IDs. We dug deep into California’s mobile ID app, wrote about the various issues with mobile IDs, which range from equity concerns to privacy problems, and put together an FAQ to help you decide whether you’d even benefit from setting up a mobile ID if your state offers one. Digital IDs are a major concern for us in the coming years, both because of the unanswered questions about their privacy and security and because of their potential use for government-mandated age verification on the internet.
The privacy problems of cars are of increasing importance, which is why Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. While we tend to think of data privacy laws as dealing with computers, phones, or IoT devices, they’re just as applicable, and increasingly necessary, for cars, too.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.