EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Craig Newmark Philanthropies – Celebrating 30 Years of Support for Digital Rights

Mon, 01/08/2024 - 7:16pm

EFF has been awarded a new $200,000 grant from Craig Newmark Philanthropies to strengthen our cybersecurity work in 2024. We are especially grateful this year, as it marks 30 years of donations from Craig Newmark, who joined as an EFF member just three years after our founding and four years before he launched the popular website craigslist.  

Over the past several years, grants from Craig Newmark Philanthropies have focused on supporting trustworthy journalism to defend our democracy and hold the powerful accountable, as well as cybersecurity to protect consumers and journalists alike from malware and other dangers online. With this funding, EFF has built networks to help defend against disinformation warfare, fought online harassment, strengthened ethical journalism, and researched state-sponsored malware, cyber-mercenaries, and consumer spyware. EFF’s Threat Lab conducts research on surveillance technologies used to target journalists, communities, activists, and individuals. For example, we helped co-found, and continue to provide leadership to, the Coalition Against Stalkerware. EFF also created and updated tools to educate and train working and student journalists alike to keep themselves safe from adversarial attacks. In addition to maintaining our popular Surveillance Self-Defense guide, we scaled up our Report Back tool for student journalists, cybersecurity students, and grassroots volunteers to collaboratively study technology in society.

In 2006, EFF recognized craigslist for cultivating a pervasive culture of trust and maintaining its public service charge even as it became one of the most popular websites in the world. Though Craig has retired from craigslist, this ethos continues through his philanthropic giving, which is “focused on a commitment to fairness and doing right by others.” EFF thanks Craig Newmark for his 30 years of financial support, which has helped us grow to become the leading nonprofit defending digital privacy, free speech, and innovation today. 

EFF Urges Pennsylvania Supreme Court to Find Keyword Search Warrant Unconstitutional

Fri, 01/05/2024 - 2:21pm
These Dragnet Searches Violate the Privacy of Millions of Americans

SAN FRANCISCO—Keyword warrants that let police indiscriminately sift through search engine databases are unconstitutional dragnets that target free speech, lack particularity and probable cause, and violate the privacy of countless innocent people, the Electronic Frontier Foundation (EFF) and other organizations argued in a brief filed today to the Supreme Court of Pennsylvania. 

Everyone deserves to search online without police looking over their shoulder, yet millions of innocent Americans’ privacy rights are at risk in Commonwealth v. Kurtz—only the second case of its kind to reach a state’s highest court. The brief filed by EFF, the National Association of Criminal Defense Lawyers (NACDL), and the Pennsylvania Association of Criminal Defense Lawyers (PACDL) challenges the constitutionality of a keyword search warrant issued by the police to Google. The case involves a massive invasion of Google users’ privacy, and unless the lower court’s ruling is overturned, it could be applied to any user of any search engine.

“Keyword search warrants are totally incompatible with constitutional protections for privacy and freedom of speech and expression,” said EFF Surveillance Litigation Director Andrew Crocker. “All keyword warrants—which target our speech when we seek information on a search engine—have the potential to implicate innocent people who just happen to be searching for something an officer believes is somehow linked to a crime. Dragnet warrants that target speech simply have no place in a democracy.” 

Users have come to rely on search engines to routinely seek answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant. Google keeps detailed information on every search query it receives, however, resulting in a vast record of users’ most private and personal thoughts, opinions, and associations that police seek to access by merely demanding the identities of all users who searched for specific keywords. 

Because this data is so broad and detailed, keyword search warrants are especially concerning: Unlike typical warrants for electronic information, these do not target specific people or accounts. Instead, they require a provider to search its entire reserve of user data to identify any and all users or devices who searched for words or phrases specified by police. As in this case, the police generally have no identified suspects when they seek such a warrant; instead, the sole basis is the officer’s hunch that the perpetrator might have searched for something related to the crime.  

This violates the Pennsylvania Constitution’s Article I, Section 8 and the Fourth Amendment to the U.S. Constitution, EFF’s brief argued, both of which were inspired by 18th-century writs of assistance—general warrants that let police conduct exploratory rummaging through a person’s belongings. These keyword search warrants also are especially harmful because they target protected speech and the related right to receive information, the brief argued. 

"Keyword search warrants are digital dragnets giving the government permission to rummage through our most private information, and the Pennsylvania Supreme Court should find them unconstitutional,” said NACDL Fourth Amendment Center Litigation Director Michael Price. 

“Search engines are an indispensable tool for finding information on the Internet, and the ability to use them—and use them anonymously—is critical to a free society,” said Crocker. “If providers can be forced to disclose users’ search queries in response to a dragnet warrant, it will chill users from seeking out information about anything that police officers might conceivably choose as a searchable keyword.” 

For the brief: https://www.eff.org/document/commonwealth-v-kurtz-amicus-brief-pennsylvania-supreme-court-1-5-2024

For a similar case in Colorado: https://www.eff.org/deeplinks/2023/10/colorado-supreme-court-upholds-keyword-search-warrant 

Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org

AI Watermarking Won't Curb Disinformation

Fri, 01/05/2024 - 1:46pm

Generative AI allows people to produce piles upon piles of images and words very quickly. It would be nice if there were some way to reliably distinguish AI-generated content from human-generated content. It would help people avoid endlessly arguing with bots online, or believing what a fake image purports to show. One common proposal is that big companies should incorporate watermarks into the outputs of their AIs. For instance, this could involve taking an image and subtly changing many pixels in a way that’s undetectable to the eye but detectable to a computer program. Or it could involve swapping words for synonyms in a predictable way so that the meaning is unchanged, but a program could readily determine the text was generated by an AI.

Unfortunately, watermarking schemes are unlikely to work. So far most have proven easy to remove, and it’s likely that future schemes will have similar problems.

One kind of watermark is already common for digital images. Stock image sites often overlay text on an image that renders it mostly useless for publication. This kind of watermark is visible and is slightly challenging to remove since it requires some photo editing skills.

[Image: a stock photo of an Anemone occidentalis with an overlaid text watermark]

Images can also have metadata attached by a camera or image processing program, including information like the date, time, and location a photograph was taken, the camera settings, or the creator of an image. This metadata is unobtrusive but can be readily viewed with common programs. It’s also easily removed from a file. For instance, social media sites often automatically remove metadata when people upload images, both to prevent people from accidentally revealing their location and simply to save storage space.
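To make this concrete, here is a minimal sketch, assuming the Pillow imaging library, of how that metadata can be read and how simply re-saving only the pixels discards it. The file names are placeholders.

```python
# Minimal sketch (assumes Pillow is installed; file names are hypothetical).
from PIL import Image, ExifTags

def show_metadata(path):
    img = Image.open(path)
    exif = img.getexif()  # EXIF metadata embedded by the camera or editing software
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")

def strip_metadata(path, out_path):
    img = Image.open(path)
    # Copy only the pixel values into a fresh image; EXIF, GPS, and other
    # metadata are simply never written to the new file.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(out_path)

show_metadata("photo.jpg")
strip_metadata("photo.jpg", "photo-no-metadata.jpg")
```

This is essentially what social media sites do automatically on upload.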

A useful watermark for AI images would need two properties: 

  • It would need to continue to be detectable after an image is cropped, rotated, or edited in various ways (robustness). 
  • It couldn’t be conspicuous like the watermark on stock image samples, because the resulting images wouldn’t be of much use to anybody.

One simple technique is to manipulate the least perceptible bits of an image. For instance, to a human viewer these two squares are the same shade:

[Image: two green squares that appear identical to the human eye]

But to a computer it’s obvious that they are different by a single bit: #93c47d vs. #93c57d. Each pixel of an image is represented by a certain number of bits, and some of them make more of a perceptual difference than others. By manipulating those least-important bits, a watermarking program can create a pattern that viewers won’t see, but a watermarking-detecting program will. If that pattern repeats across the whole image, the watermark is even robust to cropping. However, this method has one clear flaw: rotating or resizing the image is likely to accidentally destroy the watermark.
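As a toy illustration of the idea (not any vendor’s actual scheme), the sketch below uses NumPy to tile a short bit pattern into the lowest bit of the green channel and then checks for it. The pattern and test image are invented for the example.

```python
import numpy as np

def embed_watermark(pixels, pattern):
    """Write a repeating bit pattern into the lowest bit of the green channel.

    pixels: an H x W x 3 uint8 array; pattern: a 1-D array of 0s and 1s.
    """
    marked = pixels.copy()
    h, w, _ = marked.shape
    bits = np.resize(pattern, h * w).reshape(h, w).astype(np.uint8)
    # Clear the least significant bit, then write the pattern bit into it.
    marked[:, :, 1] = (marked[:, :, 1] & 0xFE) | bits
    return marked

def watermark_score(pixels, pattern):
    """Fraction of pixels whose lowest green bit matches the expected pattern."""
    h, w, _ = pixels.shape
    bits = np.resize(pattern, h * w).reshape(h, w)
    return float(((pixels[:, :, 1] & 1) == bits).mean())

pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0])
original = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_watermark(original, pattern)
print(watermark_score(marked, pattern))    # ~1.0: watermark present
print(watermark_score(original, pattern))  # ~0.5: indistinguishable from chance
```

Because the same short pattern repeats everywhere, a cropped region still scores near 1.0, but rotating or rescaling reshuffles which pixel carries which bit and the score collapses back toward 0.5.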

There are more sophisticated watermarking proposals that are robust to a wider variety of common edits. However, proposals for AI watermarking must pass a tougher challenge. They must be robust against someone who knows about the watermark and wants to eliminate it. The person who wants to remove a watermark isn’t limited to common edits, but can directly manipulate the image file. For instance, if a watermark is encoded in the least important bits of an image, someone could remove it by simply setting all the least important bits to 0, or to a random value (1 or 0), or to a value automatically predicted based on neighboring pixels. Just like adding a watermark, removing a watermark this way gives an image that looks basically identical to the original, at least to a human eye.
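Continuing the same toy sketch, wiping such a mark is trivial: overwrite every least significant bit with zeros or coin flips and the detector above drops back to chance, while the image itself looks unchanged.

```python
import numpy as np

def strip_lsb_watermark(pixels, randomize=True):
    """Destroy any mark living in the lowest bit of the green channel."""
    stripped = pixels.copy()
    if randomize:
        noise = np.random.randint(0, 2, pixels.shape[:2], dtype=np.uint8)
        stripped[:, :, 1] = (stripped[:, :, 1] & 0xFE) | noise  # random bits
    else:
        stripped[:, :, 1] = stripped[:, :, 1] & 0xFE             # all zeros
    return stripped
```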

Coming at the problem from the opposite direction, some companies are working on ways to prove that an image came from a camera (“content authenticity”). Rather than marking AI generated images, they add metadata to camera-generated images, and use cryptographic signatures to prove the metadata is genuine. This approach is more workable than watermarking AI generated images, since there’s no incentive to remove the mark. In fact, there’s the opposite incentive: publishers would want to keep this metadata around because it helps establish that their images are “real.” But it’s still a fiendishly complicated scheme, since the chain of verifiability has to be preserved through all software used to edit photos. And most cameras will never produce this metadata, meaning that its absence can’t be used to prove a photograph is fake.
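A bare-bones sketch of the signing idea follows, assuming the `cryptography` package’s Ed25519 primitives; real content-authenticity schemes are far more elaborate and must cover the whole editing chain.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # would live inside the camera
public_key = camera_key.public_key()        # published by the manufacturer

image_bytes = b"...raw image data..."       # stand-in for the captured pixels
metadata = json.dumps({"taken": "2024-01-05T13:46:00Z", "device": "ExampleCam"})
payload = image_bytes + metadata.encode()

signature = camera_key.sign(payload)        # shipped alongside the image file

try:
    public_key.verify(signature, payload)   # anyone can run this check
    print("pixels and metadata are unmodified since capture")
except InvalidSignature:
    print("the image or its metadata was altered")
```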

Comparing watermarking vs content authenticity, watermarking aims to identify or mark (some) fake images; content authenticity aims to identify or mark (some) real images. Neither approach is comprehensive, since most of the images on the Internet will have neither a watermark nor content authenticity metadata.

                         Watermarking   Content authenticity
  AI images              Marked         Unmarked
  (Some) camera images   Unmarked       Marked
  Everything else        Unmarked       Unmarked

Text-based Watermarks

The watermarking problem is even harder for text-based generative AI. Similar techniques can be devised. For instance, an AI could boost the probability of certain words, giving itself a subtle textual style that would go unnoticed most of the time, but could be recognized by a program with access to the list of words. This would effectively be a computer version of determining the authorship of the twelve disputed essays in The Federalist Papers by analyzing Madison’s and Hamilton’s habitual word choices.
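A heavily simplified sketch of that word-choice idea: if a generator quietly favors words from a secret list, a detector holding the same list can measure how over-represented they are. The word list, baseline rate, and threshold below are invented for illustration.

```python
import re

MARKED_WORDS = {"indeed", "notably", "moreover", "utilize", "whilst"}

def marked_fraction(text):
    """Share of words in the text that come from the secret marked list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in MARKED_WORDS for w in words) / len(words) if words else 0.0

def looks_watermarked(text, baseline=0.01, factor=3.0):
    # Flag text whose marked-word rate is several times the normal baseline.
    return marked_fraction(text) > baseline * factor
```

As the next paragraphs note, a few rounds of paraphrasing shift those frequencies right back toward the baseline.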

But creating an indelible textual watermark is a much harder task than telling Hamilton from Madison, since the watermark must be robust to someone modifying the text trying to remove it. Any watermark based on word choice is likely to be defeated by some amount of rewording. That rewording could even be performed by an alternate AI, perhaps one that is less sophisticated than the one that generated the original text, but not subject to a watermarking requirement.

There’s also a problem of whether the tools to detect watermarked text are publicly available or are secret. Making detection tools publicly available gives an advantage to those who want to remove watermarking, because they can repeatedly edit their text or image until the detection tool gives an all clear. But keeping them a secret makes them dramatically less useful, because every detection request must be sent to whatever company produced the watermarking. That would potentially require people to share private communication if they wanted to check for a watermark. And it would hinder attempts by social media companies to automatically label AI-generated content at scale, since they’d have to run every post past the big AI companies.

Since text output from current AIs isn’t watermarked, services like GPTZero and TurnItIn have popped up, claiming to be able to detect AI-generated content anyhow. These detection tools are so inaccurate as to be dangerous, and have already led to false charges of plagiarism.

Lastly, if AI watermarking is to prevent disinformation campaigns sponsored by states, it’s important to keep in mind that those states can readily develop modern generative AI, and probably will in the near future. A state-sponsored disinformation campaign is unlikely to be so polite as to watermark its output.

Watermarking of AI generated content is an easy-sounding fix for the thorny problem of disinformation. And watermarks may be useful in understanding reshared content where there is no deceptive intent. But research into adversarial watermarking for AI is just beginning, and while there’s no strong reason to believe it will succeed, there are some good reasons to believe it will ultimately fail.

EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers’ Privacy and Free Expression

Thu, 01/04/2024 - 1:41pm

As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone’s viewing history, EFF argued in court last month.

For decades, the Video Privacy Protection Act (VPPA) has safeguarded people’s viewing habits by generally requiring services that offer videos to the public to get their customers’ written consent before disclosing that information to the government or a private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users’ viewing habits, too.

The VPPA, however, is under attack by Patreon. That service for content creators and viewers is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook.

Patreon argues that even if it did violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law’s applications violate the First Amendment, judged in relation to the law’s plainly legitimate sweep.  Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of the Communications Decency Act’s (CDA) online speech restrictions because it placed an “unacceptably heavy burden on protected speech.”

EFF is second to none in fighting for everyone’s First Amendment rights in court, including internet users (in Reno mentioned above) and the companies that host our speech online. But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of internet users who benefit from the VPPA’s protections.

As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon’s argument is wrong because the VPPA directly advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance.

“The VPPA provides Americans with critical, private space to view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion,” we wrote. “That breathing room is often a catalyst for people’s free expression.”

As the brief recounts, courts have protected against government efforts to learn people’s book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people’s ability to privately consume media advances the First Amendment’s purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people’s video viewing habits are intensely private, because the data can reveal intimate details about our personalities, politics, religious beliefs, and values.

Patreon’s First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, “[t]he VPPA’s purpose, application, and enforcement is overwhelmingly focused on regulating the disclosure of a person’s video viewing history in the course of a commercial transaction between the provider and user.” In other words, the legitimate sweep of the VPPA does not violate the First Amendment because generally there is no public interest in disclosing any one person’s video viewing habits that a company learns purely because it is in the business of selling video access to the public.

There is a better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF’s brief explains, should a video provider face liability under the VPPA for disclosing a customer’s video viewing history, they can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern.

Indeed, courts have recognized that certain applications of privacy laws, such as the Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But generally courts address the First Amendment by invalidating the case-specific application of those laws, rather than invalidating them entirely.

“In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course,” EFF wrote. “This approach accommodates the broad and legitimate sweep of those privacy protections while vindicating speakers’ First Amendment rights.”

Patreon's argument would see the VPPA gutted—an enormous loss for privacy and free expression for the public. The court should protect against the disclosure of everyone’s viewing history and protect the VPPA.

You can read our brief here.

Victory! Police Drone Footage is Not Categorically Exempt From California’s Public Records Law

Wed, 01/03/2024 - 1:20pm

Video footage captured by police drones sent in response to 911 calls cannot be kept entirely secret from the public, a California appellate court ruled last week.

The decision by the California Court of Appeal for the Fourth District came after a journalist sought access to videos created by Chula Vista Police Department’s “Drones as First Responders” (DFR) program. The police department is the first law enforcement agency in the country to use drones to respond to emergency calls, and several other agencies across the U.S. have since adopted similar models.

After the journalist, Arturo Castañares of La Prensa, sued, the trial court ruled that Chula Vista police could withhold all footage because the videos were exempt from disclosure as law enforcement investigatory records under the California Public Records Act. Castañares appealed.

EFF, along with the First Amendment Coalition and the Reporters Committee for Freedom of the Press, filed a friend-of-the-court brief in support of Castañares, arguing that categorically excluding all drone footage from public disclosure could have troubling consequences on the public’s ability to understand and oversee the police drone program.

Drones, also called unmanned aerial vehicles (UAVs) or unmanned aerial systems (UAS), are relatively inexpensive devices that police use to remotely surveil areas. Historically, law enforcement agencies have used small systems, such as quadrotors, for situational awareness during emergencies, for capturing crime scene footage, or for monitoring public gatherings, such as parades and protests. DFR programs represent a fundamental change in strategy, with police responding to a much, much larger number of situations with drones, resulting in pervasive, if not persistent, surveillance of communities.

Because drones raise distinct privacy and free expression concerns, foreclosing public access to their footage would make it difficult to assess whether police are following their own rules about when and whether they record sensitive places, such as people’s homes or public protests.

The appellate court agreed that drone footage is not categorically exempt from public disclosure. In reversing the trial court’s decision, the California Court of Appeal ruled that although some 911 calls are likely part of a law enforcement investigation, or at least are used to determine whether a crime occurred, not all 911 calls involve crimes.

“For example, a 911 call about a mountain lion roaming a neighborhood, a water leak, or a stranded motorist on the freeway could warrant the use of a drone but do not suggest a crime might have been committed or is in the process of being committed,” the court wrote.

Because it’s possible that some of Chula Vista’s drone footage involves scenarios in which no crime is committed or suspected, the police department cannot categorically withhold every moment of video footage from the public.

The appellate court sent the case back to the trial court and ordered it and the police department to take a more nuanced approach to determine whether the underlying call for service was a crime or was an initial investigation into a potential crime.

“The drone video footage should not be treated as a monolith, but rather, it can be divided into separate parts corresponding to each specific call,” the court wrote. “Then each distinct video can be evaluated under the CPRA in relation to the call triggering the drone dispatch.”

This victory sends a message to other agencies in California adopting copycat programs, such as the Beverly Hills Police Department, Irvine Police Department, and Fremont Police Department, that they can’t abuse public records laws to shield every second of drone footage from public scrutiny.

Digital Rights for LGBTQ+ People: 2023 Year in Review

Mon, 01/01/2024 - 8:16am

An increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and offline across the globe. Throughout 2023, several countries sought to pass explicitly anti-LGBTQ+ initiatives restricting freedom of expression and privacy. This fuels offline intolerance against LGBTQ+ people, and forces them to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One growing threat to LGBTQ+ people is data surveillance. Across the U.S., a growing number of states prohibited transgender youths from obtaining gender-affirming health care, and some restricted access for transgender adults. For example, the Texas Attorney General is investigating a hospital for providing gender-affirming health care to transgender youths. We can expect anti-trans investigators to use the tactics of anti-abortion investigators, including the seizure of internet browsing and private messaging data.

It is imperative that businesses are prevented from collecting and retaining this data in the first place, so that it cannot later be seized by police and used as evidence. Legislators should start with Rep. Jacobs’ My Body, My Data bill. We also need new laws to ban reverse warrants, which police can use to identify every person who searched for the keywords “how do I get gender-affirming care,” or who was physically located near a trans health clinic. 

Moreover, LGBTQ+ expression was targeted by U.S. student monitoring tools like GoGuardian, Gaggle, and Bark. The tools scan web pages and documents in students’ cloud drives for keywords about topics like sex and drugs, which are subsequently blocked or flagged for review by school administrators. Numerous reports show regular flagging of LGBTQ+ content. This creates a harmful atmosphere for students; for example, some have been outed because of it. In a positive move, Gaggle recently removed LGBTQ+ terms from its keyword list, and GoGuardian has done the same. But LGBTQ+ resources are still commonly flagged for containing words like "sex," "breasts," or "vagina." Student monitoring tools must remove all terms from their blocking and flagging lists that trigger scrutiny and erasure of sexual and gender identity.

Looking outside the U.S., LGBTQ+ rights were gravely threatened by expansive cybercrime and surveillance legislation in the Middle East and North Africa throughout 2023. For example, the Cybercrime Law of 2023 in Jordan, introduced as part of King Abdullah II’s modernization reforms, will negatively impact LGBTQ+ people by restricting encryption and anonymity in digital communications, and criminalizing free speech through overly broad and vaguely defined terms. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. 

For many countries across Africa, and indeed the world, anti-LGBTQ+ discourses and laws can be traced back to colonial rule. These laws have been used to imprison, harass, and intimidate LGBTQ+ individuals. In May 2023, Ugandan President Yoweri Museveni signed into law the extremely harsh Anti-Homosexuality Act 2023. It imposes, for example, a 20-year sentence for the vaguely worded offense of “promoting” homosexuality. Such laws are not only an assault on the rights of LGBTQ+ people to exist, but also a grave threat to freedom of expression. They lead to more censorship and surveillance of online LGBTQ+ speech, and that surveillance in turn drives more self-censorship.

Ghana’s draft Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill 2021 goes much further. It threatens anyone who publicly identifies as LGBTQ+ or as “any sexual or gender identity that is contrary to the binary categories of male and female” with up to five years in jail. The bill assigns criminal penalties for speech posted online, and threatens online platforms—specifically naming Twitter, Facebook, and Instagram—with criminal penalties if they do not restrict pro-LGBTQ+ content. If passed, the bill would also let Ghanaian authorities probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech, or create lists of pro-LGBTQ+ supporters to be arrested upon entry. EFF this year joined other human rights groups to oppose this law.

Taking inspiration from Uganda and Ghana, a new proposed law in Kenya—the Family Protection Bill 2023—would impose ten years imprisonment for homosexuality, and life imprisonment for “aggravated homosexuality.” The bill also allows for the expulsion of refugees and asylum seekers who breach the law, irrespective of whether the conduct is connected with asylum requests. Kenya today is the sole country in East Africa to accept LGBTQ+ individuals seeking refuge and asylum without questioning their sexual orientation; sadly, that may change. EFF has called on the authorities in Kenya and Ghana to reject their respective repulsive bills, and for authorities in Uganda to repeal the Anti-Homosexuality Act.

2023 was a challenging year for the digital rights of LGBTQ+ people. But we are optimistic that in the year to come, LGBTQ+ people and their allies, working together online and off, will make strides against censorship, surveillance, and discrimination.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Year In Review: Google’s Corporate Paternalism in The Browser

Mon, 01/01/2024 - 8:15am

It’s been a big year for the oozing creep of corporate paternalism and ad-tracking technology online. Google and its subsidiary companies have tightened their grip on the throat of internet innovation, all while employing the now-familiar tactic of marketing these changes as beneficial for users. Here we’ll review the most significant changes this year, all of which emphasize the point that browser privacy tools (like Privacy Badger) are more important than ever.

Manifest V2 to Manifest V3: Final Death of Legacy Chrome Extensions

Chrome, the most popular web browser by all measurements, recently announced the official death date for Manifest V2, hastening the reign of its janky successor, Manifest V3. We've been complaining about this since the start, but here's the gist: the finer details of MV3 have gotten somewhat better over time (namely, it won't completely break all privacy extensions). However, what security benefits it offers are bought by limiting what all extensions can do. Chrome could instead invest in a more robust extension review process, which would protect both innovation and security, but it’s clear that the true intention of this change lies elsewhere. Put bluntly: Chrome, a browser built by an advertising company, has positioned itself as the gatekeeper for in-browser privacy tools and the sole arbiter of how they should be designed. Considering that Google’s trackers are present on at least 85% of the top 50,000 websites, contributing to an overall profit of approximately 225 billion dollars in 2022, this is an unsurprising, yet still disappointing, decision.

For what it's worth, Apple's Safari browser imposes similar restrictions to allegedly protect Safari users from malicious extensions. While it’s important to protect users from said malicious extensions, it’s equally important to honor their privacy.

Topics API

This year also saw the rollout of Google's planned "Privacy Sandbox" project, which uses a lot of mealy-mouthed marketing to justify its questionable characteristics. While it will finally get rid of third-party cookies, an honestly good move, it replaces that form of tracking with another called the "Topics API." At best, this reduces the number of parties that are able to track a user through the Chrome browser (though we aren’t the only privacy experts casting doubt on its so-called benefits). But it limits tracking so that it is done only by a single powerful party, Chrome itself, which then gets to dole out what it learns to advertisers willing to pay. This is just another step in transforming the browser from a user agent into an advertising agent.

Privacy Badger now disables the Topics API by default.

YouTube Blocking Access for Users With Ad-Blockers

Most recently, people with ad-blockers began to see a petulant message from YouTube when trying to watch a video. The blocking message gave users a countdown until they would no longer be able to use the site unless they disabled their ad-blockers. Privacy and security benefits be damned. YouTube, a Google-owned company that saw its own all-time high in third-quarter advertising revenue (a meager 8 billion dollars), didn’t even bother with an equivocal announcement laden with deceptive language for this one. If you’re on Chrome or a Chromium-based browser, expect YouTube to be broken unless you turn off your ad-blocker.

Privacy Tools > Corporate Paternalism

Obviously, this all sucks. User security shouldn’t be bought by forfeiting privacy; in reality, one is deeply imbricated with the other. All this bad decision-making drives home how important privacy tools are. Privacy Badger is one of many. It’s not just that Privacy Badger is built to protect disempowered users, or that it's a plug-and-play tool working quietly (but ferociously) behind the scenes to halt the tracking industry, but that it exists in an ecosystem of other like-minded privacy projects that complement each other. Where one tool might miss, another homes in.

This year, Privacy Badger has unveiled exciting support projects and new features:

  • Badger Swarm revolutionized Privacy Badger’s learning capabilities
  • Google's link-tracking system is now thwarted in a recent update
  • Privacy Badger’s widget replacement mechanism has seen some major improvements

Until we have comprehensive privacy protections in place, until corporate tech stops abusing our desires to not be snooped on, privacy tools must be empowered to make up for these harms. Users deserve the right to choose what privacy means to them, not have that decision made by an advertising company like Google.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

How To Fight Bad Patents: 2023 Year In Review

Sun, 12/31/2023 - 9:14am

At EFF, we believe that all the rights we have in the offline world–to speak freely, create culture, play games, build things and do business–must hold up in the digital world, as well. 

EFF’s longstanding project of fighting for a more balanced, just patent system has always borne free expression in mind. And patent trolls, who simply use intellectual property (IP) rights to extract money from others, continue to be a barrier to people who want to freely innovate, or even just use technology. 

Defending IPR 

The inter partes review (IPR) process that Congress created about a decade ago is far from perfect, and we’ve supported a few ideas that would make it stronger. But overall, IPR has been a big step forward for limiting the damage of wrongly granted patents. Thousands of patent claims have been canceled through this process, which uses specialized administrative judges and is considerably faster and less expensive than federal courts. 

And IPR does no harm to legitimate patent holders. In fact, it only affects a tiny proportion of patents at all. In fiscal year 2023, there were 392 patents that were partially invalidated, and 133 patents that were fully invalidated. That’s out of a universe of an estimated 3.8 million “live” patents, according to the U.S. Patent and Trademark Office’s (USPTO) own data. 

Patent examiners have less than 20 hours, on average, to go through the entire review process for a particular patent application. The process ends with the patent applicant getting a limited monopoly from the government–a monopoly right that’s now given out more than 300,000 times per year. It only makes sense to have some type of post-grant review system to challenge the worst patents at the patent office. 

Despite this, patent trolls and other large, aggressive patent holders are determined to roll back the IPR process. This year, they lobbied the USPTO to begin a process that would allow wrongheaded rule changes that would severely threaten access to the IPR process. 

EFF, allied organizations, and tens of thousands of individuals wrote to the U.S. Patent Office opposing the proposed rules, and insisting that patent challenges should remain open to the public. 

We’re also opposing an even more extreme set of rule changes to IPR that has been unfortunately put forward by some key Senators. The PREVAIL Act would sharply limit IPR to only the immediately affected parties, and bar groups like EFF from accessing IPR at all. (A crowdfunded IPR process is how we shut down the dangerous “podcasting” patent.) 

Defending Alice

The Supreme Court’s 2014 decision in Alice v. CLS Bank barred patents that were nothing more than abstract ideas with computer jargon added in. Using the Alice test, federal courts have kicked out a rogue’s gallery of hundreds of the worst patents, including patents claiming “matchmaking,” online picture menus, scavenger hunts, and online photo contests.

Dozens of individuals and small businesses have been saved by the Alice precedent, which has done a decent job of stopping the worst computer patents from surviving–at least when a defendant can afford to litigate the case. 

Unfortunately, certain trade groups keep pushing to roll back the Alice framework. For the second year in a row, we saw the introduction of a bill called the Patent Eligibility Restoration Act (PERA). This proposal would not only reverse course on the Alice rule, but would also authorize the patenting of human genes, which currently cannot be patented thanks to another Supreme Court case, AMP v. Myriad. It would “restore” the absolute worst patents on computer technology and on human genes.

We also called out the U.S. Solicitor General when that office wrote a shocking brief siding with a patent troll, suggesting that the Supreme Court re-visit Alice. 

The Alice precedent protects everyday internet users. We opposed the Solicitor General when she came out against users, and we’ll continue to strongly oppose PERA.

Until our patent laws get the kind of wholesale change we have advocated for, profiteers and scam artists will continue to claim they “own” various types of basic internet use. That myth is wrong, it hurts innovation, and it hurts free speech. With your help, EFF remains a bulwark against this type of patent abuse.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Taking Back the Web with Decentralization: 2023 in Review

Sun, 12/31/2023 - 9:12am

When a system becomes too tightly-controlled and centralized, the people being squeezed tend to push back to reclaim their lost autonomy. The internet is no exception. While the internet began as a loose affiliation of universities and government bodies, that emergent digital commons has been increasingly privatized and consolidated into a handful of walled gardens. Their names are too often made synonymous with the internet, as they fight for the data and eyeballs of their users.

In the past few years, there's been an accelerating swing back toward decentralization. Users are fed up with the concentration of power and the prevalence of privacy and free expression violations, and many are fleeing to smaller, independently operated projects.

This momentum wasn’t only seen in the growth of new social media projects. Other exciting projects have emerged this year, and public policy is adapting.  

Major gains for the Federated Social Web

After Elon Musk acquired Twitter (now X) at the end of 2022, many people moved to various corners of the “IndieWeb” at an unprecedented rate. It turns out those were just the cracks before the dam burst this year. 2023 was defined as much by the ascent of federated microblogging as it was by the descent of X as a platform. These users didn't just want a drop-in replacement for Twitter; they wanted to break the major social media platform model for good by forcing hosts to compete on service and respect.

This momentum at the start of the year was principally seen in the fediverse, with Mastodon. This software project filled the microblogging niche for users leaving Twitter, while conveniently being one of the most mature projects using the ActivityPub protocol, the basic building block at the heart of interoperability in the many fediverse services.

Filling a similar niche, but built on the privately developed Authenticated Transfer (AT) Protocol, Bluesky also saw rapid growth despite remaining invite-only and not-yet being open to interoperating until next year. Projects like Bridgy Fed are already working to connect Bluesky to the broader federated ecosystem, and show some promise of a future where we don’t have to choose between using the tools and sites we prefer and connecting to friends, family, and many others. 

The other major development in the fediverse came from a seemingly unlikely source—Meta. Meta owns Facebook and Instagram, which have gone to great lengths to control user data—even invoking privacy-washing claims to maintain their walled gardens. So Meta’s launch of Threads in July, a new microblogging site using the fediverse’s ActivityPub protocol, was surprising. After an initial break-out success, thanks to bringing Instagram users into the new service, Threads is already many times larger than the fediverse and Bluesky combined. While such a large site could mean federated microblogging joins federated direct messages (email) in the mainstream, Threads has not yet begun interoperating, and it may create a rift among hosts and users wary of Meta’s poor track record on user privacy and content moderation.

We also saw the federation of social news aggregation. In June, Reddit outraged its moderators and third-party developers by updating its API pricing policy to become less interoperable. This outrage boiled over into a major platform-wide blackout protesting the changes and the unfair treatment of the unpaid, passionate volunteers who make the site worthwhile. Again, users turned to the maturing fediverse as a decentralized refuge, specifically to Mastodon's more Reddit-like cousins, Lemmy and Kbin. Reddit, echoing Twitter once again, also came under fire for briefly banning users and subreddits related to these fediverse alternatives. While the protests continued well beyond their initial scope and remained in the public eye, order was eventually restored. However, the formerly fringe alternatives in the fediverse continue to be active and improving.

Finally, while these projects made great strides in gaining adoption and improving usability, many remain generally small and under-resourced. For the decentralized social web to succeed, it must be sustainable and maintain high standards for how users are treated and safeguarded. These indie hosts face similar liability risks and governmental threats as the billion dollar companies. In a harrowing example we saw this year, an FBI raid on a Mastodon server admin for unrelated reasons resulted in the seizure of an unencrypted server database. It’s a situation that echoes EFF’s founding case over 30 years ago, Steve Jackson Games v. Secret Service, and it underlines the need for small hosts to be prepared to guard against government overreach.

With so much momentum towards better tools and a wider adoption of better standards, we remain optimistic about the future of these federated projects.

Innovative Peer-to-Peer Apps

This year has also seen continued work on components of the web that live further down the stack, in the form of protocols and libraries that most people never interact with but which enable the decentralized services that users rely on every day. The ActivityPub protocol, for example, describes how all the servers that make up the fediverse communicate with each other. ActivityPub opened up a world of federated decentralized social media—but progress isn't stopping there.

Some of our friends are hard at work figuring out what comes next. The Veilid project was officially released in August, at DEFCON, and the Spritely project has been throwing out impressive news and releases all year long. Both projects promise to revolutionize how we can exchange data directly from person to person, securely and privately, and without needing intermediaries. As we wrote, we’re looking forward to seeing where they lead us in the coming year.

The European Union’s Digital Markets Act went into effect in May of 2023, and one of its provisions requires that messaging platforms greater than a certain size must interoperate with other competitors. While each service with obligations under the DMA could offer its own bespoke API to satisfy the law’s requirements, the better result for both competition and users would be the creation of a common protocol for cross-platform messaging that is open, relatively easy to implement, and, crucially, maintains end-to-end encryption for the protection of end users. Fortunately, the More Instant Messaging Interoperability (MIMI) working group at the Internet Engineering Task Force (IETF) has taken up that exact challenge. We’ve been keeping tabs on the group and are optimistic about the possibility of open interoperability that promotes competition and decentralization while protecting privacy.

EFF on DWeb Policy

DWeb Camp 2023

The “star-studded gala” (such as it is) of the decentralized web, DWeb Camp, took place this year among the redwoods of Northern California over a weekend in late June. EFF participated in a number of panels focused on the policy implications of decentralization, how to influence policy makers, and the future direction of the decentralized web movement. The opportunity to connect with others working on both policy and engineering was invaluable, as were the contributions from those living outside the US and Europe.  

Blockchain Testimony

Blockchains have been the focus of plenty of legislators and regulators in the past handful of years, but most of the focus has been on the financial uses and implications of the tool. EFF had a welcome opportunity to direct attention toward the less-often discussed other potential uses of blockchains when we were invited to testify before the United States House Energy and Commerce Committee Subcommittee on Innovation, Data, and Commerce. The hearing focused specifically on non-financial uses of blockchains, and our testimony attempted to cut through the hype to help members of Congress understand what it is and how and when it can be helpful while being clear about its potential downsides. 

The overarching message of our testimony was that blockchain at the end of the day is just a tool and, just as with other tools, Congress should refrain from regulating it specifically because of what it is. The other important point we made was that the individuals that contribute open source code to blockchain projects should not, absent some other factor, be the ones held responsible for what others do with the code they write.

Moderation in Decentralized Social Media

One of the major issues brought to light by the rise of decentralized social media such as Bluesky and the fediverse this year has been the promise and complications of content moderation in a decentralized space. On centralized social media, content moderation can seem more straightforward: the moderation team has broad insight into the whole network, and, for the major platforms most people are used to, these centralized services have more resources to maintain a team of moderators. Decentralized social media has its own benefits when it comes to moderation, however. For example, a decentralized system means that individuals can “shop” for the moderation style that best suits their preferences. This community-level moderation may scale better than centralized models, as moderators have more context and personal investment in the space.

But decentralized moderation is certainly not a solved problem, which is why the Atlantic Council created the Task Force for a Trustworthy Future Web. The Task Force started out by compiling a comprehensive report on the state of trust and safety work in social media and the upcoming challenges in the space. They then conducted a series of public and private consultations focused on the challenges of content moderation in these new platforms. Experts from many related fields were invited to participate, including EFF, and we were excited to offer our thoughts and to hear from the other assembled groups. The Task Force is compiling a final report that will synthesize the feedback and which should be out early next year.

The past year has been a strong one for the decentralization movement. More and more people are realizing that the large centralized services are not all there is to the internet, and exploration of alternatives is happening at a level that we haven’t seen in at least a decade. New services, protocols, and governance models are also popping up all the time. Throughout the year we have tried to guide newcomers through the differences in decentralized services, inform public policies surrounding these technologies and tools, and help envision where the movement should grow next. We’re looking forward to continuing to do so in 2024.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

Sat, 12/30/2023 - 10:58am

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases, share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may go into effect later this year. 

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user, to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age, and ultimately their identity, and will lead to fewer people expressing themselves or seeking information online.

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

Arkansas passed a similar law to Utah's S.B. 152 in April, which requires users of social media to prove their age or obtain parental permission to create social media accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone's ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempt to regulate social media for young people. Our Year in Review post on similar federal legislation that was introduced this year covers that fight, which was successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s access to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Fighting European Threats to Encryption: 2023 Year in Review 

Sat, 12/30/2023 - 9:42am

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures. 

But we pushed back in the EU, and so far, we’ve succeeded. EFF spent this year fighting hard against an EU proposal (text) that, if it became law, would have been a disaster for online privacy in the EU and throughout the world. In the name of fighting online child abuse, the European Commission, the EU’s executive body, put forward a draft bill that would allow EU authorities to compel online services to scan user data and check it against law enforcement databases. The proposal would have pressured online services to abandon end-to-end encryption. The Commission even suggested using AI to rifle through people’s text messages, leading some opponents to call the proposal “chat control.”

EFF has been opposed to this proposal since it was unveiled last year. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. 

Our message broke through. In November, a key EU committee adopted a position that bars mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, which would have amounted to a mandate to show ID before you get online; age verification can erode a free and anonymous internet for both kids and adults. 

We’ll continue to monitor the EU proposal as attention shifts to the Council of the EU, the second decision-making body of the EU. Despite several Member States still supporting widespread surveillance of citizens, there are promising signs that such a measure won’t get majority support in the Council. 

Make no mistake—the hard-fought compromise in the European Parliament is a big victory for EFF and our supporters. The governments of the world should understand clearly: mass scanning of people’s messages is wrong, and at odds with human rights.

A Wrong Turn in the U.K.

EFF also opposed the U.K.’s Online Safety Bill (OSB), which passed and became the Online Safety Act (OSA) this October, after more than four years on the British legislative agenda. The stated goal of the OSB was to make the U.K. the world’s “safest place” to use the internet, but the bill’s more than 260 pages actually outline a variety of ways to undermine our privacy and speech. 

The OSA requires platforms to take action to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. The U.K. government said that scanning will check only for specific categories of content. In one of the final OSB debates, a representative of the government noted that orders to scan user files “can be issued only where technically feasible,” as determined by the U.K. communications regulator, Ofcom.

But as we’ve said many times, there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does.
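
Here is a minimal sketch of why, assuming the PyNaCl library (messaging apps use more elaborate protocols, such as the Signal protocol, but the basic property is the same): keys are generated and held only on the endpoints, so a server relaying the ciphertext has nothing it can meaningfully scan.

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_secret, bob_secret.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# A relaying server sees only this ciphertext; it cannot check the content
# against any database without the plaintext or a copy of the keys.
print(bytes(ciphertext).hex())

# Only Bob, holding his private key, can recover the message.
receiving_box = Box(bob_secret, alice_secret.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the usual place"
```

Any “government-approved software” that inspects content therefore has to run on the device before encryption or hold a copy of the keys, and whoever controls that access point, or compromises it, can read everything.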

Despite the opposition of EFF and the U.K. civil society groups we worked closely with, the bill passed in September with its anti-encryption measures intact. But the story doesn’t end here. The OSA remains vague about what exactly it requires of platforms and users alike. Ofcom must now take the OSA and, over the coming year, draft regulations to operationalize the legislation.

The public understands better than ever that government efforts to “scan it all” will always undermine encryption and prevent us from having a safe and secure internet. EFF will monitor Ofcom’s drafting of the regulations, and we will continue to hold the U.K. government accountable to the international and European human rights agreements it has signed.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

First, Let’s Talk About Consumer Privacy: 2023 Year in Review

Fri, 12/29/2023 - 2:53pm

Whatever online harm you want to alleviate on the internet today, you can address it better, and with broader impact, if you enact strong consumer data privacy legislation first. That is a grounding principle that has informed much of EFF’s consumer protection work in 2023.

While consumer privacy will not solve every problem, it is superior to many other proposals that attempt to address issues like child mental health or foreign government surveillance. That is true for two reasons: well-written consumer privacy laws address the root source of corporate surveillance, and they can withstand constitutional scrutiny.

EFF’s work on this issue includes: (1) advocating for strong comprehensive consumer data privacy laws; (2) fighting bad laws; (3) protecting existing sectoral privacy laws.

Advocating for Strong Comprehensive Consumer Data Privacy


This year, EFF released a report titled “Privacy First: A Better Way to Address Online Harms.” The report listed the key pillars of a strong privacy law (such as a ban on online behavioral ads and data minimization requirements) and showed how these principles can help address current issues (like protecting children’s mental health or reproductive health privacy).

We highlighted why data privacy legislation is a form of civil rights legislation and why adtech surveillance often feeds government surveillance.

And we made the case that well-written privacy laws can be constitutional when they regulate the commercial processing of personal data, when that data is private and not a matter of public concern, and when the law is tailored to the government’s interests in privacy, free expression, security, and guarding against discrimination.

Fighting Bad Laws Based in Censorship of Internet Users


We filed amicus briefs in lawsuits challenging laws in Arkansas and Texas that required internet users to submit to age verification before accessing certain online content. These challenges continue to make their way through the courts, but they have so far been successful. We plan to do the same in a case challenging California’s Age Appropriate Design Code, while cautioning the court not to cast doubt on important privacy principles.

We filed a similar amicus brief in a lawsuit challenging Montana’s TikTok ban, where a federal court recently ruled that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content.

Protecting Existing Sectoral Laws


EFF is also gearing up to file an amicus brief supporting the constitutionality of the federal law called the Video Privacy Protection Act, which limits how video providers can sell or share their users’ private viewing data with third-party companies or the government. While we think a comprehensive privacy law is best, we support strong existing sectoral laws that protect data like video watch history, biometrics, and broadband use records.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Fighting For Your Digital Rights Across the Country: Year in Review 2023

Fri, 12/29/2023 - 2:42pm

EFF works every year to improve policy in ways that protect your digital rights in states across the country. Thanks to the messages of hundreds of EFF members, we’ve spoken up for digital rights this year from Sacramento to Augusta.

Much of EFF's state legislative work has, historically, been in our home state of California—also often the most active state on digital civil liberties issues. This year, the Golden State passed several laws that strengthen consumer digital rights.

Two major laws we supported stand out in 2023. The first is S.B. 244, authored by California Sen. Susan Eggman, which makes it easier for individuals and independent repair shops to access the materials and parts needed to maintain electronics and appliances. That means Californians with a broken phone screen or a busted washing machine will have many more options for getting them fixed. Even though some electronics, such as video game consoles, are not included, the law still raises the bar for other right-to-repair bills.

S.B. 244 is one of the strongest right-to-repair laws in the country, doggedly championed by a group of advocates led by the California Public Interest Research Group, and we were proud to support it.

Another significant win comes with the signing of S.B. 362, also known as the CA Delete Act, authored by California Sen. Josh Becker. Privacy Rights Clearinghouse and Californians for Consumer Privacy led the fight on this bill, which builds on the state's landmark data privacy law and makes it easier for Californians to control their data through the state's data broker registry.

In addition to these wins, several other California bills we supported are now law. These include a measure that will broaden protections for immigration status data and one to facilitate better broadband access.

Health Privacy Is Data Privacy

States across the country continue to legislate at the intersection of digital privacy and reproductive rights. Both in California and beyond, EFF has worked with reproductive justice activists, medical practitioners, and other digital rights advocates to ensure that data from apps, electronic health records, law enforcement databases, and social media posts is not weaponized to prosecute people who seek reproductive or gender-affirming care, or those who help them.

While some states are directly targeting those who seek this type of health care, others are taking different approaches to strengthen protections. In California, EFF supported a bill that passed into law—A.B. 352, authored by CA Assemblymember Rebecca Bauer-Kahan—which extended the protections of California’s health care data privacy law to apps such as period trackers. Washington, meanwhile, passed the “My Health, My Data Act”—H.B. 1155, authored by WA Rep. Vandana Slatter—which, among other protections, prohibits the collection of health data without consent. While EFF did not take a position on H.B. 1155, we do applaud the law’s opt-in consent provisions and encourage other states to consider similar bills.

Consumer Privacy Bills Could Be Stronger

Since California passed the California Consumer Privacy Act in 2018, several states have passed their own versions of consumer privacy legislation. Unfortunately, many of these laws have been more consumer-hostile and business-friendly than EFF would like to see. In 2023, eight states—Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, and Texas—passed their own versions of broad consumer privacy bills.

EFF did not support any of these laws, many of which can trace their lineage to a weak Virginia law we opposed in 2021. Yet not all of them are equally bad.

For example, while EFF could not support the Oregon bill after a legislative deal stripped it of its private right of action, the law is a strong starting point for privacy legislation moving forward. It has its flaws, but it is unique among state privacy laws in requiring businesses to share the names of the actual third parties that have your information, rather than simply the categories of companies. So, instead of learning only that a “data broker” has your information and hitting a dead end in following your own data trail, you can know exactly where to file your next request. EFF participated in a years-long process to bring that bill together, and we thank the Oregon Attorney General’s office for its work to keep the law as strong as it is.

EFF also wants to give Montana plaudits for another bill: a strong genetic privacy law passed this year. It is a good starting point for other states and shows Montana is thinking critically about how to protect people from overbroad data collection and surveillance.

Of course, one post can’t capture all the work we did in the states this year. In particular, the curious should read our upcoming Year in Review post focused specifically on the children’s privacy, speech, and censorship bills introduced in the states in 2023. But EFF was able to move the ball forward on several issues, and we will continue to fight for your digital rights in statehouses from coast to coast.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

In the Trenches of Broadband Policy: 2023 Year In Review

Fri, 12/29/2023 - 2:31pm

EFF has long advocated for affordable, accessible, future-proof internet access for all. Nearly 80% of Americans already consider internet access to be as essential as water and electricity. As our work, health care, education, entertainment, and social lives increasingly move online, we cannot accept a future in which the quality of your internet access, and thus the quality of your connection to these crucial facets of life, is determined by geographic, socioeconomic, or other dividing lines.

Lawmakers recognized this during the pandemic and set in motion once-in-a-generation opportunities to build the future-proof fiber infrastructure needed to close the digital divide once and for all.

As we exit the pandemic, however, that dedication is wavering. Monopolistic internet service providers (ISPs), whose business models created the digital divide in the first place, are doing everything they can to maintain control over the broadband market, including stopping the construction of any infrastructure they do not control. Further, while some government agencies are continuing to make rules that advance equitable and competitive access to broadband, others are not. Regardless, EFF will continue to fight for the vision we’ve long advocated.

New York City Abandons Revolutionary Fiber Plan 

This year, New York City Mayor Eric Adams turned his back on the future of broadband accessibility for New Yorkers.

In 2020, then-Mayor Bill de Blasio unveiled New York City’s Internet Master Plan to deliver broadband to low-income New Yorkers by investing in public fiber infrastructure. Public fiber infrastructure would have been an investment in New York City’s future: a long-term solution to permanently bridge the digital divide and bring affordable, accessible, future-proof service to New Yorkers for generations to come. This kind of public infrastructure, especially if provisioned on an open and affordable basis, dramatically lowers barriers to entry, which in turn creates competition, lower prices, and better customer service in the market as a whole.

Mayor Eric Adams not only abandoned this plan, but subsequently introduced a three-year, $90 million subsidy program called Big Apple Connect. Instead of building physical infrastructure to bridge the digital divide for decades to come, New York City will now subsidize NYC’s oligopolist ISPs, Charter Spectrum and Altice, to continue doing business as usual. This does nothing to address the needs of underinvested communities whose legacy networks physically cannot handle a fast connection. All it does is put taxpayer dollars into corporate pockets instead of into infrastructure that actually serves the people.

The Adams administration even asked a cooperatively run, community-based ISP that had been part of the Internet Master Plan, and had already installed fiber infrastructure, to dismantle its network so the city could contract further with the big ISPs.

California Wavers On Its Commitments

New York City is not the only place where public commitment to bridging the digital divide has wavered.

In 2021, California invested nearly $7 billion to bring affordable fiber infrastructure to all Californians. As part of this process, California’s Department of Technology was meant to build 10,000 miles of middle-mile fiber infrastructure, the physical foundation through which community-level, last-mile connections would be built to serve underserved communities for decades to come.

Unfortunately, in August the Department of Technology not only reduced the number of miles to be built but also cut off entire communities that had traditionally been underserved. Despite fierce community pushback, the Department stuck to its revised plans and awarded contracts accordingly.

Governor Newsom has promised to restore the lost miles in 2024, a promise EFF and California community groups intend to hold him to, but the fact remains that the cuts should not have been made the way they were.

FCC Rules on Digital Discrimination and Rulemaking on Net Neutrality

On the federal level, the Federal Communications Commission finally received its fifth commissioner, Anna Gomez, in September of this year, allowing it to begin its rulemaking on net neutrality and to promulgate rules on digital discrimination. We submitted comments in the net neutrality proceeding, advocating for a return to light-touch, targeted, and enforceable net neutrality protections for the whole country.

On digital discrimination, EFF applauds the Commission for adopting both a disparate-treatment and a disparate-impact standard. Companies can now be found liable for digital discrimination not only when they intentionally treat communities differently, but also when the impact of their decisions, regardless of intent, affects a community differently. Further, for the first time the Commission recognized the link between historic redlining in housing and digital discrimination, drawing the connection between the historic underinvestment in lower-income communities of color and the continued underinvestment by monopolistic ISPs.

Next year will bring more fights around broadband implementation. The questions will be who gets funding, whether and where infrastructure gets built, and whether long-neglected communities will finally be heard and brought into the 21st century or left behind by public neglect or private greed. The path to affordable, accessible, future-proof internet for all will require the political will to invest in physical infrastructure and to hold incumbents to nondiscrimination rules that preserve speech and competition online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.
