EFF: Updates
Behind the Diner—Digital Rights Bytes: 2024 in Review
Although it feels a bit weird to be writing a year in review post for a site that hasn’t even been live for three months, I thought it would be fun to give a behind-the-scenes look at the work we did this year to build EFF’s newest site, Digital Rights Bytes.
Since each topic Digital Rights Bytes aims to tackle is in the form of a question, why not do this Q&A style?
Q: WHAT IS DIGITAL RIGHTS BYTES?
Great question! At its core, Digital Rights Bytes is a place where you can get honest answers to the questions that have been bugging you about technology.
The site was originally pitched as ‘EFF University’ (or EFFU, pun intended) to help folks who aren’t part of our core tech-savvy base get up-to-speed on technology issues that may be affecting their everyday lives. We really wanted Digital Rights Bytes to be a place where newbies could feel safe learning about internet freedom issues, get familiar with EFF’s work, and find out how to get involved, without feeling too intimidated.
Q: WHY DOES THE SITE LOOK SO DIFFERENT FROM OTHER EFF WORK?
Since our main goal was attracting new readers, it was crucial to brand Digital Rights Bytes differently from other EFF projects. We wanted Digital Rights Bytes to feel like a place where you and a friend might casually chat over milkshakes—while being served pancakes by a friendly robot. We took that concept and ran with it, going forward with a full diner theme for the site. I mean, imagine the counter banter you could have at the Digital Rights Bytes Diner!
Take a look at the Digital Rights Bytes counter!
As part of this concept, we thought it made sense for each topic to be framed as a question. Of course, at EFF, we get a ton of questions from supporters and other folks online about internet freedom issues, including from our own family and friends. We took some of the questions we see fairly often, then decided which would be the most important—and most interesting—to answer.
The diner concept is why the site has a bright neon logo, pink and cyan colors, and a neat vintage-looking background on desktop. Even the GIF that plays on the home screen of Digital Rights Bytes shows our animal characters chatting 'round the diner (more on them soon!).
Q: WHY DID YOU MAKE DIGITAL RIGHTS BYTES?
Here’s the thing: technology continues to expand, evolve, and change—and it’s tough to keep up! We’ve all been the tech noob, trying to figure out why our devices behave the way they do, and it can be pretty overwhelming.
So, we thought that we could help out with that! And what better way to help educate newcomers than explaining these tech issues in short byte-sized videos:
A clip from the device repair video.
It took some time to nail down the style for the videos on Digital Rights Bytes. But, after some trial and error, we landed on using animals as our lead characters: first, because they’re adorable, and second, because they helped emphasize the shadowy figures that are often trying to steal their data or make their tech worse for them. It’s often unclear who is trying to steal our data or rig tech against the user, so we thought this was fitting.
In addition to the videos, EFF issue experts wrote concise, easy-to-read pages further detailing each topic, with an emphasis on linking to other experts and including information on how you can get involved.
Q: HAS DIGITAL RIGHTS BYTES BEEN SUCCESSFUL?
You tell us! If you’re reading these Year In Review blog posts, you’re probably the designated “ask them every tech question in the world” person of your family. Why not send your family and friends over to Digital Rights Bytes and let us know if the site has been helpful to them!
We’re also looking to expand the site and answer more common questions you and I might hear. If you have suggestions, you should let us know here or on social media! Just use the hashtag #DigitalRightsBytes and we’ll be sure to consider it.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
NSA Surveillance and Section 702 of FISA: 2024 in Review
Mass surveillance authority Section 702 of FISA, which allows the government to collect international communications, many of which happen to have one side in the United States, has been renewed several times since its creation with the passage of the 2008 FISA Amendments Act. This law has been an incessant threat to privacy for over a decade because the FBI operates on the “finders keepers” rule of surveillance: because the NSA has “incidentally” collected the US side of these conversations, the FBI believes it is free to sift through them without a warrant.
But 2024 became the year this mass surveillance authority was not only reauthorized by a lion’s share of both Democrats and Republicans—it was also the year the law got worse.
After a tense fight, some temporary reauthorizations, and a looming expiration, Congress finally passed the Reforming Intelligence and Securing America Act (RISAA) in April 2024. RISAA not only reauthorized the mass surveillance capabilities of Section 702 without any of the necessary reforms that had been floated in previous bills, it also enhanced its powers by expanding what it can be used for and who must comply with the government’s requests for data.
While Section 702 was enacted under the guise of targeting people not on U.S. soil to assist with national security investigations, there are no such narrow limits on the use of communications acquired under the mass surveillance law. Following the passage of RISAA, this private information can now be used to vet immigrants and asylum seekers and to conduct intelligence for broadly construed “counter narcotics” purposes.
The bill also included an expanded definition of “Electronic Communications Service Provider,” or ECSP. Under Section 702, anyone who oversees the storage or transmission of electronic communications—be it emails, text messages, or other online data—must cooperate with the federal government’s requests to hand over data. Under the expanded definition of ECSP, there are intense and well-founded fears that anyone who hosts servers, websites, or provides internet to customers—or even just people who work in the same building as these providers—might be forced to become a tool of the surveillance state. As of December 2024, the fight is still on in Congress to clarify, narrow, and reform the definition of ECSP.
The one merciful change to come out of the 2024 smackdown over Section 702’s renewal was that the reauthorization lasts only two years. That means in spring 2026 we have to be ready to fight again to bring meaningful change, transparency, and restriction to Big Brother’s favorite law.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Global Age Verification Measures: 2024 in Review
EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.
Kids Experiencing Harm is Not Just an Online Phenomenon
In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face over $30 million in fines. This is similar to last year’s ban in France on social media access for children under 15 without parental consent, and Norway has pledged to follow with a similar ban.
No study shows such a harmful impact, and kids don’t need to fall into a wormhole of internet content to experience harm—there is a whole world outside the barriers of the internet that contributes to people’s experiences, and all evidence suggests that many young people experience positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and by seeing a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying in the offline world, as well as online.
The internet is a valuable resource for both young people and adults who rely on it to find community and find themselves. As we said about age verification measures in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues will all have to beg minors to leave and institute age verification tools to ensure that they do.
Limiting Access for Kids Limits Access for Everyone
Through this wave of age verification bills, governments around the world are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and groups that serve them the most. History shows that over-censorship is inevitable.
This year, Canada also introduced an age verification measure, bill S-210, which seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. The bill was introduced to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” But requiring people of all ages to show ID to get online won’t help women or young people. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. This creates a legal risk not just for those who sell or intentionally distribute sexually explicit material, but also for those who just transmit it, knowingly or not.
Without Comprehensive Privacy Protections, These Bills Exacerbate Data Surveillance
Under mandatory age verification requirements, users will have no way to be certain that the data they’re handing over is not going to be retained and used in unexpected ways, or even shared with unknown third parties. Millions of adult internet users would also be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.
Online age verification is not like flashing an ID card in person to buy particular physical items. In places that lack comprehensive data privacy legislation, the risk of surveillance is extensive. First, a person who submits identifying information online can never be sure if websites will keep that information, or how that information might be used or disclosed. Without requiring all parties who may have access to the data to delete that data, such as third-party intermediaries, data brokers, or advertisers, users are left highly vulnerable to data breaches and other security harms at companies responsible for storing or processing sensitive documents like drivers’ licenses.
Second, and unlike in-person age-gates, the most common way for websites to comply with a potential verification system would be to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information. In a brief to a U.S. court, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns. People shouldn't have to disclose to the government what websites they're looking at—which could reveal sexual preferences or other extremely private information—in order to get information from that website.
These proposals are coming to the U.S. as well. We analyzed various age verification methods in comments to the New York Attorney General. None of them are both accurate and privacy-protective.
The Scramble to Find an Effective Age Verification Method Shows There Isn't One
The European Commission is also currently working on guidelines for the implementation of the child safety article of the Digital Services Act (Article 28) and may come up with criteria for effective age verification. In parallel, the Commission has asked for proposals for a 'mini EU ID wallet' to implement device-level age verification ahead of the expected rollout of digital identities across the EU in 2026. At the same time, smaller social media companies and dating platforms have for years been arguing that age verification should take place at the device or app-store level, and they will likely support the Commission's plans. As we move into 2025, EFF will continue to follow these developments as it becomes clearer whether the Commission expects porn platforms to adopt age verification to comply with their risk mitigation obligations under the DSA.
Mandatory age verification is the wrong approach to protecting young people online. In 2025, EFF will continue urging politicians around the globe to acknowledge these shortcomings, and to explore less invasive approaches to protecting all people from online harms.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review
The phrase “move fast and break things” carries pretty negative connotations in these days of (Big) techlash. So it’s surprising that state and federal policymakers are doing just that with the latest big issue in tech and the public consciousness: generative AI, or more specifically its uses to generate deepfakes.
Creators of all kinds are expressing a lot of anxiety around the use of generative artificial intelligence, some of it justified. The anxiety, combined with some people’s understandable sense of frustration that their works were used to develop a technology that they fear could displace them, has led to multiple lawsuits.
But while the courts sort it out, legislators are responding to heavy pressure to do something. And it seems their highest priority is to give new or expanded rights to protect celebrity personas–living or dead–and the many people and corporations that profit from them.
The broadest “fix” would be a federal law, and we’ve seen several proposals this year. The two most prominent are NO AI FRAUD (in the House of Representatives) and NO FAKES (in the Senate). The first, introduced in January 2024, purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad range of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. It also characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs, because Section 230 immunity does not apply to federal IP claims. NO FAKES, introduced in April, is not significantly different.
There’s a host of problems with these bills, and you can read more about them here and here.
A core problem is that these bills are modeled on the broadest state laws recognizing a right of publicity. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. It’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games.
And states are taking swift action to further expand publicity rights. Take this year’s digital replica law in Tennessee, called the ELVIS Act because of course it is. Tennessee already gave celebrities (and their heirs) a property right in their name, photograph, or likeness. The new law extends that right to voices, expands the risk of liability to include anyone who distributes a likeness without permission, and limits some speech-protective exceptions.
Across the country, California couldn’t let Tennessee win the race for the most restrictive/protective rules for famous people (and their heirs). So it passed AB 1836, creating liability for anyone who uses a deceased personality’s name, voice, signature, photograph, or likeness, in any manner, without consent. There are a number of exceptions, which is better than nothing, but those exceptions are pretty confusing for people who don’t have lawyers to help sort them out.
These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction.
We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Electronic Frontier Alliance Fought and Taught Locally: 2024 in Review
The EFF-chaired Electronic Frontier Alliance (EFA) has had a big year! EFA is a loose network of local groups fighting for digital rights in the United States. With an ever-increasing roster of allies across the country, including significant growth on university campuses, EFA has also undergone a bit of a facelift. With the new branding comes more resources to support local organizing and popular education efforts around the country.
If you’re a member of a local or state group in the United States that believes in digital rights, please learn more at our FAQ page. EFA groups include hackers, advocates, security educators, tech professionals, activists young and old, and beyond. If you think your group would be a good fit, please fill out an application here. The Alliance has scores of members, all of which did great work this year. This review highlights just a few.
A new look for EFA
This past July, the organizing team completed the much-needed EFA rebrand project with a brand new website. Thanks to the work of EFF’s Engineering and Design team, organizers now have up-to-date merch, pamphlets, and useful organizer toolkits for alliance members with a range of levels of organizing experience. Whether your group wants to lead advocacy letter campaigns, needs basic press strategies, or is organizing on college campuses, we have resources to help. We also updated our allies directory to better showcase our local members and make it easier for activists to find groups and get involved. We also put together a Bluesky starter kit to make it easy to follow members onto the new platform. This is a major milestone in our effort to build useful resources for the network, which we will continue to maintain and expand in the years ahead.
More local groups heeded the call:
The alliance continued to grow, especially on college campuses, creating opportunities for some fun cross-campus collaborations in the year ahead. This year, nine local groups across eight states joined up:
- Stop Surveillance City, Seattle, WA: Stop Surveillance City is fighting against mass surveillance, criminalization and incarceration, and further divestment from people-centered policies. They advocate for investment in their communities and want to increase programs that address the root causes of violence.
- Cyber Security Club @FSU, Tallahassee, FL: The Cyber Security Club is a student group sharing resources to get students started in cybersecurity and competing in digital Capture the Flag (CTF) competitions.
- UF Student Infosec Team (UFSIT), Gainesville, FL: UFSIT is the cybersecurity club at the University of Florida. They are student-led and passionate about all things cybersecurity, and their goal is to provide a welcoming environment for students to learn more about all areas of information security, including penetration testing, reverse engineering, vulnerability research, digital forensics, and more.
- NICC, Newark, NJ: NICC is the New Jersey Institute of Technology’s official information & cybersecurity club. As a student-run organization, NICC started as a way to give NJIT students interested in cybersecurity, ethical hacking, and CTFs a group that would help them learn, grow, and hone their skills.
- DC919, Raleigh, NC: DEF CON Group 919 is a community group in the Raleigh/Durham area of North Carolina, providing a gathering place for hacking discussions, conference support, and workshop testing.
- Community Broadband PDX, Portland, OR: Their mission is to guide Portlanders to create a new option for fast internet access: publicly-owned and transparently operated, affordable, secure, fast, and reliable broadband infrastructure that is always available to every neighborhood and community.
- DC215, Philadelphia, PA: DC215 is another DEF CON group advancing knowledge and education with those interested in science, technology, and other areas of information security through project collaborations, group gatherings, and group activities to serve their city.
- Open Austin, Austin, TX: Open Austin's mission is to end disparities in Austin in access to technology. It envisions a city that respects and serves all people, by facilitating issues-based conversations between government and city residents, providing service to local community groups that leverage local expertise, and advocating for policy that utilizes technology to improve the community.
- Encode Justice Georgia: Encode Justice GA is the third Encode Justice chapter to join EFA, mostly made up of high school students learning the tools of organizing by focusing on issues like machine-learning algorithms and law enforcement surveillance.
This year, we talked to the North Carolina chapter of Encode Justice, a network that includes over 1,000 high school and college students across over 40 U.S. states and 30 countries. A youth-led movement for safe, equitable AI, Encode Justice has a mission of mobilizing communities for AI policies that are aligned with human values. The NC chapter has led educational workshops, policy memos, and legislative campaigns at both the state and city council level, while lobbying officials and building coalitions with other North Carolinians.
Local groups continued to take on fights to defend constitutional protections against police surveillance overreach around the country. We caught up with our friends at the Surveillance Technology Oversight Project (S.T.O.P.) in New York, which litigates and advocates for privacy, working to push back against local government mass surveillance. STOP worked to pass the Public Oversight of Surveillance Technology Act in the New York City Council and used the law to uncover previously unknown NYPD surveillance contracts. This year they made significant strides in their campaign to ‘Ban the Scan’ (face recognition) in both the state assembly and the city council.
Another heavy hitter in the alliance, Lucy Parsons Labs, took the private-sector Atlanta Police Foundation to court to seek transparency over its functions on behalf of law enforcement agencies, arguing that those functions should be open to the same public records requests as the government agencies they serve.
Defending constitutional rights against encroachments by police agencies is an uphill battle, and our allies in San Diego’s TRUST Coalition were among those fighting to protect Community Control Over Police Surveillance requirements previously passed by their city council.
We checked in with CCTV Cambridge on their efforts to address digital equity with their Digital Navigator program, and highlighted them for Digital Inclusion Week 2024. CCTV Cambridge works across all demographics in their city. For example, they implemented a Youth Media Program where teens get paid while developing skills to become professional digital media artists. They also have a Foundational Technology program for the elderly and others who struggle with the increasing demands of technology in their lives.
This has been a productive year organizing for digital rights in the Pacific Northwest. We were able to catch up with several allies in Portland, Oregon, at the PDX People’s Digital Safety Fair on the campaign to bring high-speed broadband to their city, which is led by Community Broadband PDX and the Personal TelCo Project. With six active EFA members in Portland and three in neighboring Washington state, we were excited to watch the growing momentum for digital rights in the region.
Citizens Privacy Coalition crowdfunded a documentary on the impacts of surveillance in the Bay Area, called "Watch the Watchers." The film features EFF's Eva Galperin and addresses how to combat surveillance across policy, technological guardrails, and individual action.
Allies brought knowledge to their neighbors:
The Electronic Frontiers track at the sci-fi, fantasy, and comic book-oriented Dragon*Con in Atlanta celebrated its 25th anniversary, produced in coordination with EFA member Electronic Frontiers Georgia. The digital rights component of Dragon*Con had its largest number of panels yet, covering a wide variety of digital rights issues, from vehicle surveillance to clampdowns on First Amendment-protected activities. Members of EF-Georgia, EFF, and allied organizations presented on a variety of topics, including:
- Legal exposure from running a Mastodon Server or Instance
- Social Media Bans for Kids
- Age Verification Laws Shut Down Adult Websites in 8 States
- Steamboat Willie and the Public Domain
- Private Police Foundations
- Georgia and Constitutionally Protected Free Speech
- Passkeys
- Hacking 201
- Defending against Cell Site Simulators
- Courts Revoke FTC Ban on Non-Compete Clauses
- Civil Asset Forfeiture Overhaul
- Automated Speed Enforcement in Georgia
- Insurance rates and privacy in the era of drones and data
- The Struggle to Opt Out of Automobile Telematics
- Internet Shutdowns and Other Countries
More of this year’s Dragon*Con panels can be found at EF-Georgia’s special Dragon*Con playlist.
EFF-Austin led the alliance in recorded in-person events, with monthly expert talks in Texas and meet-ups for people in their city interested in privacy, security, and skills in tech. They worked with new EFA member Open Austin to talk about how Austinites can get involved through civic-minded technology efforts. Other discussions included:
- Data Cooperatives
- Intro to Quantum Computing
- Your car is spying on you
- Passkeys 101
- Generative AI
- Generative AI and Copyright
- Environmental Entropy
- Facial Recognition in the Moscow Smart City
- Mexican Surveillance at the Border
In complicated times, the EFA team is committed to building bridges for local groups and activists, because we build power when we work together, whether to fight for our digital rights, or to educate our neighbors on the issues and the technologies they face. In the coming year, a lot of EFA members will be focused on defending communities that are under attack, spreading awareness about the role data and digital communications play in broader struggles, and cultivating skills and knowledge among their neighbors.
To learn more about how the EFA works, please check out our FAQ page, and apply to join us.
Past EFA member profiles:
- Digital Fourth
- Restore the 4th Minnesota
- PDX Privacy interview
- Restore the Fourth MN interview
- Personal Telco Project interview
- CyPurr Collective interview
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
The Growing Intersection of Reproductive Rights and Digital Rights: 2024 in Review
Dear reader of our blog, surely by now you know the format: as we approach the end of the year, we look back on our work, count our wins, learn from our misses, and lay the groundwork for a better future. It's been an intense year in the fight for reproductive rights and its intersections with digital civil liberties. Going after cops illegally sharing location data, fighting the data broker industry, and building coalitions with the broader movement for reproductive justice—we've stayed busy.
The Fight Against Warrantless Access to Real-Time Location Tracking
The location data market is an unregulated nightmare industry that poses an existential threat to everyone's privacy, but especially to those embroiled in the fight for reproductive rights. In a recent blog post, we wrote about the particular dangers posed by LocateX, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphones. Cops shouldn't be able to buy their way around having to get a warrant for real-time location tracking of anyone they please, regardless of the context. In regressive states that ban abortion, however, the problems with LocateX illustrate just how severe the issue can be for such a large population of people.
Building Coalition Within Digital Civil Liberties and Reproductive Justice
Part of our work in this movement is recognizing our lane: providing digital security tips, promoting the rights to privacy and free expression, and making connections with community leaders to support and elevate their work. This year we hosted a livestream panel featuring various next-level thinkers and reproductive justice movement leaders. Make sure to watch it if you missed it! Recognizing and highlighting our shared struggles, interests, and avenues for liberation is exactly how movements are fought for and won.
The Struggle to Stop Cops from Illegally Sharing ALPR Data
It's been a multi-year battle to stop law enforcement agencies from illegally sharing out-of-state ALPR (automatic license plate reader) data. Thankfully, this year we were able to celebrate a win: a grand jury in Sacramento moved to investigate two police agencies that have been illegally sharing this type of data. We're glad to declare victory, but those two agencies are far from the only problem. We hope this sets a precedent that cops aren't above the law, and we will continue to fight for just that. This win will help us build momentum to continue this fight into the coming year.
Sharing What We Know About Digital Surveillance Risks
We'll be the first to tell you that expertise in digital surveillance threats always begins with listening. We've learned a lot in the few years we've been researching the privacy and security risks facing this issue space, much of it gathered from conversations and trainings with on-the-ground movement workers. We gathered what we've learned from that work and distilled it into an accessible format for anyone who needs it. Behind the scenes, this research continues to inform the hands-on digital security trainings we provide to activists and movement workers.
As we proceed into an uncertain future where abortion access will continue to be a difficult struggle, we'll continue to do what we do best: standing vigilant for people's right to privacy, fighting bad internet laws, protecting free speech online, and building coalitions with others. Thank you for your support.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
You Can Be a Part of this Grassroots Movement 🧑💻
You ever hear the saying, "it takes a village"? I never really understood the saying until I started going to conferences, attending protests, and working on EFF's membership team.
You see, EFF's mission thrives because we are powered by people like you. Just as we fight for your privacy and free expression through grassroots advocacy, we rely on grassroots support to make all of our work possible. Will you join our movement with a small monthly donation today?
Your Support Makes the Difference
Powered by your support, EFF never needs to appease outside interests. With over 75% of our funding coming from individual donations, your contribution ensures that we can keep fighting for your rights online—no matter what. So, whether you give $5/month or include us in your planned giving, you're ensuring our team can stand up for you.
Donate by December 31 to help unlock bonus grants! Your automatic monthly or annual donation—of any amount—will count towards our Year-End Challenge and ensures you're helping us win this challenge in future years, too.
🌟 BE A PART OF THE CHANGE 🌟
When you become a Sustaining Donor with a monthly or annual gift, you are not only supporting EFF's expert lawyers and activists, but you also get to choose free EFF member gear each year as a thank you from us. For just $10/month you can even choose our newest member t-shirt: Fix Copyright.
Show your support for privacy and free speech with EFF member gear.
We'd love to have you beside us for every legislative victory we share, every spotlight we shine on bad actors, and every tool we develop to keep you safe online.
Start a monthly donation with EFF today and keep the digital freedom movement strong!
Join The Digital Rights Movement
Unlock Bonus Grants Before 2025
________________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.
Surveillance Self-Defense: 2024 in Review
This year, we celebrated the 15th anniversary of our Surveillance Self-Defense (SSD) guide. How’d we celebrate? We kept at it—continuing to work on, refine, and update one of the longest-running security and privacy guides on the internet.
Technology changes quickly enough as it is, but so does the language we use to describe that technology. In order for SSD to thrive, it needs careful attention throughout the year. So, we like to think of SSD as a garden, always in need of a little watering, maybe some trimming, and the occasional mowing down of dead technologies.
Brushing Up on the Basics
A large chunk of SSD exists to explain concepts around digital security in the hopes that you can take that knowledge to make your own decisions about your specific needs. As we often say, security is a mindset, not a purchase. But in order to foster that mindset, you need some basic knowledge. This year, we set out to refine some of this guidance in the hopes of making it easier to read and useful for a variety of skill levels. The guides we updated included:
- Choosing Your Tools
- Communicating with Others
- Keeping Your Data Safe
- Seven Steps to Digital Security
- Why Metadata Matters
- What Is Fingerprinting?
- How do I Protect Myself Against Malware?
If you’re looking for something a bit longer, then some of our more complicated guides are practically novels. This year, we updated a few of these.
We went through our Privacy Breakdown of Mobile Phones and updated it with more recent examples when applicable, and included additional tips at the end of some sections for actionable steps you can take. Phones continue to be one of the most privacy-invasive devices we own, and getting a handle on what they’re capable of is the first step to figuring out what risks you may face.
Our Attending a Protest guide is something we revisit every year (sometimes a couple times a year) to ensure it’s as accurate as possible. This year was no different, and while there were no sweeping changes, we did update the included PDF guide and added screenshots where applicable.
We also slightly reworked our How to: Understand and Circumvent Network Censorship guide to frame it more as instructional guidance, and included new features and tools for getting around censorship, like utilizing a proxy in messaging tools.
New Guides
We saw two additions to the SSD this year. First up was How to: Detect Bluetooth Trackers, our guide to locating unwanted Bluetooth trackers—like Apple AirTags or Tile—that someone may use to track your location. Both Android and iOS have made changes to detecting these sorts of trackers, but the wide array of different products on the market means it doesn’t always work as expected.
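To give a feel for the core idea behind that kind of detection, here's a rough sketch of tracker-hunting as repeated scanning, written with the third-party Python library bleak. This is just an illustration under our own assumptions (the scan count, interval, and threshold are arbitrary), not how Android, iOS, or any tracker network actually does it:

```python
# Illustrative sketch only: flag Bluetooth devices that keep reappearing
# across several scans taken over time (say, as you move between places).
# Requires the third-party bleak library (pip install bleak) and BLE hardware.
import asyncio
from collections import Counter

from bleak import BleakScanner

SCANS = 5        # arbitrary: how many scans to take
INTERVAL = 60    # arbitrary: seconds to wait between scans
THRESHOLD = 4    # arbitrary: reappearing this often is suspicious

async def main() -> None:
    sightings = Counter()
    for i in range(SCANS):
        # discover() returns the BLE devices advertising nearby right now
        for device in await BleakScanner.discover(timeout=10.0):
            sightings[device.address] += 1
        if i < SCANS - 1:
            await asyncio.sleep(INTERVAL)

    for address, count in sightings.most_common():
        if count >= THRESHOLD:
            print(f"{address} showed up in {count}/{SCANS} scans: worth a closer look")

asyncio.run(main())
```

Real trackers rotate their Bluetooth addresses regularly, precisely to frustrate this kind of naive matching, which is part of why even built-in detection doesn't always work as expected.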
We also put together a guide for the iPhone’s Lockdown Mode. While not a feature that everyone needs to consider, it has proven helpful in some cases, and knowing what those circumstances are is an important step in deciding if it’s a feature you need to enable.
But How do I?
As the name suggests, our Tool Guides are all about learning how to best protect what you do on your devices. This might be setting up two-factor authentication, turning on encryption on your laptop, or setting up something like Apple’s Advanced Data Protection. These guides tend to need a yearly look to ensure they’re up-to-date. For example, Signal saw the launch of usernames, so we went in and made sure that was added to the guide. Here’s what we updated this year:
- How to: Avoid Phishing Attacks
- How to: Enable Two-factor Authentication
- How to: Encrypt Your Computer
- How to: Encrypt Your iPhone
- How to: Use Signal
Surveillance Self-Defense isn’t just a website, it’s also a general approach to privacy and security. To that end, we often use our blog to tackle more specific questions or respond to news.
This year, we talked about the risks you might face using your state’s digital driver’s license, and whether or not the promise of future convenience is worth the risks of today.
We dove into a VPN attack method called TunnelVision, which showed how it was possible for someone on a local network to intercept some VPN traffic. We’ve reiterated our advice here that VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.
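The mechanics are worth a moment: a rogue DHCP server on the local network can push routes (via DHCP option 121) that are more specific than the VPN's catch-all route, and routing tables always prefer the most specific match, so matching traffic silently exits outside the tunnel. Here's a minimal sketch of that longest-prefix-match rule using only Python's standard ipaddress module (the interface names are made up for illustration):

```python
# Minimal sketch of the longest-prefix-match rule that TunnelVision abuses.
# A VPN claims all traffic with a default route (0.0.0.0/0); an attacker's
# DHCP server pushes two /1 routes that together cover the same address
# space but are one bit more specific, so they win and bypass the tunnel.
from ipaddress import ip_address, ip_network

routes = {
    ip_network("0.0.0.0/0"): "vpn0",    # the VPN's catch-all route
    ip_network("0.0.0.0/1"): "eth0",    # rogue DHCP option 121 routes that
    ip_network("128.0.0.0/1"): "eth0",  # cover all of IPv4, more specifically
}

def pick_interface(destination: str) -> str:
    dest = ip_address(destination)
    # As in a real routing table, the most specific matching prefix wins.
    best = max((net for net in routes if dest in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(pick_interface("93.184.216.34"))  # prints "eth0": the VPN never sees it
```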
Location data privacy is still a major issue this year, with horrific abuses of this data, both actual and potential, popping up in the news constantly. We showed how and why you should disable location sharing in apps that don’t need access to function.
As mentioned above, our SSD guide on protesting is a perennial, always in need of pruning, but sometimes you need to plant a whole new flower, as was the case when we decided to write up tips for protesters on campuses around the United States.
Every year, we fight for more privacy and security, but until we get it, stronger control over our data and a better understanding of how technology works are our best defense.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review
For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission has introduced many regulations covering all sectors, but Big Tech has been the center of its focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act (DSA) have spread far beyond Europe.
The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for their role in disseminating illegal content.
For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks” (those arising from their platforms’ underlying design and rules, as well as from how these services are used by the public). Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns, while simultaneously upholding fundamental rights such as freedom of expression, and while also considering impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring this process remains unclear.
As you can see, the DSA is trying to walk a fine line between addressing safety concerns and the priorities of the market. The DSA imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without constraining the platforms’ operations so much that they can’t innovate and thrive.
The DMA, on the other hand, concerns itself entirely with the macro level – not on the rights of users, but on the obligations of, and restrictions on, the largest, most dominant platforms.
The DMA concerns itself with a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and “fairness” for digital businesses.
Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem.
As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?
Fairness Regulation: Ambition and High-Stakes Clashes
There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare come true.
Predictably, the DMA was inaugurated with a no-holds-barred dirty fight between the biggest US tech giants and European enforcers.
Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze and abuse your personal information, without your consent or even your knowledge. In 2016, the EU passed its landmark privacy law, called the General Data Protection Regulation. The GDPR was clearly intended to halt Facebook’s romp through the most sensitive personal information of every European.
In response, Facebook simply pretended the GDPR didn’t say what it clearly said, and went on merrily collecting Europeans’ information without their consent. Facebook’s defense for this is that they were contractually obliged to collect this information, because their terms and conditions represented a promise to users to show them surveillance ads, and if they didn’t gather all that information, they’d be breaking that promise.
The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services—Facebook, Instagram, Threads, and its “metaverse” (snicker)—are no longer allowed to plunder your private information. They must get your consent.
In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his executives are clearly expecting that they can run the same playbook again.
Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.
For all its intransigence, Apple is getting off extremely light. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage – a dominant platform that the company CEO openly boasts about as a source of lock-in – was not a “gatekeeper platform.”
Platform regulation: A delicate balance
For regulators and the public, the growing power of online platforms has sparked concerns: how can we address harmful content while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t in the line of fire?
EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.
In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content, structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance!
But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead to enforcement overreach and censorship.
In 2024, our fears were realized when the DSA’s ambiguity as to how systemic risks should be mitigated created a new, politicized enforcement problem. Then-Commissioner Thierry Breton sent a letter to Twitter, saying that under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK, and about an upcoming meeting between Donald Trump and Elon Musk. This letter sparked widespread concern that the DSA was a tool to allow bureaucrats to decide which political speech could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on individual pieces of content, and then blurred the DSA’s critical line between “illegal” and “harmful.” Breton’s letter also ignored the territorial limits of the DSA, demanding content takedowns that reached outside the EU.
Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE 19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommended that the guidelines prioritize best practices instead of policing speech. Additionally, we recommended that DSA risk assessment and mitigation compliance evaluations prioritize ensuring respect for fundamental rights.
The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by millions of EU users. But contrary to concerns raised by EFF and other civil society groups, a new law in the EU, the EU Media Freedom Act, enforces a 24-hour content moderation exemption for media, effectively making platforms host content by force. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Celebrating Digital Freedom with EFF Supporters: 2024 in Review
“EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.” It can be a tough job. A lot of our time is spent fighting bad things that are happening in the world or fixing things that have been broken for a long time.
But this work is important, and we've accomplished great things this year! Thanks to your help, we pushed the USPTO to withdraw harmful patent review proposals, fought for the public's right to access police drone footage, and continue to see more and more of the web encrypted thanks to Certbot and Let’s Encrypt.
Of course, the biggest reason EFF is able to fight for privacy and free expression online is support from EFF members. Public support is not only the reason we can operate but is also a great motivator to wake up and advocate for what’s right—especially when we get to hang out with some really cool folks! And with that, I’d like to reminisce.
EFF's Bay Area Festivities
Early in the year we held our annual Spring Members’ Speakeasy. We invited supporters in the Bay Area to join us at Babylon Burning, where all of EFF’s t-shirts, hoodies, and much of our swag are made. There, folks got a fun opportunity to hand-print their own tote bags! It was also a treat to see t-shirts that even I had never seen before. Side note: EFF has a lot of mechas on members’ t-shirts.
Vintage EFF t-shirts hung across the walls at Babylon Burning.
The EFF team had a great time with EFF supporters at events throughout the year. Of course, my mind was blown seeing the questions EFF gamemasters (including the Cybertiger) came up with for both Tech Trivia and Cyberlaw Trivia. What was even more impressive was seeing how many answers teams got right at both events. During Cyberlaw Trivia, one team was able to recite 22 digits of pi, winning the tiebreaker question and the coveted first place prize!
Beating the Heat in Las Vegas
EFF staff with the Uber Contributor Award.
Next came one of my favorite summer pastimes: beating the heat in Las Vegas, where we get to see thousands of EFF supporters during the summer security conferences—BSidesLV, Black Hat, and DEF CON. This year, over one thousand people signed up to support the digital freedom movement in just that one week. The support EFF receives during the summer security conferences always amazes me, and it’s a joy to say hi to everyone who stops by to see us. We received an award from DEF CON and even speed-ran a legal case, ensuring a security researcher’s ability to give their talk at the conference.
While the lawyers were handling the legal case at DEF CON, a subgroup of us had a blast participating in the EFF Benefit Poker Tournament. Forty-six supporters and friends played for money, glory, and the future of the web—all while using these new EFF playing cards! In the end, only one winner could beat the celebrity guests, including Cory Doctorow and Deviant (even winning the literal shirt off of Deviant's back).
EFFecting Change
This year we also launched a new livestream series: EFFecting Change. With our initial three events, we covered recent Supreme Court cases and how they affect the internet, keeping yourself safe when seeking reproductive care, and how to protest with privacy in mind. We’ve seen a lot of support for these events and are excited to continue them next year. Oh, and no worries if you missed one—they’re all recorded here!
Congrats to Our 2024 EFF Award Winners
We wanted to end the year in style, of course, with our annual EFF Awards. This year we gave awards to 404 Media, Carolina Botero, and Connecting Humanity—and you can watch the keynote if you missed it. We’re grateful to honor and lift up the important work of these award winners.
EFF staff and EFF Award Winners holding their trophies.
And It's All Thanks to You
There was so much more to this year too. We shared campfire tales from digital freedom legends, the Encryptids; poked fun at bogus copyright law with our latest membership t-shirt; and hosted even more events throughout the country.
As 2025 approaches, it’s important to reflect on all the good work that we’ve done together in the past year. Yes, there’s a lot going on in the world, and times may be challenging, but with support from people like you, EFF is ready to keep up the fight—no matter what.
Many thanks to all of the EFF members who joined forces with us this year! If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Fighting For Progress On Patents: 2024 in Review
The rights we have in the offline world–to speak freely, create culture, play games, build new things and do business–must be available to us online, as well. This core belief drives EFF’s work to fight the misuse of the patent system.
Despite significant progress we’ve made over the last decade, patents, and in particular vague software patents, remain a serious threat to online rights. The median patent lawsuit isn't filed by what Americans would recognize as an ‘inventor,’ but by an anonymous limited liability company that provides no products or services, and instead uses patents to threaten others over alleged infringement. In other words, a patent troll. In the tech sector, more than 85% of patent lawsuits are filed by these “non-practicing entities.”
That’s why at EFF, we continue to help individuals and organizations fight patent threats related to everyday activities like using CAPTCHAs and picture menus, tracking packages or vehicles, teaching languages, holding online contests, or playing simple games online.
Here’s where the fight stands as we move into 2025.
Defending the Public’s Right To Challenge Bad Patents
In 2012, recognizing the persistent problem of an overburdened patent office issuing countless dubious patents each year, Congress established a system called “inter partes reviews” (IPRs) to review and challenge patents. While far from perfect, IPRs have led to the cancellation of thousands of patents that should never have been granted in the first place.
It’s no surprise that big patent owners and patent trolls have long sought to dismantle the IPR system. After unsuccessful attempts to persuade federal courts to dismantle IPRs, they shifted tactics in the past 18 months, attempting to convince the U.S. Patent and Trademark Office (USPTO) to undermine the IPR system by changing the rules on who can use it.
EFF opposed these proposed changes, urging our supporters to file public comments. This effort was a resounding success. After reviewing thousands of comments, including nearly 1,000 inspired by EFF’s call to action, the USPTO withdrew its proposal.
Stopping Congress From Re-Opening The Door To The Worst Patents
The patent system, particularly in the realm of software, is broken. For more than 20 years, the U.S. Patent Office has issued patents on basic cultural or business practices, often with little more than the addition of computer jargon or trivial technical elements.
The Supreme Court addressed this issue a decade ago with its landmark decision in a case called Alice v. CLS Bank, ruling that simply adding computer language to these otherwise generic patents isn’t enough to make them valid. However, Alice hasn’t fully protected us from patent trolls. Even with this decision, the cost of challenging a patent can run into hundreds of thousands of dollars, enabling patent trolls to make “nuisance” demands for amounts of $100,000 or less. But Alice has dampened the severity and frequency of patent troll claims, and allowed for many more businesses to fight back when needed.
So we weren’t surprised when some large patent owners tried again this year to overturn Alice, with the introduction of the Patent Eligibility Restoration Act (PERA), which would bring the worst patents back into the system. PERA would also have overturned the Supreme Court ruling that prevents the patenting of human genes. EFF opposed PERA at every stage, and late this year, its supporters abandoned their efforts to pass it through the 118th Congress. We know they will try again next year–we’ll be ready.
Shining Light On Secrecy In Patent Litigation
Litigation in the U.S. is supposed to be transparent, particularly in patent cases involving technologies that impact millions of internet users daily. Unfortunately, this is not always the case. In Entropic Communications LLC v. Charter Communications, filed in the U.S. District Court for the Eastern District of Texas, overbroad sealing of documents has obscured the case from public view. EFF intervened in the case to protect the public’s right to access federal court records, as the claims made by Entropic could have wide-reaching implications for anyone using cable modems to connect to the internet.
Our work to ensure transparency in patent disputes is ongoing. In 2016, EFF intervened in another excessively sealed patent case in the Eastern District of Texas. In 2022, we did the same in California, securing an important transparency ruling. That same year, we supported a judge’s investigation into patent owners in Delaware, which ultimately resulted in referrals for criminal investigation. The judge’s actions were upheld on appeal this year.
It remains far too easy for patent trolls to extort and exploit individuals and companies simply for creating or using software. In 2025, EFF will continue fighting for a patent system that’s open, fair, and transparent.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
We Stood Up for Access to the Law and Congress Listened: 2024 in Review
Ever since they lost in court, a number of industry giants have been pushing a bill that purports to be about increasing access to the law. In fact, it would give them enormous power over the public’s ability to access, share, teach, and comment on the law.
This sounds crazy—no one should be able to own the law. But these industry associations claim there’s a glaring exception to the rule: safety and building codes. The key distinction, they insist, is how these particular laws are developed. Often, when it comes to creating the best practices for an industry, a group of experts comes together to draft model standards. Many of those standards are then “incorporated by reference” into law, making them legal mandates just as surely as the U.S. tax code.
But unlike most U.S. laws, the industry associations that convene the experts claim to own a copyright in the results, which means they get to control, and charge for, access to them.
The consequences aren’t hard to imagine. If you are a journalist trying to figure out whether a collapsed bridge violated legal safety standards, you have to get the standards from the industry association and pay for them. If you are a renter who wants to know whether your apartment complies with the fire code, you face the same barrier. And so on.
Many organizations are working to remedy the situation, making standards available online for free (or, in some cases, for free but with a “premium” version that offers additional services on top). Courts around the country have affirmed their right to do so.
Which brings us to the “Protecting and Enhancing Public Access to Codes Act,” or “Pro Codes.” The Act would require industry associations to make standards incorporated by reference into law available to the public for free. But here’s the kicker: in exchange, Congress would affirm that the associations hold a legitimate copyright in those laws.
This is a bad deal for the public. First, access would mean read-only, and subject to licensing limits. We already know what that looks like: the associations that currently make their codes available to the public online do so through clunky, disorganized, siloed websites, largely inaccessible to the print-disabled, and subject to onerous contractual terms (like a requirement to give up your personal information). The public can’t copy, print, or even link to specific portions of the codes. In other words, you can look at the law (as long as you aren’t print-disabled and you know exactly what to look for), but you can’t share it, compare it, or comment on it. That’s fundamentally against the public interest, as many have said. It gives private parties a windfall to do badly what others, like EFF client Public.Resource.Org, already do better and for free.
Second, it’s solving a nonexistent problem. The many volunteers who develop these codes neither need nor want a copyright incentive. The industry associations don’t need it either—they make plenty of profit through trainings, membership fees, and selling standards that haven’t been incorporated into law.
Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share, and discuss the law.
We’re pleased that members of Congress recognized the many problems with this bill. Many of you wrote to your representatives to raise concerns, and when the bill was brought to a vote in committee, members registered those concerns. While it passed out of the House Judiciary Committee, the House of Representatives was asked to vote on the bill “on suspension,” meaning it could avoid debate and pass if two-thirds of the House voted yes. In theory, the suspension process is meant to make it easier to pass uncontroversial bills.
Because you wrote in, because experts sent letters explaining the problems, enough members of Congress recognized that Pro Codes is not uncontroversial. It is not a small deal to allow industry giants to own parts of the law.
This year, we are glad that so many people lent their time and energy to understanding the wolf in sheep’s clothing that the Pro Codes Act really was. And we hope that standards development organizations (SDOs) take note that they cannot pull the wool over everyone’s eyes. Not while we’re keeping watch.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Related Cases: Freeing the Law with Public.Resource.Org
Police Surveillance in San Francisco: 2024 Year in Review
From a historic ban on police use of face recognition, to landmark CCOPS (Community Control Over Police Surveillance) legislation, to the first ban in the United States on police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.
Unfortunately, 2024 was a far cry from those victories.
While EFF continues to fight for common-sense police reforms in our own backyard, this year city politics took a darker, less accountable turn than we’ve seen in a while.
In the spring of this year, we opposed Proposition E, a ballot measure that allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and even went so far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.
Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In its haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin buying more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents are left to wave their arms and implore the police to follow the law. But we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.
EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue–even if that means calling attention to the SFPD’s casual lawbreaking or helping to defend the privacy laws that made this city a shining example of 21st century governance.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 Year in Review
EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance Hub, an updated and expanded version of which was released at the beginning of 2024.
Removing Amazon Ring
We started off with a big change: the removal of our set of Amazon Ring relationships with local police. In January, Amazon announced that it would no longer facilitate warrantless requests for doorbell camera footage through the company’s Neighbors app — a move EFF and other organizations had long called for. Though police can still get access to Ring camera footage by getting a warrant, or through other legal means, we decided that tracking Ring relationships in the Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly through access facilitated by Fusus and other platforms.
Adding third-party platforms
In 2024, we added an important and growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools. Common examples include LexisNexis Accurint and Thomson Reuters Clear.
New Fusus data
404 Media released a report last January on the use of Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.
New and updated ALPR data
EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests with departments around the state as part of our Data Driven project. This year, we were able to update all of our entries in California related to ALPR data.
In addition, we were able to add more than 300 new law enforcement agencies nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project from the Raleigh News & Observer.
Redoing drone data
This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020, so we reviewed and updated any entries that depended on that resource.
We also added new drone data from Illinois, Minnesota, and Texas.
We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed.
Other cool uses of the Atlas
The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the data.
The Atlas of Surveillance is regularly featured in news outlets throughout the country, including MIT Technology Review’s reporting on drones and the Auburn Reporter’s coverage of ALPR use in Washington. It has also been the focus of podcasts and is featured in the book “Resisting Data Colonialism – A Practical Intervention.”
Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects came from a senior at Northwestern University, who used the data to build a visualization of surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen, Bjarke Friborg of the project Critical Understanding of Predictive Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple academic papers, including in the Annual Review of Criminology and in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment Vision.”
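If you want to dig into the numbers yourself, the Atlas data can be downloaded and analyzed with just a few lines of code. Here is a minimal sketch in Python; it assumes a local CSV export with columns named "agency," "state," and "technology," which are illustrative placeholders rather than the Atlas’s actual schema, so check the real download before relying on them.

```python
# Minimal sketch: tally Atlas-style surveillance data by technology type.
# Assumes a local file "atlas.csv" with illustrative columns "agency",
# "state", and "technology" -- not necessarily the Atlas's real schema.
import csv
from collections import Counter

def summarize(path: str) -> None:
    tech_counts: Counter = Counter()
    states = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tech_counts[row["technology"]] += 1
            states.add(row["state"])
    print(f"Deployments recorded across {len(states)} states/territories.")
    for tech, count in tech_counts.most_common(5):
        print(f"{tech}: {count} deployments")

if __name__ == "__main__":
    summarize("atlas.csv")
```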
Thanks to our volunteers
The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.
The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency and community oversight to police use of technology. If you’re interested in joining us, get in touch.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 Year in Review
As we said last year, the U.S. Supreme Court has taken an unusually active interest in internet free speech issues over the past couple years.
All five cases that were pending at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online platforms. We posted some takeaways from these recent cases.
We additionally filed an amicus brief in a new case before the Supreme Court challenging the Texas age verification law.
Public Officials Censoring Comments on Government Social Media Pages
Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – DECIDED
The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions.
The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he spoke on social media.” As we explained, the court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.
Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in applying the new test. The court then issued an opinion in which it remanded the case back to the district court to allow the plaintiff to conduct additional factual development in light of the Supreme Court's new state action test. The Sixth Circuit also importantly held in relation to the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus brief.
Government Mandates for Platforms to Carry Certain Online Speech
Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED
The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.
In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.
Government Coercion in Social Media Content Moderation
Case: Murthy v. Missouri – DECIDED
The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.
Unfortunately, the Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases on “standing” because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion.
However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.
Some Takeaways From These Three Sets of Cases
The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about content moderation looked at the government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a regulator and user.
Three key takeaways emerged from these three sets of cases (across five total cases):
First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.
Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond the idea that content moderation is largely passive and indifferent.
Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.
Government-Mandated Age Verification
Case: Free Speech Coalition v. Paxton – PENDING
Last but not least, we filed an amicus brief urging the Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It: 2024 in Review
People’s ability to speak online, share ideas, and advocate for change is enabled by the countless online services that host everyone’s views.
Despite the central role these online services play in our digital lives, lawmakers and courts spent the last year trying to undermine a key U.S. law, Section 230, that enables services to host our speech. EFF was there to fight back on behalf of all internet users.
Section 230 (47 U.S.C. § 230) is not an accident. Congress passed the law in 1996 because it recognized that for users’ speech to flourish online, services that hosted their speech needed to be protected from legal claims based on any particular user’s speech. The law embodies the principle that everyone, including the services themselves, should be responsible for their own speech, but not the speech of others. This critical but limited legal protection reflects a careful balance by Congress, which at the time recognized that promoting more user speech outweighed the harm caused by any individual’s unlawful speech.
EFF helps thwart effort to repeal Section 230
Members of Congress introduced a bill in May this year that would have repealed Section 230 in 18 months, on the theory that the deadline would motivate lawmakers to come up with a different legal framework in the meantime. Yet the lawmakers behind the effort provided no concrete alternatives to Section 230, nor did they identify any specific parts of the law they believed needed to be changed. Instead, the lawmakers were motivated by their and the public’s justifiable dissatisfaction with the largest online services.
As we wrote at the time, repealing Section 230 would be a disaster for internet users and the small, niche online services that make up the diverse forums and communities that host speech about nearly every interest, religious and political persuasion, and topic. Section 230 protects bloggers, anyone who forwards an email, and anyone who reposts or otherwise recirculates the posts of other users. The law also protects moderators who remove or curate other users’ posts.
Moreover, repealing Section 230 would not have hurt the biggest online services, given that they have astronomical amounts of money and resources to handle the deluge of legal claims that would be filed. Instead, repealing Section 230 would have solidified the dominance of the largest online services. That’s why Facebook has long run a campaign urging Congress to weaken Section 230 – a cynical effort to use the law to cement its dominance.
Thankfully, the bill did not advance, in part because internet users wrote to members of Congress objecting to the proposal. We hope lawmakers in 2025 put their energy toward ending Big Tech’s dominance by enacting a meaningful and comprehensive consumer data privacy law, or by passing laws that enable greater interoperability and competition between social media services. Those efforts would go a long way toward checking Big Tech’s power without harming users’ online speech.
EFF stands up for users’ speech in courts
Congress was not the only government branch that sought to undermine Section 230 in the past year. Two different courts issued rulings this year that will jeopardize people’s ability to read other people’s posts and make use of basic features of online services that benefit all users.
In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit issued a deeply confused opinion, ruling that Section 230 does not apply to the automated system TikTok uses to recommend content to users. The court reasoned that because online services have a First Amendment right to decide how to present their users’ speech, TikTok’s decisions to recommend certain content reflects its own speech and thus Section 230’s protections do not apply.
We filed a friend-of-the-court brief in support of TikTok’s request for the full court to rehear the case, arguing that the decision was wrong on both the First Amendment and Section 230. We also pointed out how the ruling would have far-reaching implications for users’ online speech. The court unfortunately denied TikTok’s rehearing request, and we are waiting to see whether the service will ask the Supreme Court to review the case.
In Neville v. Snap, Inc., a California trial court refused to apply Section 230 in a lawsuit claiming that basic features of the service, such as disappearing messages, “Stories,” and the ability to befriend mutual acquaintances, amount to defectively designed products. The trial court’s ruling departs from a long line of decisions holding that such claims essentially try to plead around Section 230 by blaming a service’s features, rather than the illegal content that users created with those features.
We filed a friend-of-the-court brief in support of Snap’s effort to get a California appellate court to overturn the trial court’s decision, arguing that the ruling threatens the ability of all internet users to rely on basic features of a given service. If a platform faces liability for a feature that some might misuse to cause harm, the platform is unlikely to offer that feature at all, despite the fact that the majority of people use such features for legal and expressive purposes. Unfortunately, the appellate court denied Snap’s petition in December, meaning the case continues before the trial court.
EFF supports effort to empower users to customize their online experiences
While lawmakers and courts are often focused on Section 230’s protections for online services, relatively little attention has been paid to another provision in the law that protects those who make tools that allow users to customize their experiences online. Yet Congress included this protection precisely because it wanted to encourage the development of software that people can use to filter out certain content they’d rather not see or otherwise change how they interact with others online.
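To make the idea concrete, here is a minimal, hypothetical sketch of the kind of user-side filtering tool that provision contemplates: a client-side filter that hides posts from sources the user has chosen to mute. Every name and data structure here is invented for illustration and does not reflect any real platform’s API.

```python
# Hypothetical sketch of a user-empowerment filtering tool: the user,
# not the platform, decides which sources to mute. All names and data
# structures are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    source: str  # e.g., a page, group, or account the post came from
    text: str

def filter_feed(feed: list[Post], muted: set[str]) -> list[Post]:
    """Return only the posts whose source the user has not muted."""
    return [post for post in feed if post.source not in muted]

feed = [
    Post("gardening_group", "Tomato season is here!"),
    Post("rage_bait_page", "You won't BELIEVE this outrage..."),
]

for post in filter_feed(feed, muted={"rage_bait_page"}):
    print(f"[{post.source}] {post.text}")
```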
That is precisely the goal of a tool being developed by Ethan Zuckerman, a professor at the University of Massachusetts Amherst, known as Unfollow Everything 2.0. The browser extension would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.
Zuckerman filed a lawsuit against Facebook seeking a court ruling that Unfollow Everything 2.0 was immune from legal claims from Facebook under Section 230(c)(2)(B). EFF filed a friend-of-the-court brief in support, arguing that Section 230’s user-empowerment tool immunity is unique and incentivizes the development of beneficial tools for users, including traditional content filtering, tailoring content on social media to a user’s preferences, and blocking unwanted digital trackers to protect a user’s privacy.
The district court unfortunately dismissed the case, but its ruling did not reach the merits of whether Section 230 protects Unfollow Everything 2.0. The court gave Zuckerman an opportunity to re-file the case, and we will continue to support his efforts to build user-empowering tools.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
EFF in the Press: 2024 in Review
EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world.
Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding” and “the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.
One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.
Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s NetChoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.
EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law.
But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more.
We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities.
EFFers also spoke their minds directly in op-eds placed far and wide, including:
- Street Sheet, Feb. 15: “No on E: Endangering Accountability and Privacy” (Nash Sheard)
- 48 Hills, Feb. 27: “San Franciscans know a lot about tech. That’s why they should vote No on E” (Jason Kelley and Matthew Guariglia)
- AllAfrica, March 8: “Rihanna, FIFA, Guinness, Marvel, Nike - All Could Be Banned in Ghana” (Daly Barnett, Paige Collings, and Dave Maass)
- The Advocate, May 13: “Why I'm protecting privacy in our connected world” (Erica Portnoy)
- Teen Vogue, June 19: “The Section 230 Sunset Act Would Cut Off Young People’s Access to Online Communities” (Jason Kelley)
- UOL, Aug. 5: “ONU pode fechar pacto global de vigilância arbitrária; o que fará o Brasil?” (Veridiana Alimonti and Michel Roberto de Souza)
- Byline Times, Aug. 16, “Keir Starmer Wants Police to Expand Use of Facial Recognition Technology Across UK – He Should Ban it Altogether” (Paige Collings)
- Slate, Aug. 22, “Expanded Police Surveillance Will Get Us ‘Broken Windows’ on Steroids” (Matthew Guariglia)
- Just Security, Aug. 27: “The UN Cybercrime Convention: Analyzing the Risks to Human Rights and Global Privacy” (Katitza Rodriguez)
- Context, Sept. 17: “X ban in Brazil: Disdainful defiance meets tough enforcement” (Veridiana Alimonti)
- AZ Central/Arizona Republic, Sept. 19: “Police drones could silently video your backyard. That's a problem” (Hannah Zhao)
- Salon, Oct. 3: “Congress knew banning TikTok was a First Amendment problem. It did so anyway” (Brendan Gilligan)
- Deseret News, Nov. 30: “Opinion: Students’ tech skills should be nurtured, not punished” (Bill Budington and Alexis Hancock)
And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including:
- National Constitution Center’s We the People, Jan. 25: “Unpacking the Supreme Court’s Tech Term” (David Greene)
- What the Hack? with Adam Levin, Feb. 6: “EFF’s Eva Galperin Is Not the Pope of Fighting Stalkerware (But She Is)”
- WSJ’s The Future of Everything, Feb. 9: “How Face Scans and Fingerprints Could Become Your Work Badge” (Hayley Tsukayama)
- Fighting Dark Patterns, Feb. 14: “Dark Patterns and Digital Freedom Today. A conversation with Cindy Cohn.”
- 2600’s Off the Hook, Feb. 21: episode on Appin’s efforts to intimidate journalists and media outlets out of reporting on the company’s alleged hacking history (David Greene and Cooper Quintin)
- CISO Series’ Defense in Depth, Feb. 22: “When Is Data an Asset and When Is It a Liability?” (F. Mario Trujillo)
- KCRW’s Scheer Intelligence, March 15: “The banning of TikTok is an attack on the free market” (David Greene)
- Inside Job Boards and Recruitment Marketplaces, March 22: “Is Glassdoor now violating user privacy and anonymity?” (Aaron Mackey)
- Firewalls Don’t Stop Dragons, April 15: “Protecting Kids Online” (Joe Mullin)
- Future Nonprofit, May 7: “Empowerment in Action: Nash Sheard - Building a Strong Bond for Change and Collaboration”
- Mindplex Podcast, May 17: “Is the TikTok Ban Unconstitutional?” (David Greene)
- Bioneers: Revolution From the Heart of Nature, Aug. 8: “None of Your Business: Claiming Our Digital Privacy Rights, Reclaiming Democracy” (Cindy Cohn)
- m/Oppenheim Nonprofit Report, Aug. 27: “Digital Privacy with Electronic Frontier Foundation” (Cindy Cohn)
- Malwarebytes’ Lock and Code, Sept. 9: “What the arrest of Telegram's CEO means, with Eva Galperin”
- Financial Times’ Tech Tonic, Sept. 9: “The Telegram case: Privacy vs security” (Eva Galperin)
- Command Prompt's More Than A Refresh, Sept. 10: “Cooper Quintin, Senior Staff Technologist @ The EFF”
- Mindplex Podcast, Sept. 16: “Pavel Durov's Arrest & Telegram's Encryption Issues” (David Greene)
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
Defending Encryption in the U.S. and Abroad: 2024 in Review
EFF supporters get that strong encryption is tied to one of our most basic rights: the right to have a private conversation. In the digital world, privacy is impossible without strong encryption.
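To see what that right looks like in practice, here is a minimal sketch of an end-to-end encrypted message using the open-source PyNaCl library. This is an illustration of the underlying idea, not a full secure-messaging system: only the recipient’s private key can decrypt the message, and there is no spare key left over for anyone else.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Illustrative only -- real messengers layer much more on top of this.
from nacl.public import PrivateKey, Box

# Each person generates a keypair; private keys never leave their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"just between us")

# Only Bob's private key (with Alice's public key) can decrypt it.
# There is no third key for anyone else -- which is the whole point.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"just between us"
```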
That’s why we’ve always got an eye out for attacks on encryption. This year, we pushed back—successfully—against anti-encryption laws proposed in the U.S., the U.K., and the E.U. And we had a stark reminder of just how dangerous backdoor access to our communications can be.
U.S. Bills Pushing Mass File-Scanning Fail To Advance
The U.S. Senate’s EARN IT Act is a wrongheaded proposal that would push companies away from using encryption and toward scanning our messages and photos. There’s no reason to enact such a proposal, which technical experts agree would turn our phones into bugs in our pockets.
We were disappointed when EARN IT was voted out of committee last year, even though several senators made clear they wanted to see additional changes before they would support the bill. Since then, however, the bill has gone nowhere. That’s because so many people, including more than 100,000 EFF supporters, have voiced their opposition.
People increasingly understand that encryption is vital to our security and privacy. And when politicians demand that tech companies install dangerous scanning software whether users like it or not, it’s clear to us all that they are attacking encryption, no matter how much obfuscation takes place.
EFF has long encouraged companies to adopt policies that support encryption, privacy and security by default. When companies do the right thing, EFF supporters will side with them. EFF and other privacy advocates pushed Meta for years to make end-to-end encryption the default option in Messenger. When Meta implemented the change, they were sued by Nevada’s Attorney General. EFF filed a brief in that case arguing that Meta should not be forced to make its systems less secure.
UK Backs Off Encryption-Breaking Language
In the U.K., we fought against the wrongheaded Online Safety Act, which included language that would have let the U.K. government strong-arm companies away from using encryption. After pressure from EFF supporters and others, the U.K. government gave last-minute assurances that the bill wouldn’t be applied to encrypted messages. Ofcom, the U.K. agency in charge of implementing the Online Safety Act, has now said that the Act will not apply to end-to-end encrypted messages. That’s an important distinction, and we have urged Ofcom to make it even clearer in its written guidance.
EU Residents Do Not Want “Chat Control”
Some E.U. politicians have sought to advance a message-scanning bill that was even more extreme than the U.S. anti-encryption bills. We’re glad to say the EU proposal, which has been dubbed “Chat Control” by its opponents, has also been stalled because of strong opposition.
Even though the European Parliament last year adopted a compromise proposal that would protect our rights to encrypted communications, a few key member states at the EU Council spent much of 2024 pushing forward the old, privacy-smashing version of Chat Control. But they haven’t advanced. In a public hearing earlier this month, 10 EU member states, including Germany and Poland, made clear they would not vote for this proposal.
Courts in the E.U., like the public at large, increasingly recognize that online private communications are human rights, and the encryption required to facilitate them cannot be grabbed away. The European Court of Human Rights recognized this in a milestone judgment earlier this year, Podchasov v. Russia, which specifically held that weakening encryption put at risk the human rights of all internet users.
A Powerful Reminder on Backdoors
All three of the above proposals are based on a flawed idea: that it’s possible to give some form of special access to people’s private data that will never be exploited by a bad actor. But that’s never been true–there is no backdoor that works only for the “good guys.”
In October, the U.S. public learned about a major breach of telecom systems stemming from Salt Typhoon, a sophisticated Chinese government-backed hacking group. The hack infiltrated the same systems that major ISPs like Verizon, AT&T, and Lumen Technologies had set up for U.S. law enforcement and intelligence agencies to get “lawful access” to user data. It is still unknown how extensive the damage is; the hack reached people under surveillance by U.S. agencies but went far beyond that.
If there’s any upside to a terrible breach like Salt Typhoon, it’s that it is waking up some officials to the fact that encryption is vital to both individual and national security. Earlier this month, a top U.S. cybersecurity chief said “encryption is your friend,” a welcome break from the anti-encryption messaging we at EFF have heard from government over the years. Unfortunately, other agencies, including the FBI, continue to push the idea that strong encryption can be coupled with easy access by law enforcement.
Whatever happens, EFF will continue to stand up for our right to use encryption to have secure and private online communications.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
2024 Year in Review
It is our end-of-year tradition at EFF to look back at the last 12 months of digital rights. This year, the number and diversity of our reflections attest that 2024 was a big year.
If there is something uniting all the disparate threads of work EFF has done this year, it is this: that law and policy should be careful, precise, practical, and technologically neutral. We do not care if a cop is using a glass pressed against your door or the most advanced microphone: they need a warrant.
For example, much of the public discourse this year was taken up by generative AI. The issue seemed to be a Rorschach test for everyone’s anxieties about technology, be they privacy, replacement of workers, surveillance, or intellectual property. Ultimately, it matters little what the specific technology is: whenever technology is used against our rights, EFF will oppose that use. That is a future-proof way of protecting people. If we have privacy protections, labor protections, and protections against government invasions, then no matter what technology takes over the public imagination, we will have recourse against its harms.
But AI was only one of the issues we took on this past year. We’ve worked on ensuring that the EU’s new rules regarding large online platforms respect human rights. We’ve filed countless briefs in support of free expression online and represented plaintiffs in cases where bad actors have sought to silence them, including citizen journalists who were targeted for posting clips of city council meetings online.
With your help, we have let the United States Congress know that its citizens are for protecting the free press and against laws that would cut kids off from vital sources of information. We’ve spoken to legislators, reporters, and the public to make sure everyone is informed about the benefits and dangers of new technologies, new proposed laws, and legal precedent.
Even all of that does not capture everything we did this year. And we did not—indeed, we cannot—do it without you. Your support keeps the lights on and ensures we are not speaking just for EFF as an organization but for our thousands of tireless members. Thank you, as always.
We will update this page with new stories about digital rights in 2024 every day between now and the new year.
Defending Encryption in the U.S. and Abroad
EFF in the Press
EFF Tells Appeals Court To Keep Copyright’s Fair Use Rules Broad And Flexible
It’s critical that copyright be balanced with limitations that support users’ rights, and perhaps no limitation is more important than fair use. Critics, humorists, artists, and activists all must have rights to re-use and re-purpose source material, even when it’s copyrighted.
Yesterday, EFF weighed in on another case that could shape the future of our fair use rights. In Sedlik v. Von Drachenberg, a Los Angeles tattoo artist created a tattoo based on a well-known photograph of Miles Davis taken by photographer Jeffrey Sedlik. A jury found that Von Drachenberg, the tattoo artist, did not infringe the photographer’s copyright because her version was different from the photo; it didn’t meet the legal threshold of “substantially similar.” After the trial, the judge considered other arguments brought by Sedlik and upheld the jury’s findings.
On appeal, Sedlik has made arguments that, if upheld, could narrow fair use rights for everyone. The appeal brief suggests that only secondary users who make “targeted” use of a copyrighted work have strong fair use defenses, relying on an incorrect reading of the Supreme Court’s decision in Andy Warhol Foundation v. Goldsmith.
Such a reading would upend decades of Supreme Court precedent that makes it clear that “targeted” fair uses don’t get any special treatment as opposed to “untargeted” uses. As made clear in Warhol, the copying done by fair users must simply be “reasonably necessary” to achieve a new purpose. The principle of protecting new artistic expressions and new innovations is what led the Supreme Court to protect video cassette recording as fair use in 1984. It also contributed to the 2021 decision in Oracle v. Google, which held that Google’s copying of computer programming conventions created for desktop computers, in order to make it easier to design for modern smartphones, was a type of fair use.
Sedlik argues that if a secondary user could have chosen another work, this means they did not “target” the original work, and thus the user should have a lessened fair use case. But that has never been the rule. As the Supreme Court explained, Warhol could have created art about a product other than Campbell’s Soup; but his choice to copy the famous Campbell’s logo was fully justified because it was “well known to the public, designed to be reproduced, and a symbol of an everyday item for mass consumption.”
Fair users always select among various alternatives, for both aesthetic and practical reasons. A film professor might know of several films that expertly demonstrate a technique, but will inevitably choose just one to show in class. A news program alerting viewers to developing events may have access to many recordings of the event from different sources, but will choose just one, or a few, based on editorial judgments. Software developers must make decisions about which existing software to analyze or to interoperate with in order to build on existing technology.
The idea of penalizing these non-“targeted” fair uses would lead to absurd results, and we urge the 9th Circuit to reject this argument.
Finally, Sedlik also argues that the tattoo artist’s social media posts are necessarily “commercial” acts, which would push the tattoo art further away from fair use. Artists’ use of social media to document their processes and work has become ubiquitous, and such an expansive view of commerciality would render the concept meaningless. That’s why multiple appellate courts have already rejected such a view; the 9th Circuit should do so as well.
In order for innovation and free expression to flourish in the digital age, fair use must remain a flexible rule that allows for diverse purposes and uses.
Further Reading:
- EFF Amicus Brief in Sedlik v. Von Drachenberg