EFF: Updates
No, the UK’s Online Safety Act Doesn’t Make Children Safer Online
Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA).
The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including social media, search engines, music sites, and adult content providers—enforce age checks to prevent children from seeing harmful content. This is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:
- Primary priority content that is harmful to children:
- Pornographic content.
- Content which encourages, promotes or provides instructions for:
- suicide;
- self-harm; or
- an eating disorder or behaviours associated with an eating disorder.
- Priority content that is harmful to children:
- Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
- Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
- Content that encourages, promotes or provides instructions for serious violence against a person;
- Bullying content;
- Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional);
- Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and
- Content that encourages the self-administration of harmful substances.
- Non-designated content that is harmful to children (NDC):
- Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
- the content’s potential financial impact;
- the safety or quality of goods featured in the content; or
- the way in which a service featured in the content may be performed.
Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which includes, but is not limited to:
- Robust age checks: Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.”
To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving the age-check service permission to access information from their bank about whether they are over 18.
- Safer algorithms: Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”
- Effective moderation: All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.”
Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address for technology to analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.
The UK Must Do Better
The UK is not alone in pursuing such a misguided approach to protect children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users’ ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia’s ban on youth under the age of 16 accessing social media is likely to be implemented in December.
But the UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.
And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures.
The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.
TechEd Collab: Building Community in Arizona Around Tech Awareness
Earlier this year, EFF welcomed Technology Education Collaborative (TEC) into the Electronic Frontier Alliance (EFA). TEC empowers everyday people to become informed users of today's extraordinary technology, and helps people better understand the tech that surrounds them on a daily basis. TEC does this by hosting in-person, hands-on events, including right to repair workshops, privacy meetups, tech field trips, and demos. We got the chance to catch up with Connor Johnson, Chief Technology Officer of TEC, and speak with him about the work TEC is doing in the Greater Phoenix area:
Connor, tell us how Technology Education Collaborative got started, and about its mission.
TEC was started with the idea of creating a space where industry professionals, students, and the community at large could learn about technology together. We teamed up with Gateway Community College to build the Advanced Cyber Systems Lab. A lot of tech groups in Phoenix meet at varying locations, because they can’t afford or find a dedicated space. TEC hosts community technology-focused groups at the Advanced Cyber Systems Lab, so they can have the proper equipment to work on and collaborate on their projects.
Speaking of projects, let's talk about some of the main priorities of TEC: right to repair, privacy, and cybersecurity. Having the only right to repair hub in the greater Phoenix metro valley, what concerns do you see on the horizon?
One of our big concerns is that many companies have slowly shifted away from repairability toward a sense of convenience. We are thankful for the donations from iFixIt that allow people to use tools they might not otherwise know they need or be able to afford. Community members and IT professionals have come to use our anti-static benches to fix everything from TVs to 3D printers. We are also starting to host ‘Hardware Happy Hour’ so anyone can bring their hardware projects in and socialize with like-minded people.
How’s your privacy and cybersecurity work resonating with the community?
We have had a host of different speakers discuss the current state of privacy and how it can affect different individuals. It was also wonderful to have your Surveillance Litigation Director, Andrew Crocker, speak at our July edition of Privacy PIE. So many of the attendees were thrilled to be able to ask him questions and get clarification on current issues. Christina, CEO of TEC, has done a great job leading our Privacy PIE events and discussing the legal situation surrounding many privacy rights people take for granted. One of my favorite presentations was when we discussed privacy concerns with modern cars, where she touched on aspects like how the cameras are tied to car companies' systems and data collection.
TEC’s current goal is to focus on building a community that is not just limited to cybersecurity itself. One problem that we’ve noticed is that there are a lot of groups that focus on security but don’t branch out into other fields in tech. Security affects all aspects of technology, which is why TEC has been branching out its efforts to other fields within tech like hardware and programming. A deeper understanding of the fundamentals can help us to build better systems from the ground up, rather than applying cybersecurity as an afterthought.
In the field of cybersecurity, we have been working on a project building a small business network. The idea behind this initiative is to let small businesses independently set up a network that provides a good layer of security. Many shops either don’t have the money to afford a security-hardened network or don’t have the technical know-how to set one up. We hope this open-source project will allow people to set up the network themselves, and give students a way to gain valuable work experience.
It’s awesome to hear of all the great things TEC is doing in Phoenix! How can people plug in and get engaged and involved?
TEC can always benefit from more volunteers or donations. Our goal is to build community, and we are happy to have anyone join us. All are welcome at the Advanced Cyber Systems Lab at Gateway Community College – Washington Campus, Monday through Thursday, 4 pm to 8 pm. Our website is www.techedcollab.org, and we’re on Facebook at www.facebook.com/techedcollab. People can also join our Discord server for some great discussions and updates on our upcoming events!
👮 Amazon Ring Is Back in the Mass Surveillance Game | EFFector 37.9
EFF is gearing up to beat the heat in Las Vegas for the summer security conferences! Before we make our journey to the Strip, we figured let's get y'all up-to-speed with a new edition of EFFector.
This time we're covering an illegal mass surveillance scheme by the Sacramento Municipal Utility District, calling out dating apps for using intimate data—like sexual preferences or identity—to train AI, and explaining why we're backing the Wikimedia Foundation in its challenge to the UK’s Online Safety Act.
Don't forget to also check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Policy Analyst Matthew Guariglia explains how Amazon Ring is cashing in on the rising tide of techno-authoritarianism. Listen now on YouTube or the Internet Archive.
EFFECTOR 37.9 - Amazon Ring Is Back in the Mass Surveillance Game
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.
Podcast Episode: Smashing the Tech Oligarchy
Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more.
(You can also find this episode on the Internet Archive and on YouTube.)
Kara Swisher has been documenting the internet’s titans for almost 30 years through a variety of media outlets and podcasts. She believes that with adequate regulation we can keep people safe online without stifling innovation, and we can have an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs.
In this episode you’ll learn about:
- Why it’s so important that tech workers speak out about issues they want to improve and work to create companies that elevate best practices
- Why completely unconstrained capitalism turns technology into weapons instead of tools
- How antitrust legislation and enforcement can create a healthier online ecosystem
- Why AI could either bring abundance for many or make the very rich even richer
- The small online media outlets still doing groundbreaking independent reporting that challenges the tech oligarchy
Kara Swisher is one of the world's foremost tech journalists and critics, and currently hosts two podcasts: On with Kara Swisher and Pivot, the latter co-hosted by New York University Professor Scott Galloway. She's been covering the tech industry since the 1990s for outlets including the Washington Post, the Wall Street Journal, and the New York Times; she is a New York Magazine editor-at-large, a CNN contributor, and cofounder of the tech news sites Recode and All Things Digital. She has also authored several books, including “Burn Book” (Simon & Schuster, 2024), in which she documents the history of Silicon Valley and the tech billionaires who run it.
Resources:
- New York Times: “How Kara Swisher Scaled Even Higher” (May 19, 2025)
- On with Kara Swisher: “AMD CEO Lisa Su on AI Chips, Trump's Tariffs and the Magic of Open Source” (May 1, 2025)
- WIRED’s coverage of the Department of Government Efficiency (DOGE)
- EFF: Competition
- University of California, Berkeley Haas School of Business: Dean's Speaker Series | Kara Swisher | Tech Journalist (Oct. 1, 2024)
What do you think of “How to Fix the Internet?” Share your feedback here.
Transcript
KARA SWISHER: It's a tech that's not controlled by a small group of homogeneous people. I think that's pretty much it. I mean, and there's adequate regulation to allow for people to be safe and at the same time, not too much in order to be innovative and do things – you don't want the government deciding everything.
It's a place where the internet, which was started by US taxpayers, which was paid for, is beneficial for people, and that there's transparency in it, and that we can see what's happening and what's doing. And again, the concentration of power in the hands of a few people really is at the center of the problem.
CINDY COHN: That's Kara Swisher, describing the balance she'd like to see in a better digital future. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.
JASON KELLEY: And I'm Jason Kelley -- EFF's Activism Director. You're listening to How to Fix the Internet.
CINDY COHN: This show is about envisioning a better digital future that we can all work towards.
JASON KELLEY: And we are excited to have a guest who has been outspoken in talking about how we get there, pointing out the good, the bad and the ugly sides of the tech world.
CINDY COHN: Kara Swisher is one of the world's foremost tech journalists and critics. She's been covering the industry since the 1990s, and she currently hosts two podcasts: On with Kara Swisher and Pivot, and she's written several books, including last year's Burn Book where she documents the history of Silicon Valley and the tech billionaires who run it.
We are delighted that she's here. Welcome, Kara.
KARA SWISHER: Thank you.
CINDY COHN: We've had a couple of tech critics on the podcast recently, and one of the kind of themes that's come up for us is you kind of have to love the internet before you can hate on it. And I've heard you describe your journey that way as well. And I'd love for you to talk a little bit about it, because you didn't start off, really, looking for all the ways that things have gone wrong.
KARA SWISHER: I don't hate it. I don't. It's just, you know, I have eyes and I can see, you know, I mean, uh, one of the expressions I always use is you should, um, believe what you see, not see what you believe. And so I always just, that's what's happening. You can see it happening. You can see the coarsening of our dialogue now offline being affected by online. You could just see what's happened.
But I still love the possibilities of technology and the promise of it. And I think that's what attracted me to it in the first place, and it's a question of how you use it as a tool or a weapon. And so I always look at it as a tool and some people have taken a lot of these technologies and used them as a weapon.
CINDY COHN: So what was that moment? Did you, do you have a moment when you decided you were really interested in tech and that you really found it to be important and worth devoting your time to?
KARA SWISHER: I was always interested in it because I had studied propaganda and the uses of TV and radio and stuff. So I was always interested in media, and this was the media on steroids. And so I recall downloading an entire book onto my computer and I thought, oh, look at this. Everything is digital. And so the premise that I came to at the time, or the idea I came to was that everything that can be digitized would be digitized, and that was a huge idea because that means entire industries would change.
CINDY COHN: Yeah.
JASON KELLEY: Kara, you started by talking about this concentration of power, which is obvious to anyone who's been paying attention, and at the same time, you know, we did use to have tech leaders who, I think, they had less power. It was less concentrated, but also people were more focused, I think, on solving real problems.
You know, you talk a lot about Steve Jobs. There was a goal of improving people's lives with technology; it helped the bottom line, but the focus wasn't just on quarterly profits. And I wonder if you can talk a little bit about what you think it would look like if we returned to that in some way. Is that gone?
KARA SWISHER: I don't think we were there. I think they were always focused on quarterly profits. I think that was a canard. I wrote about it, that they would pretend that they were here to help. You know, it's sort of like the Twilight Zone episode To Serve Man. It's a cookbook. I always thought it was a cookbook for these people.
And they were always formulated in terms of making money and maximizing value for their shareholders, which was usually themselves. I wasn't stupid. I understood what they were doing, especially when these stocks went to the moon, especially the early internet days and their first boom. And they became instant, instant-airs, I think they were called that, which was instant millionaires and, and then now beyond that.
And so I was always aware of the money. Even if they pretended they weren't, they were absolutely aware. And so I don't have a romantic version of this at the beginning, um, except among a small group of people, you know, who were seeing it, like the Whole Earth Catalog and things like that, which were looking at it as a way to bring everybody together or to spread knowledge throughout the world, which I also believed in too.
JASON KELLEY: Do you think any of those people are still around?
KARA SWISHER: No, they’re dead.
JASON KELLEY: I mean, literally, you know, they're literally dead, but are there any heirs of theirs?
KARA SWISHER: No, I mean, I don't think they had any power. I don't, I think that some of the theoretical stuff was about that, but no, they didn't have any power. The people that had power were the, the Mark Zuckerbergs, the Googles, and even, you know, the Microsofts, I mean, Bill Gates is kind of the exemplification of all that. As he, he took other people's ideas and he made it into an incredibly powerful company and everybody else sort of followed suit.
JASON KELLEY: And so mostly for you, the concentration of power is the biggest shift that's happened and you see regulation or, you know, anti-competitive moves as ways to get us back.
KARA SWISHER: We don't have any, like, if we had any laws, that would be great, but we don't have any that, that constrain them. And now under President Trump, there's not gonna be any rules around AI, probably. There aren't gonna be any rules around any significant rules, at least around any of it.
So they, the first period, which was the growth of where we are now, was not constrained in any way, and now it's not just not constrained, but it's helping whether it's cryptocurrency or things like that. And so I don't feel like there's any restrictions, like at this point, in fact, there's encouragement by government to do whatever you want.
CINDY COHN: I think that's a really big worry. And you know, I think you're aware, as are we, that, you know, just because somebody comes in and says they're gonna do something about a problem with legislation doesn't mean that they're, they're actually having that. And I think sometimes we feel like we sit in this space where we're like, we agree with you on the harm, but this thing you wanna do is a terrible idea and trying to get the means and the ends connected is kind of a lot of where we live sometimes, and I think you've seen that as well, that like once you've articulated the harm, that's kind of the start of the journey about whether the thing that you're talking about doing will actually meet that moment.
KARA SWISHER: Absolutely. The harms, they don't care about, that's the issue. And I think I was always cognizant of the harms, and that can make you seem like, you know, a killjoy of some sort. But it's not, it's just saying, wow, if you're gonna do this social media, you better pay attention to this or that.
They acted like the regular problems that people had didn't exist in the world, like racism, you know, sexism. They said, oh, that can be fixed, and they never offered any solutions, and then they created tools that made it worse.
CINDY COHN: I feel like the people who thought that we could really use technology to build a better world, I, I don't think they were wrong or naive. I just think they got stomped on by the money. Um, and, you know, uh.
KARA SWISHER: Which inevitably happens.
CINDY COHN: It does. And the question is, how do you squeeze out something, you know, given that this is the dynamic of capitalism, how do you squeeze out space for protecting people?
And we've had times in our society when we've done that better, and we've done that worse. And I feel like there are ways in which this is as bad as has gotten in my lifetime. You know, with the government actually coming in really strongly on the side of, empowering the powerful and disempowering the disempowered.
I see competition as a way to do this. EFF was, you know, it was primarily an organization focused on free speech and privacy, but we kind of backed into talking about competition 'cause we felt like we couldn't get at any of those problems unless we talked about the elephant in the room.
And I think you think about it, really on the individual, you know, you know all these guys, and on that very individual level of what, what kinds of things will, um, impact them.
And I'm wondering if you have some thoughts about the kinds of rules or regulations that might actually, you know, have an impact and not, not turn into, you know, yet another cudgel that they get to wield.
KARA SWISHER: Well any, any would be good. Like I don't, I don't, there isn't any, there isn't any you could speak of that's really problematic for them, except for the courts which are suing over antitrust issues or some regulatory agencies. But in general, what they've done is created an easy glide path for themselves.
I mean, we don't have a national privacy regulation. We don't have algorithmic transparency bills. We don't have data protection really, and to speak of for people. We don't have, you know, transparency into the data they collect. You know, we have more rules and laws on airplanes and cigarettes and everybody else, but we don't have any here. So you know, antitrust is a whole nother area of, of changing, of our antitrust rules. So these are all areas that have to be looked at. But we haven't, they haven't, they haven't passed a thing. I mean, lots of legislators have tried, but, um, it hasn't worked really.
CINDY COHN: You know, a lot of our supporters are people who work in tech but aren't necessarily, you know, the tech giants. They're not the tops of these companies, but they work in the companies.
And one of the things that I, you know, I don't know if you have any insights if you've thought about this, but we speak with them a lot and they're dismayed at what's going on, but they kind of feel powerless. And I'm wondering if you have thoughts like, you know, speaking to the people who aren't, who aren't the Elons and the, the guys at the top, but who are there, and who I think are critical to keeping these companies going. Are there ways that they can make their voices heard that you've thought of that would, that might work? I guess I, I'm, I'm pulling on your insight because you know the actual people.
KARA SWISHER: Yeah, you know, speak out. Just speak out. You know, everybody gets a voice these days and there's all kinds of voices that never would've gotten heard and to, you know, talk to legislators, involve customers, um, create businesses where you do those good practices. Like that's the best way to do it is create wealth and capitalism and then use best practices there. That to me is the best way to do that.
CINDY COHN: Are there any companies that you look at from where you sit that you think are doing a pretty good job or at least trying? I don't know if you wanna call anybody out, but, um, you know, we see a few, um, and I kind of feel like all the air gets sucked out of the room.
KARA SWISHER: In bits and pieces. In bits and pieces, you know, Apple's good on the privacy thing, but then it's bad on a bunch of other things. Like you could, like, you, you, the problem is, you know, these are shareholder driven companies and so they're gonna do what's best for them and they could, uh, you know, wave over to privacy or wave over to, you know, more diversity, but they really are interested in making money.
And so I think the difficulty is figuring out, you know, do they have duties as citizens or do they just have duties as corporate citizens? And so that's always been a difficult thing in our society and will continue to be.
CINDY COHN: Yeah.
JASON KELLEY: We've always at EFF really stood up for the user in, in this way where sometimes we're praising a company that normally people are upset with because they did a good thing, right? Apple is good on privacy. When they do good privacy things we say, that's great. You know, and if Apple makes mistakes, we say that too.
And it feels like, um, you know, we're in the middle of, I guess, a “tech lash.” I don't know when it started. I don't know if it'll ever end. I don't know if that's even a real term in terms of, like, you know, tech journalism. But do you find that it's difficult to get people to accept, sort of, like, any positive praise for companies that are often just, at this point, completely easy to ridicule for all the mistakes they've made?
KARA SWISHER: I think the tech journalism has gotten really strong. It's gotten, I mean, just look at the DOGE coverage. I think it really, I'll point to WIRED as a good example, as they've done astonishing stuff. I think a lot of people have done a lot on, on, uh, you know, the abuses of social media. I think they've covered a lot of issues from the overuse of technology to, you know, all the crypto stuff. It doesn't mean people follow along, but they've certainly been there and revealed a lot of the flaws there. Um, while also covering it as like, this is what's happening with AI. Like this is what's happening, here's where it's going. And so you have to cover it as a thing. Like, this is what's being developed. But then there's, uh, others, you know, who have to look into the real problems.
JASON KELLEY: I get a lot of news from 404 Media, right?
KARA SWISHER: Yeah, they’re great.
JASON KELLEY: That sort of model is relatively new and it sort of sits against some of these legacy models. Do you see, like, a growing role for things like that in a future?
KARA SWISHER: There's lots of different things. I mean, I came from like, as you mean, part of the time, although I got away from it pretty quickly, but some of 'em are doing great. It just depends on the story, right? Some of the stories are great, like. Uh, you know, uh, there's a ton of people at the Times have done great stuff on, on, on lots of things around kids and abuses and social media.
At the same time, there's all these really exciting young, not necessarily young, actually, um, independent media companies, whether it's Casey Newton, at Platformer, or Eric Newcomer covering VCs, or 404. There's all these really interesting new stuff. That's doing really well. WIRED is another one that's really seen a lot of bounce back under its current editor who just came on relatively recently.
So it just depends. It depends on where it is, but there's, Verge does a great job. But I think it's individually the stories in, there's no like big name in this area. There's just a lot of people and then there's all these really interesting experts or people who work in tech who've written a lot. That is always very interesting too, to me. It's interesting to hear from insiders what they think is happening.
CINDY COHN: Well, I'm happy to hear this, this optimism. 'Cause I worry a lot about, you know, the way that the business model for media has really been hollowed out. And then seeing things like, you know, uh, some of the big broadcast news people folding,
KARA SWISHER: Yeah, but broadcast never did journalism for tech, come on. Like, some did, I mean, one or two, but it wasn't them who was doing it. It was usually, you know, either the New York Times or these smaller institutions have been doing a great job. There's just been tons and tons of different things, completely different things.
JASON KELLEY: What do you think about the fear, maybe I'm, I'm misplacing it, maybe it's not as real as I imagine it is. Um, that results from something like a Gawker situation, right. You know, you have wealthy people.
KARA SWISHER: That was a long time ago.
JASON KELLEY: It was, but, you know, a precedent was sort of set, right? I mean, do you think people working in tech journalism can take aim at, you know, individual people that have a lot of power and wealth in the same way that they could before?
KARA SWISHER: Yeah. I think they can, if they're accurate. Yeah, absolutely.
CINDY COHN: Yeah, I think you're a good exhibit A for that, you pull no punches and things are okay. I mean, we get asked sometimes, um, you know, are, are you ever under attack because of your, your sharp advocacy? And I kind of think your sharp advocacy protects you as long as you're right. And I think of you as somebody who's also in, in a bit of that position.
KARA SWISHER: Mmhm.
CINDY COHN: You may say this is inevitable, but I wanted to ask you, you know, I feel like when I talk with young technical people, they've kind of been poisoned by this idea that the only way you can be successful is if you're an asshole.
That there's no model, um, that just goes to the deal. So if they want to be successful, they have to be just an awful person. And so even if they might have thought differently beforehand, that's what they think they have to do. And I'm wondering if you run into this as well. I sometimes find myself trying to think about, you know, alternate role models for technical people, and if you have any that you think of.
KARA SWISHER: Alternate role models? It's mostly men. But there are, there's all kinds of, like, I just did an interview with Lisa Su, who's head of AMD, one of the few women CEOs. And in AI, there's a number of women. You know, you don't necessarily have to have diversity to make it better, but it sure helps, right? Because people have a different, not just diversity of gender or diversity of race, but diversity of backgrounds, politics. You know, the more diverse you are, the better products you make, essentially. That's always been my feeling.
Look, most of these companies are the same as it ever was, and in fact, there's fewer different people running them, essentially. Um, but you know, that's always been the nature of, of tech essentially, that it was sort of a, a man's world.
CINDY COHN: Yeah, I see that as well. I just worry that young people or junior people coming up think that the only way you can be successful is if you look like the guys who are already successful, but also, you know, if you're kind of weird and not nice.
KARA SWISHER: It just depends on the person. It's just that when you get that wealthy, you have a lot of people licking you up and down all day, and so you end up in the crazy zone like Elon Musk, or the arrogant zone like Mark Zuckerberg or whatever. They just don't get a lot of pushback, and when you don't get a lot of friction, you tend to think everything you do is correct.
JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF Awards, where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 10th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast. Have a listen to this: [WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Kara Swisher.
CINDY COHN: I mean, you watched all these tech giants kind of move over to the Trump side and then, you know, stand there on the inauguration. It sounds like you thought that might've been inevitable.
KARA SWISHER: I said it was inevitable; they were all surprised. They're always surprised when I'm like, Elon's gonna crack up with the president. Oh look, they cracked up. It's not hard to follow these people. In his case, personally, there's something wrong with his head, obviously. He always cracks up with people. So that's what happened here.
In that case, they just wanted things. They want things. You think they liked Donald Trump? You're wrong there, I'll tell you. They don't like him. They need him. They wanna use him, and they were irritated by Biden 'cause he presumed to push back on them, and he didn't do a very good job of it, honestly. But they definitely want things.
CINDY COHN: I think the tech industry came up at a time when deregulation was all the rage, right? So in some ways they were kind of born into a world where regulation was an anathema and they took full advantage of the situation.
As did lots of other areas that got deregulated or were not regulated in the first place. But because of timing, in some ways, tech was really born into this zone. And there were some good things to it, too. I mean, you know, EFF was successful in the nineties at making sure that the internet got First Amendment protection, that we didn't go the other way with things like the Communications Decency Act and squelch any adult material from being put online. But getting that right, and kind of walking through the middle ground where you have regulation that supports people but doesn't squelch them, is just an ongoing struggle.
KARA SWISHER: Mm-hmm. Absolutely.
JASON KELLEY: I have this optimistic hope that these companies and their owners sort of crumble as they continue to, as Cory Doctorow says, enshittify, right? The only reason they don't crumble is that they have this lock-in with users. They have this monopoly power. But you see, you know, a TikTok pops up and suddenly Instagram has a real competitor, not because rules have been put in place to change Instagram, but because a different, new, maybe better platform appeared.
KARA SWISHER: There’s nothing like competition, making things better. Right? Competition always helps.
JASON KELLEY: Yeah, when I think of competition law, I think of crushing companies, I think of breaking them up. But what do you think we can do to make this sort of world better and more fertile for new companies? You know, you talked earlier about tech workers.
KARA SWISHER: Well, you have to pass those things where they don't get to do that. Antitrust is the best way to do that, right? But those things move really slowly, unfortunately. And, you know, good antitrust legislation and antitrust enforcement, that's happening right now. But it opens things up. I mean, the reason Google exists is 'cause of the antitrust actions around Microsoft.
And so we have to, like, continue to press on things like that and continue to have regulators that are allowed to pursue cases like that. And then at the same time have a real focus on creating wealth. We wanna create wealth, we wanna give people breaks.
We wanna have the government involved in funding some of these things, making it so that small companies don't get run over by larger companies.
Not letting power concentrate into a small group of people. When that happens, you end up with fewer companies. They kill them in the crib, these companies. And so not letting things get bought, having scrutiny over things, stuff like that.
CINDY COHN: Yeah, I think a lot more merger review makes a lot of sense. I think a lot of thinking about how companies are crushing each other, and what are the things that we can do to try to stop that? Obviously we care a lot about interoperability, making sure that technologies that have you as a customer don't get to lock you in and leave you stuck with their broken business model, so that you can do other things.
There's a lot of space for that kind of thing. I mean, you know, I always tell the story, I'm sure you know this, that, you know, if it weren't for the FCC telling AT&T that they had to let people plug something other than phones into the wall, we wouldn't have had the internet, you know, the home internet revolution anyway.
KARA SWISHER: Right. Absolutely. 100%.
CINDY COHN: Yeah, so I think we are in agreement with you that, you know, competition is really central, but it's kind of an all-of-the-above situation, certainly around privacy issues. We can do a lot about this business model, which I think is driving so many of the other bad things that we are seeing, with some comprehensive privacy law.
But boy, it sure feels like right now, you know, we've got two branches of government that are not on board with that. And the third one, the courts, doing okay, but slowly and inconsistently. Um, where do you see hope? Where are you looking?
KARA SWISHER: I mean, some of this stuff around AI could be really great for humanity, or it could be great for a small amount of people. That's really, you know, which one do we want? Do we want this technology to be a tool or a weapon against us? Do we want it to be in the hands of bigger companies or in the hands of all of us and we make decisions around it?
Will it help us be safer? Will it help us cure cancer, or is it gonna just make a rich person a billion dollars richer? I mean, it's the age-old story, isn't it? This is not a new theme in America, where the rich get richer and the poor get less. And so these technologies could, as you know, there's a book out recently all about abundance.
It could create lots of abundance. It could create lots of interesting new jobs, or it could just put people outta work and let the people who are richer get richer. And I don't think that's a society we wanna have. Years ago I was talking about income inequality with a really wealthy person, and I said, you have to do something about, you know, the fact that we don't have a $25 minimum wage, which I think would help a lot; lots of innovation would come from that. If people made more money, they'd have a little more choice. And it's worth the investment in people to do that.
And I said, we have to either deal with income inequality or armor plate your Tesla. And I think he wanted to armor plate his Tesla. And then of course, the Cybertruck comes out. So there you have it. But, um, I think they don't care about that kind of stuff. You know, they're happy to create their little worlds where they're highly protected, but it's not a world I wanna live in.
CINDY COHN: Kara, thank you so much. We really appreciate you coming in. I think you sit in such a different place in the world than where we sit, and it's always great to get your perspective.
KARA SWISHER: Absolutely. Anytime. You guys do amazing work, and you know you're doing amazing work, and you should always keep a watch on these people. You shouldn't be against everything, 'cause some people are right. But you certainly should keep a watch on people.
CINDY COHN: Well, great. We sure will.
JASON KELLEY: Yeah, we'll keep doing it. Thank you,
CINDY COHN: Thank you.
KARA SWISHER: All right. Thank you so much.
CINDY COHN: Well, I always appreciate how Kara gets right to the point about how the concentration of power among a few tech moguls has led to so many of the problems we face online, and how competition, along with the things we so often hear about, real laws requiring transparency, privacy protections, and data protections, can help shift the tide.
JASON KELLEY: Yeah, you know, some of these fixes are things that people have been talking about for a long time, and I think we're at a point where everyone agrees on a big chunk of them. You know, especially the ones that we promote, like competition and transparency and privacy. So it's great to hear that Kara, who's someone that has worked on this issue and in tech for a long time and thought about it and loves it, as she said, agrees with us on some of the most important solutions.
CINDY COHN: Sometimes these criticisms of the tech moguls can feel like something everybody does, but I think it's important to remember that Kara was really one of the first ones to start pointing this out. And I also agree with you, you know, she's a person who comes from the position of really loving tech. And Kara's even a very strong capitalist. She really loves making money as well. You know, her criticism comes from a place of betrayal, that, again, like Molly White, earlier this season, kind of comes from a position of, you know, seeing the possibilities and loving the possibilities, and then seeing how horribly things are really going in the wrong direction.
JASON KELLEY: Yeah, she has this framing of, is it a tool or a weapon? And it feels like a lot of the tools that she loved became weapons, which I think is how a lot of us feel. You know, it's not always clear how to draw that line. But it's obviously a good question that people, you know, working in the tech field, and I think people even using technology should ask themselves, when you're really enmeshed with it, is the thing you're using or building or promoting, is it working for everyone?
You know, what are the chances, how could it become a weapon? You know, this beautiful tool that you're loving and you have all these good ideas and, you know, ideas that, that it'll change the world and improve it. There's always a way that it can become a weapon. So I think it's an important question to ask and, and an important question that people, you know, working in the field need to ask.
CINDY COHN: Yeah. And I think that, you know, that's the gem of her advice to tech workers. You know, find a way to make your voice heard if you see this happening. And there's a power in that. I do think that one thing that's still true in Silicon Valley is they compete for top talent.
And, you know, top talent indicating that they're gonna make choices based on some values is one of the levers of power. Now I don't think anybody thinks that's the only one. This isn't an individual responsibility question. We need laws, we need structures. You know, we need some structural changes in antitrust law and elsewhere in order to make that happen. It's not all on the shoulders of the tech workers, but I appreciate that she really did say, you know, there's a role to be played here. You're not just pawns in this game.
JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program for Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.
CINDY COHN: And I'm Cindy Cohn.
MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.
Ryanair’s CFAA Claim Against Booking.com Has Nothing To Do with Actual Hacking
The Computer Fraud and Abuse Act (CFAA) is supposed to be about attacks on computer systems. It is not, as a federal district court suggested in Ryanair v. Booking.com, a law that applies when someone uses valid login credentials to access information those credentials were issued to access. Now that the case is on appeal, EFF has filed an amicus brief asking the Third Circuit to clarify that this case is about violations of policy, not hacking, and does not qualify as access “without authorization” under the CFAA.
The case concerns transparency in airfare pricing. Ryanair complained that Booking republished Ryanair’s prices, some of which were only visible when a user logged in. Ryanair sent a cease and desist to Booking, but didn't deactivate the usernames and passwords associated with the uses they disliked. When the users allegedly connected to Booking kept using those credentials to gather pricing data, Ryanair claimed it was a CFAA violation. If this doesn’t sound like “computer hacking” to you, you’re right.
The CFAA has proven bad for research, security, competition, and innovation. For years we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Ryanair’s claims here, which amount to a terms-of-use violation, because the information that was accessed is available to anyone with login credentials. This is the course charted by Van Buren v. United States, where the Supreme Court explained that “authorization” refers to technical concepts of computer authentication. As we stated in our brief:
The CFAA does not apply to every person who merely violates terms of service by sharing account credentials with a family member or by withholding sensitive information like one’s real name and birthdate when making an account.
Building on the good decisions in Van Buren and the Ninth Circuit’s ruling in hiQ Labs v. LinkedIn, we weighed in at the Third Circuit urging the court to hold clearly that triggering a CFAA violation requires bypassing a technology that restricts access. In this case, the login credentials that were created provided legitimate access. But the rule adopted by the lower court would criminalize many everyday behaviors, like logging into a streaming service account with a partner’s login, or logging into a spouse’s bank account to pay a bill at their behest. This is not hacking or a violation of the CFAA; it’s just violating a company’s wish list in its Terms of Service.
This rule would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may make different accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. Yet according to the court’s opinion, if a company disagrees with this sort of research, it could not only ban the researchers from using the site; it could render that research criminal simply by sending a letter notifying the researchers that they’re not authorized to use the service in this way.
Many other examples and common research techniques used by journalists, academic researchers, and security researchers would be at risk under this rule, but the end result would be the same no matter what: it would chill valuable research that keeps us all safer online.
A broad reading of CFAA in this case would also undermine competition by providing a way for companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.
Courts must follow Van Buren’s lead and interpret the CFAA as narrowly as it was designed. Logging into a public website with valid credentials, even if you scrape the data once you’re logged in, is not hacking. A broad reading leads to unintended consequences, and website owners do not need new shields against independent accountability.
You can read our amicus brief here.
You Went to a Drag Show—Now the State of Florida Wants Your Name
If you thought going to a Pride event or drag show was just another night out, think again. If you were in Florida, it might land your name in a government database.
That’s what’s happening in Vero Beach, FL, where the Florida Attorney General’s office has subpoenaed a local restaurant, The Kilted Mermaid, demanding surveillance video, guest lists, reservation logs, and contracts of performers and other staff—all because the venue hosted an LGBTQ+ Pride event.
To be clear: no one has been charged with a crime, and the law Florida is likely leaning on here—the so-called “Protection of Children Act” (which was designed to be a drag show ban)—has already been blocked by federal courts as likely unconstitutional. But that didn’t stop Attorney General James Uthmeier from pushing forward anyway. Without naming a specific law that was violated, the AG’s press release used pointed and accusatory language, stating that "In Florida, we don't sacrifice the innocence of children for the perversions of some demented adults.” His office is now fishing for personal data about everyone who attended or performed at the event. This should set off every civil liberties alarm bell we have.
Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy.
Drag shows—many of which are family-friendly and feature no sexual content—have become a political scapegoat. And while that rhetoric might resonate in some media environments, the real-world consequences are much darker: state surveillance of private citizens doing nothing but attending a fun community celebration. By demanding video surveillance, guest lists, and reservation logs, the state isn’t investigating a crime, it is trying to scare individuals from attending a legal gathering. These are people who showed up at a public venue for a legal event, while a law restricting it was not even in effect.
The Supreme Court has ruled multiple times that subpoenas forcing disclosure of members of peaceful organizations have a chilling effect on free expression. Whether it’s a civil rights protest, a church service, or, yes, a drag show: the First Amendment protects the confidentiality of lists of attendees.
Even if the courts strike down this subpoena—and they should—the damage will already be done. A restaurant owner (who also happens to be the town’s vice mayor) is being dragged into a state investigation. Performers’ identities are potentially being exposed—whether to state surveillance, inclusion in law enforcement databases, or future targeting by anti-LGBTQ+ groups. Guests who thought they were attending a fun community event are now caught up in a legal probe. These are the kinds of chilling, damaging consequences that will discourage Floridians from hosting or attending drag shows, and could stamp out the art form entirely.
EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database.
Just Banning Minors From Social Media Is Not Protecting Them
By publishing its guidelines under Article 28 of the Digital Services Act, the European Commission has taken a major step towards social media bans that will undermine privacy, expression, and participation rights for young people that are already enshrined in international human rights law.
EFF recently submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Online safety for young people must include privacy and security for them and must not come at the expense of freedom of expression and equitable access to digital spaces.
Article 28 requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. But the article also prohibits targeting minors with personalized ads, a measure that would seem to require that platforms know that a user is a minor. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy and requiring platforms to know the age of every user. The DSA does not resolve this tension. Rather, it states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage.
Thus, the question of age checks is a key to understanding the obligations of online platforms to safeguard minors online. Our submission explained the serious concerns that age checks pose to the rights and security of minors. All methods for conducting age checks come with serious drawbacks. Approaches to verify a user’s age generally involve some form of government-issued ID document, which millions of people in Europe—including migrants, members of marginalized groups and unhoused people, exchange students, refugees and tourists—may not have access to.
Other age assurance methods, like biometric age estimation or age estimation based on email addresses or user activity, involve the processing of vast amounts of personal, sensitive data, usually in the hands of third parties. Beyond being potentially exposed to discrimination and erroneous estimations, users are asked to trust platforms’ opaque supply chains and hope for the best. Age assurance methods always impact the rights of children and teenagers: their rights to privacy and data protection, free expression, information, and participation.
The Commission's guidelines contain a wealth of measures elucidating the Commission's understanding of "age appropriate design" of online services. We have argued that some of them, including default settings that protect users’ privacy, effective content moderation, and ensuring that recommender systems don’t rely on the collection of behavioral data, are practices that would benefit all users.
But while the initial Commission draft document considered age checks as only a tool to determine users’ ages to be able to tailor their online experiences according to their age, the final guidelines go far beyond that. Crucially, the European Commission now seems to consider “measures restricting access based on age to be an effective means to ensure a high level of privacy, safety and security for minors on online platforms” (page 14).
This is a surprising turn, as many in Brussels have considered social media bans like the one Australia passed (and still doesn’t know how to implement) disproportionate. Responding to mounting pressure from Member States like France, Denmark, and Greece to ban young people under a certain age from social media platforms, the guidelines contain an opening clause for national rules on age limits for certain services. According to the guidelines, the Commission considers such access restrictions appropriate and proportionate where “union or national law, (...) prescribes a minimum age to access certain products or services (...), including specifically defined categories of online social media services”. This opens the door for different national laws introducing different age limits for services like social media platforms.
It’s concerning that the Commission generally considers the use of age verification proportionate in any situation where a provider of an online platform identifies risks to minors’ privacy, safety, or security and those risks “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification” (page 17). This view risks establishing a broad legal mandate for age verification measures.
It is clear that such bans will do little in the way of making the internet a safer space for young people. By banning a particularly vulnerable group of users from accessing platforms, the providers themselves are let off the hook: If it is enough for platforms like Instagram and TikTok to implement (comparatively cheap) age restriction tools, there are no incentives anymore to actually make their products and features safer for young people. Banning a certain user group changes nothing about problematic privacy practices, insufficient content moderation or business models based on the exploitation of people’s attention and data. And assuming that teenagers will always find ways to circumvent age restrictions, the ones that do will be left without any protections or age-appropriate experiences.
Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy
In the past few years, governments across the world have rolled out digital identification options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This post is the first in a short series that will explain digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.
Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation mandating online services and companies to introduce technologies that require people to verify their identities to access content deemed appropriate for their age. But for most people, having physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not. Physical ID systems involve hundreds of factors that impact their accuracy and validity, and everyday situations occur where identification attributes can change, or an ID becomes invalid or inaccurate or needs to be reissued: addresses change, driver’s licenses expire or have suspensions lifted, or temporary IDs are issued in lieu of obtaining permanent identification.
The digital ID systems currently being introduced potentially solve some problems, like identity fraud, for business and government services, but leave the holder of the digital ID vulnerable to the needs of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims to public assistance and other services as to facilitate them. That’s why legal protections are as important as the digital IDs themselves. To add to this, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can’t ask of the holder. In response, some privacy mechanisms have been suggested, but few have been made mandatory; one common promise is that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy aspects of sharing ID attributes.
Zero Knowledge Proofs: The Good News
The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is being able to incorporate and share something called a Zero Knowledge Proof (ZKP) for a website or mobile application to verify ID information, without having to share the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to avoid giving something away, like your exact date of birth and age from your ID, instead offering a “yes-or-no” claim (like above or below 18) to a verifier requiring a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness is appealing to verifiers and governments because it makes it hard for an ID holder to present forged information (the holder won’t know the “secret”). Zero knowledge can be beneficial to the holder, because they don’t have to share explicit information like a birth date, just cryptographic proof that said information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products”.
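To make the soundness and zero-knowledge properties above concrete, here is a minimal, runnable sketch of the classic Schnorr proof of knowledge: the holder convinces a verifier they know a secret exponent without ever revealing it. This is an illustration of the ZKP "shape" only; real age-verification schemes prove richer statements (e.g. "my credential's birthdate implies age over 18") inside far more complex circuits, and the tiny parameters and helper names below are our own toy choices, not anything a production system would use.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge, made non-interactive via Fiat-Shamir.
# Parameters are deliberately tiny for readability and are NOT secure.
P = 23   # modulus
Q = 11   # order of the subgroup generated by G
G = 2    # generator of that order-Q subgroup mod P

def challenge(*parts: int) -> int:
    """Fiat-Shamir challenge: hash the public transcript down to [0, Q)."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(h, 16) % Q

def prove(secret_x: int, public_y: int):
    """Prove knowledge of x with y = G^x mod P, revealing nothing about x."""
    r = secrets.randbelow(Q)        # fresh randomness for each proof
    t = pow(G, r, P)                # commitment
    c = challenge(G, public_y, t)   # challenge derived from public values only
    s = (r + c * secret_x) % Q      # response (x stays hidden inside s)
    return t, s

def verify(public_y: int, proof) -> bool:
    """Soundness check: G^s must equal t * y^c (mod P)."""
    t, s = proof
    c = challenge(G, public_y, t)
    return pow(G, s, P) == (t * pow(public_y, c, P)) % P

# The holder's secret (a stand-in for a hidden credential attribute).
x = 7
y = pow(G, x, P)                    # the public value the verifier knows
t, s = prove(x, y)
assert verify(y, (t, s))            # an honest proof passes
assert not verify(y, (t, (s + 1) % Q))  # a tampered proof fails
```

The verifier only ever sees `y`, `t`, and `s`; none of them reveal `x`. An age-verification ZKP replaces "I know x" with a statement about an ID attribute, but the flow of commitment, challenge, and response is the same.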
Zero Knowledge Proofs: The Bad News
What ZKPs don’t do is mitigate verifier abuse or rein in verifier requests, such as over-asking for information they don’t need or repeatedly demanding your age over time. Nor do they prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or device information, while you interact with them.
ZKPs are a great tool for sharing less data about ourselves over time or in a one-time transaction. But they do little about the data broker industry, which already holds massive profiles of data on people. We understand that this was not what ZKPs for age verification were presented to solve. But it is still imperative to point out that utilizing this technology to share even more about ourselves online through mandatory age verification widens the scope of sharing in an already saturated ecosystem of easily linked personal information online. Going from presenting your physical ID maybe 2-3 times a week to potentially proving your age to multiple websites and apps every day will make going online a burden at minimum, and a barrier entirely for those who can’t obtain an ID.
Protecting The Way Forward
Mandatory age verification takes the potential privacy benefits of mobile IDs and proposed ZKP solutions, then warps them into speech-chilling mechanisms.
Until the hard questions of power imbalances with potentially abusive verifiers and of preventing phoning home to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID requires more than ZKPs as a catch-all for privacy concerns. Safety online is not solved through technology alone; it involves multiple, ongoing conversations. Yes, that sounds harder than imposing age checks on everyone online. Maybe that’s why age checks are so tempting to implement. But we encourage policymakers and lawmakers to pursue what is best, not what is easy.
Canada’s Bill C-2 Opens the Floodgates to U.S. Surveillance
The Canadian government is preparing to give away Canadians’ digital lives—to U.S. police, to the Donald Trump administration, and possibly to foreign spy agencies.
Bill C-2, the so-called Strong Borders Act, is a sprawling surveillance bill with multiple privacy-invasive provisions. But the thrust is clear: it’s a roadmap to aligning Canadian surveillance with U.S. demands.
It’s also a giveaway of Canadian constitutional rights in the name of “border security.” If passed, it will shatter privacy protections that Canadians have spent decades building. This will affect anyone using Canadian internet services, including email, cloud storage, VPNs, and messaging apps.
A joint letter, signed by dozens of Canadian civil liberties groups and more than a hundred Canadian legal experts and academics, puts it clearly: Bill C-2 is “a multi-pronged assault on the basic human rights and freedoms Canada holds dear,” and “an enormous and unjustified expansion of power for police and CSIS to access the data, mail, and communication patterns of people across Canada.”
Bill C-2 isn’t just a domestic surveillance bill. It’s a Trojan horse for U.S. law enforcement—quietly building the pipes to ship Canadians’ private data straight to Washington.
If Bill C-2 passes, Canadian police and spy agencies will be able to demand information about peoples’ online activities based on the low threshold of “reasonable suspicion.” Companies holding such information would have only five days to challenge an order, and would receive blanket immunity from lawsuits if they hand over data.
Police and CSIS, the Canadian intelligence service, will be able to find out whether you have an online account with any organization or service in Canada. They can demand to know how long you’ve had it, where you’ve logged in from, and which other services you’ve interacted with, with no warrant required.
The bill will also allow for the introduction of encryption backdoors. Forcing companies to surveil their customers is allowed under the law (see part 15), as long as these mandates don’t introduce a “systemic vulnerability”—a term the bill doesn’t even bother to define.
The information gathered under these new powers is likely to be shared with the United States. Canada and the U.S. are currently negotiating a misguided agreement to share law enforcement information under the US CLOUD Act.
The U.S. and U.K. put a CLOUD Act deal in place in 2020, and it hasn’t been good for users. Earlier this year, the U.K. Home Office ordered Apple to let it spy on users’ encrypted accounts. That security risk caused Apple to stop offering U.K. users certain advanced encryption features, and lawmakers and officials in the United States have raised concerns that the U.K.’s demands might have been designed to leverage its expanded CLOUD Act powers.
If Canada moves forward with Bill C-2 and a CLOUD Act deal, American law enforcement could demand data from Canadian tech companies in secrecy—no notice to users would be required. Companies could also expect gag orders preventing them from even mentioning they have been forced to share information with US agencies.
This isn’t speculation. Earlier this month, a Canadian government official told Politico that this surveillance regime would give Canadian police “the same kind of toolkit” that their U.S. counterparts have under the PATRIOT Act and FISA. The bill allows for “technical capability orders.” Those orders mean the government can force Canadian tech companies, VPNs, cloud providers, and app developers—regardless of where in the world they are based—to build surveillance tools into their products.
Under U.S. law, non-U.S. persons have little protection from foreign surveillance. If U.S. cops want information on abortion access, gender-affirming care, or political protests happening in Canada, they’re going to get it. The data-sharing won’t necessarily be limited to the U.S., either. There’s nothing to stop authoritarian states from demanding this new trove of Canadians’ private data that will be secretly doled out by Canada’s law enforcement agencies.
EFF joins the Canadian Civil Liberties Association, OpenMedia, researchers at Citizen Lab, and dozens of other Canadian organizations and experts in asking the Canadian federal government to withdraw Bill C-2.
Further reading:
- Joint letter opposing Bill C-2, signed by the Canadian Civil Liberties Association, OpenMedia, Citizen Lab, and dozens of other Canadian groups
- CCLA blog calling for withdrawal of Bill C-2
- Citizen Lab (University of Toronto) blog post on Canadian CLOUD Act deal
- Citizen Lab blog post on Bill C-2
- EFF one-pager and blog on problems with the CLOUD Act, published before the bill was made law in 2018
You Shouldn’t Have to Make Your Social Media Public to Get a Visa
The Trump administration is continuing its dangerous push to surveil and suppress foreign students’ social media activity. The State Department recently announced an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department also indicated that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.
The administration is penalizing prospective students and visitors for shielding their social media accounts from the general public or for choosing to not be active on social media. This is an outrageous violation of privacy, one that completely disregards the legitimate and often critical reasons why millions of people choose to lock down their social media profiles, share only limited information about themselves online, or not engage in social media at all. By making students abandon basic privacy hygiene as the price of admission to American universities, the administration is forcing applicants to expose a wealth of personal information to not only the U.S. government, but to anyone with an internet connection.
Why Social Media Privacy Matters
The administration’s new policy is a dangerous expansion of existing social media collection efforts. While the State Department has required since 2019 that visa applicants disclose their social media handles—a policy EFF has consistently opposed—forcing applicants to make their accounts public crosses a new line.
Individuals have significant privacy interests in their social media accounts. Social media profiles contain some of the most intimate details of our lives, such as our political views, religious beliefs, health information, likes and dislikes, and the people with whom we associate. Such personal details can be gleaned from vast volumes of data given the unlimited storage capacity of cloud-based social media platforms. As the Supreme Court has recognized, “[t]he sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions”—all of which and more are available on social media platforms.
By requiring visa applicants to share these details, the government can obtain information that would otherwise be inaccessible or difficult to piece together across disparate locations. For example, while visa applicants are not required to disclose their political views in their applications, applicants might choose to post their beliefs on their social media profiles.
This information, once disclosed, doesn’t just disappear. Existing policy allows the government to continue surveilling applicants’ social media profiles even once the application process is over. And personal information obtained from applicants’ profiles can be collected and stored in government databases for decades.
What’s more, by requiring visa applicants to make their private social media accounts public, the administration is forcing them to expose troves of personal, sensitive information to the entire internet, not just the U.S. government. This could include various bad actors like identity thieves and fraudsters, foreign governments, current and prospective employers, and other third parties.
Those in applicants’ social media networks—including U.S. citizen family or friends—can also become surveillance targets by association. Visa applicants’ online activity is likely to reveal information about the users with whom they’re connected. For example, a visa applicant could tag another user in a political rant or post photos of themselves and the other user at a political rally. Anyone who sees those posts might reasonably infer that the other user shares the applicant’s political beliefs. The administration’s new requirement will therefore publicly expose the personal information of millions of additional people, beyond just visa applicants.
There are Very Good Reasons to Keep Social Media Accounts Private
An overwhelming number of social media users maintain private accounts for the same reason we put curtains on our windows: a desire for basic privacy. There are numerous legitimate reasons people choose to share their social media only with trusted family and friends, whether that’s ensuring personal safety, maintaining professional boundaries, or simply not wanting to share personal profiles with the entire world.
Safety from Online Harassment and Physical Violence
Many people keep their accounts private to protect themselves from stalkers, harassers, and those who wish them harm. Domestic violence survivors, for example, use privacy settings to hide from their abusers, and organizations supporting survivors often encourage them to maintain a limited online presence.
Women also face a variety of gender-based online harms made worse by public profiles, including stalking, sexual harassment, and violent threats. A 2021 study reported that at least 38% of women globally had personally experienced online abuse, and at least 85% of women had witnessed it. Women are, in turn, more likely to activate privacy settings than men.
LGBTQ+ individuals similarly have good reasons to lock down their accounts. Individuals from countries where their identity puts them in danger rely on privacy protections to stay safe from state action. People may also reasonably choose to lock their accounts to avoid the barrage of anti-LGBTQ+ hate and harassment that is common on social media platforms, which can lead to real-world violence. Others, including LGBTQ+ youth, may simply not be ready to share their identity outside of their chosen personal network.
Political Dissidents, Activists, and Journalists
Activists working on sensitive human rights issues, political dissidents, and journalists use privacy settings to protect themselves from doxxing, harassment, and potential political persecution by their governments.
Rather than protecting these vulnerable groups, the administration’s policy instead explicitly targets political speech. The State Department has given embassies and consulates a vague directive to vet applicants’ social media for “hostile attitudes towards our citizens, culture, government, institutions, or founding principles,” according to an internal State Department cable obtained by multiple news outlets. This includes looking for “applicants who demonstrate a history of political activism.” The cable did not specify what, exactly, constitutes “hostile attitudes.”
Professional and Personal Boundaries
People use privacy settings to maintain boundaries between their personal and professional lives. They share family photos, sensitive updates, and personal moments with close friends—not with their employers, teachers, professional connections, or the general public.
The Growing Menace of Social Media Surveillance
This new policy is an escalation of the Trump administration’s ongoing immigration-related social media surveillance. EFF has written about the administration’s new “Catch and Revoke” effort, which deploys artificial intelligence and other data analytic tools to review the public social media accounts of student visa holders in an effort to revoke their visas. And EFF recently submitted comments opposing a USCIS proposal to collect social media identifiers from visa and green card holders already living in the U.S., including when they submit applications for permanent residency and naturalization.
The administration has also started screening many non-citizens' social media accounts for ambiguously-defined “antisemitic activity,” and previously announced expanded social media vetting for any visa applicant seeking to travel specifically to Harvard University for any purpose.
The administration claims this mass surveillance will make America safer, but there’s little evidence to support this. By the government’s own previous assessments, social media surveillance has not proven effective at identifying security threats.
At the same time, these policies gravely undermine freedom of speech, as we recently argued in our USCIS comments. The government is using social media monitoring to directly target and punish through visa denials or revocations foreign students and others for their digital speech. And the social media surveillance itself broadly chills free expression online—for citizens and non-citizens alike.
In defending the new requirement, the State Department argued that a U.S. visa is a “privilege, not a right.” But privacy and free expression should not be privileges. These are fundamental human rights, and they are rights we abandon at our peril.
We're Envisioning A Better Future
Whether you've been following EFF for years or just discovered us (hello!), you've probably noticed that our team is kind of obsessed with the ✨future✨.
From people soaring through the sky, to space cats, geometric unicorns, and (so many) mechas—we're always imagining what the future could look like when we get things right.
That same spirit inspired EFF's 35th anniversary celebration. And this year, members can get our new EFF 35 Cityscape t-shirt plus a limited-edition challenge coin with a monthly or annual Sustaining Donation!
Start a convenient recurring donation today!
The EFF 35 Cityscape proposes a future where users are empowered to
- Repair and tinker with their devices
- Move freely without being tracked
- Innovate with bold new ideas
And this future isn't far off—we're building it now.
EFF is pushing for right to repair laws across the country, exposing shady data brokers, and ensuring new technologies—like AI—have your rights in mind. EFF is determined, and with your help, we’re not backing down.
We're making real progress—but we need your help. As a member-supported nonprofit, you are what powers this work.
Start a Sustaining Donation of $5/month or $65/year by August 11, and we'll thank you with a limited-edition EFF35 Challenge Coin as well as this year's Cityscape t-shirt!
EFF to Court: Protect Our Health Data from DHS
The federal government is trying to use Medicaid data to identify and deport immigrants. So EFF and our friends at EPIC and the Protect Democracy Project have filed an amicus brief asking a judge to block this dangerous violation of federal data privacy laws.
Last month, the AP reported that the U.S. Department of Health and Human Services (HHS) had disclosed to the U.S. Department of Homeland Security (DHS) a vast trove of sensitive data obtained from states about people who obtain government-assisted health care. Medicaid is a federal program that funds health insurance for low-income people; it is partially funded and primarily managed by states. Some states, using their own funds, allow enrollment by non-citizens. HHS reportedly disclosed to DHS the Medicaid enrollee data from several of these states, including enrollee names, addresses, immigration status, and claims for health coverage.
In response, California and 19 other states sued HHS and DHS. The states allege, among other things, that these federal agencies violated (1) the data disclosure limits in the Social Security Act, the Privacy Act, and HIPAA, and (2) the notice-and-comment requirements for rulemaking under the Administrative Procedure Act (APA).
Our amicus brief argues that (1) disclosure of sensitive Medicaid data causes a severe privacy harm to the enrolled individuals, (2) the APA empowers federal courts to block unlawful disclosure of personal data between federal agencies, and (3) the broader public is harmed by these agencies’ lack of transparency about these radical changes in data governance.
A new agency agreement, recently reported by the AP, allows Immigration and Customs Enforcement (ICE) to access the personal data of Medicaid enrollees held by HHS’ Centers for Medicare and Medicaid Services (CMS). The agreement states: “ICE will use the CMS data to allow ICE to receive identity and location information on aliens identified by ICE.”
In the 1970s, in the wake of the Watergate and COINTELPRO scandals, Congress wisely enacted numerous laws to protect our data privacy from government misuse. This includes strict legal limits on disclosure of personal data within an agency, or from one agency to another. EFF sued over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, and filed an amicus brief in a suit challenging ICE grabbing taxpayer data. We’ve also reported on the U.S. Department of Agriculture’s grab of food stamp data and DHS’s potential grab of postal data. And we’ve written about the dangers of consolidating all government information.
We have data protection rules for good reason, and these latest data grabs are exactly why.
You can read our new amicus brief here.
Dating Apps Need to Learn How Consent Works
Staying safe whilst dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit. But dating apps are taking shortcuts in safeguarding the privacy and security of users in favour of developing and deploying AI tools on their platforms, sometimes by using your most personal information to train their AI tools.
Grindr has big plans for its gay wingman bot, Bumble launched AI Icebreakers, Tinder introduced AI tools to choose profile pictures for users, OKCupid teamed up with AI photo editing platform Photoroom to erase your ex from profile photos, and Hinge recently launched an AI tool to help users write prompts.
The list goes on, and the privacy harms are significant. Dating apps have built platforms that encourage people to be exceptionally open with sensitive and potentially dangerous personal information. But at the same time, the companies behind the platforms collect vast amounts of intimate details about their customers—everything from sexual preferences to precise location—who are often just searching for compatibility and connection. This data falling into the wrong hands can—and has—come with unacceptable consequences, especially for members of the LGBTQ+ community.
This is why corporations should provide opt-in consent for AI training data obtained through channels like private messages, and employ minimization practices for all other data. Dating app users deserve the right to privacy, and should have a reasonable expectation that the contents of conversations—from text messages to private pictures—are not going to be shared or used for any purpose that opt-in consent has not been provided for. This includes the use of personal data for building AI tools, such as chatbots and picture selection tools.
AI Icebreakers
Back in December 2023, Bumble introduced AI Icebreakers to the ‘Bumble for Friends’ section of the app to help users start conversations by providing them with AI-generated messages. Powered by OpenAI’s ChatGPT, the feature was deployed in the app without ever asking users for consent. Instead, the company presented a pop-up upon entering the app, which reappeared every time the app was reopened until individuals finally relented and tapped ‘Okay.’
Obtaining user data without explicit opt-in consent is bad enough. But Bumble has taken this even further by sharing personal user data from its platform with OpenAI to feed into the company’s AI systems. By doing this, Bumble has forced its AI feature on millions of users in Europe—without their consent but with their personal data.
In response, European nonprofit noyb recently filed a complaint with the Austrian data protection authority over Bumble’s violation of its transparency obligations under Article 5(1)(a) GDPR. In its report, noyb flagged concerns around Bumble’s data sharing with OpenAI, which allowed the company to generate an opening message based on information users shared on the app.
In its complaint, noyb specifically alleges that Bumble:
- Failed to provide information about the processing of personal data for its AI Icebreaker feature
- Confused users with a “fake” consent banner
- Lacks a legal basis under Article 6(1) GDPR as it never sought user consent and cannot legally claim to base its processing on legitimate interest
- Can only process sensitive data—such as data involving sexual orientation—with explicit consent per Article 9 GDPR
- Failed to adequately respond to the complainant’s access request, regulated through Article 15 GDPR.
Grindr recently launched its AI wingman. The feature operates like a chatbot and currently keeps track of favorite matches and suggests date locations. In the coming years, Grindr plans for the chatbot to send messages to other AI agents on behalf of users, and make restaurant reservations—all without human intervention. This might sound great: online dating without the time investment? A win for some! But privacy concerns remain.
The chatbot is being built in collaboration with a third-party company called Ex-Human, which raises concerns about data sharing. Grindr has communicated that its users’ personal data will remain on its own infrastructure, which Ex-Human does not have access to, and that users will be “notified” when AI tools are available on the app. The company also said that it will ask users for permission to use their chat history for AI training. But AI data poses privacy risks that do not seem fully accounted for, particularly in places where it’s not safe to be outwardly gay.
In building this ‘gay chatbot,’ Grindr’s CEO said one of its biggest limitations was preserving user privacy. It’s good that they are cognizant of these harms, particularly because the company has a terrible track record of protecting user privacy, and it was recently sued for allegedly revealing the HIV status of users. Further, direct messages on Grindr are stored on the company’s servers, where you have to trust they will be secured, respected, and not used to train AI models without your consent. Given Grindr’s poor record of respecting user consent and autonomy on the platform, users need stronger protections and guardrails for their personal data and privacy than are currently provided—especially for AI tools built by third parties.
AI Picture Selection
In the past year, Tinder and Bumble have both introduced AI tools to help users choose better pictures for their profiles. Tinder’s AI-powered feature, Photo Selector, requires users to upload a selfie, after which its facial recognition technology identifies the person in their camera-roll images. The Photo Selector then chooses a “curated selection of photos” directly from users’ devices based on Tinder’s “learnings” about good profile images. Users are not informed about the parameters behind choosing photos, nor is there a separate privacy policy to guard against privacy issues arising from the potential collection of biometric data and the collection, storage, and sale of camera-roll images.
The Way Forward: Opt-In Consent for AI Tools and Consumer Privacy Legislation
Putting users in control of their own data is fundamental to protecting individual and collective privacy. We all deserve the right to control how our data is used and by whom. And when it comes to data like profile photos and private messages, all companies should require opt-in consent before processing that data for AI. Finding love should not involve such a privacy-impinging tradeoff.
At EFF, we’ve also long advocated for the introduction of comprehensive consumer privacy legislation to limit the collection of our personal data at its source and prevent retained data from being sold or given away, breached by hackers, disclosed to law enforcement, or used to manipulate users’ choices through online behavioral advertising. This would help protect users on dating apps, since reducing the amount of data collected prevents its subsequent use for things like building and training AI tools.
The privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities, but these steps are essential to protecting users on dating apps. We urge companies to put people over profit and protect privacy on their platforms.
When Your Power Meter Becomes a Tool of Mass Surveillance
Simply using extra electricity to power some Christmas lights or a big fish tank shouldn’t bring the police to your door. In fact, in California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise “smart” meter data in most cases.
Despite this, Sacramento’s power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. The Electronic Frontier Foundation (EFF) is seeking to end Sacramento’s dragnet surveillance of energy customers and has asked for a court order to stop this practice for good.
For a decade, the Sacramento Municipal Utility District (SMUD) has been searching through all of its customers’ energy data and passing more than 33,000 tips about supposedly “high”-usage households to police. Ostensibly looking for homes growing illegal amounts of cannabis, SMUD analysts have admitted that such “high” power usage could come from houses running air conditioning or heat pumps, or simply from being large. And the threshold of so-called “suspicion” has steadily dropped, from 7,000 kWh per month in 2014 to just 2,800 kWh a month in 2023. One SMUD analyst admitted that they themselves “used 3500 [kWh] last month.”
This scheme has targeted Asian customers. SMUD analysts deemed one home suspicious because it was “4k [kWh], Asian,” and another suspicious because “multiple Asians have reported there.” Sacramento police sent accusatory letters in English and Chinese, but no other language, to residents who used above-average amounts of electricity.
In 2022, EFF and the law firm Vallejo, Antolin, Agarwal, Kanter LLP sued SMUD and the City of Sacramento, representing the Asian American Liberation Network and two Sacramento County residents. One is an immigrant from Vietnam. Sheriff’s deputies showed up unannounced at his home, falsely accused him of growing cannabis based on an erroneous SMUD tip, demanded entry for a search, and threatened him with arrest when he refused. He has never grown cannabis; rather, he uses more electricity than average due to a spinal injury.
Last week, we filed our main brief explaining how this surveillance program violates the law and why it must be stopped. California’s state constitution bars unreasonable searches. This type of dragnet surveillance — suspicionless searches of entire zip codes’ worth of customer energy data — is inherently unreasonable. Additionally, a state statute generally prohibits public utilities from sharing such data. As we write in our brief, Sacramento’s mass surveillance scheme does not qualify for one of the narrow exceptions to this rule.
Mass surveillance violates the privacy of many individuals, as police without individualized suspicion seek (possibly non-existent) evidence of some kind of offense by some unknown person. As we’ve seen time and time again, innocent people inevitably get caught in the dragnet. For decades, EFF has been exposing and fighting these kinds of dangerous schemes. We remain committed to protecting digital privacy, whether it’s being threatened by national governments – or your local power company.
Related Cases: Asian American Liberation Network v. SMUD, et al.
EFF to Court: The DMCA Didn't Create a New Right of Attribution, You Shouldn't Either
Amid a wave of lawsuits targeting how AI companies use copyrighted works to train large language models that generate new works, a peculiar provision of copyright law is suddenly in the spotlight: Section 1202 of the Digital Millennium Copyright Act (DMCA). Section 1202 restricts intentionally removing or changing copyright management information (CMI), such as a signature on a painting or attached to a photograph. Passed in 1998, the rule was supposed to help rightsholders identify potentially infringing uses of their works and encourage licensing.
OpenAI and Microsoft used code from GitHub as part of the training data for their LLMs, along with billions of other works. A group of anonymous GitHub contributors sued, arguing that those LLMs generated new snippets of code that were substantially similar to theirs—but with the CMI stripped. Notably, they did not claim that the new code was copyright infringement—they are relying solely on Section 1202 of the DMCA. Their problem? The generated code is different from their original work, and courts across the U.S. have adopted an “identicality rule,” on the theory that Section 1202 is supposed to apply only when CMI is removed from existing works, not when it’s simply missing from a new one.
It may sound like an obscure legal question, but the outcome of this battle—currently before the Ninth Circuit Court of Appeals—could have far-reaching implications beyond generative AI technologies. If the rightsholders were correct, Section 1202 would effectively create a freestanding right of attribution, exposing even non-infringing uses, such as fair use, to liability whenever those new uses simply omit the CMI. While many fair users might ultimately escape liability under other limitations built into Section 1202, the looming threat of litigation, backed by the risk of high and unpredictable statutory penalties, would be enough to pressure many defendants to settle. Indeed, an entire legal industry of “copyright trolls” has emerged to exploit this dynamic, with no corresponding benefit to creativity or innovation.
Fortunately, as we explain in a brief filed today, the text of Section 1202 doesn’t support such an expansive interpretation. The provision repeatedly refers to “works” and “copies of works”—not “substantially similar” excerpts or new adaptations—and its focus on “removal or alteration” clearly contemplates actions taken with respect to existing works, not new ones. Congress could have chosen otherwise and written the law differently. Wisely, it did not, thereby ensuring that rightsholders couldn’t leverage the omission of CMI to punish or unfairly threaten otherwise lawful re-uses of a work.
Given the proliferation of copyrighted works in virtually every facet of daily life, the last thing any court should do is give rightsholders a new, freestanding weapon against fair uses. As the Supreme Court once observed, copyright is a “tax on readers for the purpose of giving a bounty to writers.” That tax—including the expense of litigation—can be an important way to encourage new creativity, but it should not be levied unless the Copyright Act clearly requires it.
California A.B. 412 Stalls Out—A Win for Innovation and Fair Use
A.B. 412, the flawed California bill that threatened small developers in the name of AI “transparency,” has been delayed and turned into a two-year bill. That means it won’t move forward in 2025—a significant victory for innovation, freedom to code, and the open web.
EFF opposed this bill from the start. A.B. 412 tried to regulate generative AI, not by looking at the public interest, but by mandating training data “reading lists” designed to pave the way for new copyright lawsuits, many of which are filed by large content companies.
Transparency in AI development is a laudable goal. But A.B. 412 failed to offer a fair or effective path to get there. Instead, it handed companies large and small the near-impossible task of determining which content was copyrighted and which wasn’t—with severe penalties for anyone who fell short. Only the largest AI companies could have shouldered that burden, freezing out smaller and non-commercial developers who might want to tweak or fine-tune AI systems for the public good.
The most interesting work in AI won’t necessarily come from the biggest companies. It will come from small teams fine-tuning for accessibility and privacy, and building tools that identify AI harms. And some of the most valuable work will be done using source code under permissive licenses.
A.B. 412 ignored those facts, and would have punished some of the most worthwhile projects.
The Bill Blew Off Fair Use Rights
The question of whether—and how much—AI training qualifies as fair use is being actively litigated right now in federal courts. And so far, courts have found much of this work to be fair use. In a recent landmark AI case, Bartz v. Anthropic, for example, a federal judge found that AI training work is “transformative—spectacularly so.” He compared it to how search engines copy images and text in order to provide useful search results to users.
Copyright is federally governed. When states try to rewrite the rules, they create confusion—and more litigation that doesn’t help anyone.
If lawmakers want to revisit AI transparency, they need to do so without giving rights-holders a tool to weaponize copyright claims. That means rejecting A.B. 412’s approach—and crafting laws that protect speech, competition, and the public’s interest in a robust, open, and fair AI ecosystem.
Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance
Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him is the surveillance-first-privacy-last approach that made Ring one of the most maligned tech devices. Not only is the company reintroducing new versions of old features which would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices.
This is a bad, bad step for Ring and the broader public.
Ring is rolling back many of the reforms it has made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties. After all, police have used Ring footage to spy on protestors, and have obtained footage without a warrant or the user’s consent. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or to track down people for immigration enforcement.
Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device.
It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted.
Not content to stop at new bad features, the company is also rolling back some of the necessary reforms Ring has made: it is partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and allowing users to consent to police livestreaming directly from their devices.
After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. They introduced end-to-end encryption, they ended their formal partnerships with police which were an ethical minefield, and they ended their tool that facilitated police requests for footage directly to customers. Now they are pivoting back to being a tool of mass surveillance.
Why now? It is hard to believe that “safety” is the real reason the company is betraying the trust of its millions of customers, when violent crime in the United States is near historic lows. It’s probably not about their customers—the FTC had to compel Ring to take its users’ privacy seriously.
No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.
Shame on Ring.
We Support Wikimedia Foundation’s Challenge to UK’s Online Safety Act
The Electronic Frontier Foundation and ARTICLE 19 strongly support the Wikimedia Foundation’s legal challenge to the categorization regulations of the United Kingdom’s Online Safety Act.
The Foundation – the non-profit that operates Wikipedia and other Wikimedia projects – announced its legal challenge earlier this year, arguing that the regulations endanger Wikipedia and the global community of volunteer contributors who create the information on the site. The High Court of Justice in London will hear the challenge on July 22 and 23.
EFF and ARTICLE 19 agree with the Foundation’s argument that, if enforced, the Category 1 duties – the OSA’s most stringent obligations – would undermine the privacy and safety of Wikipedia’s volunteer contributors, expose the site to manipulation, and divert essential resources from protecting people and improving the site. For example, because the law requires Category 1 services to allow users to block all unverified users from editing any content they post, it effectively requires the Foundation to verify the identity of many Wikipedia contributors. That compelled verification undermines the privacy that keeps the site’s volunteers safe.
Wikipedia is the world’s most trusted and widely used encyclopedia, with users across the world accessing its wealth of information and participating in free information exchange through the site. The OSA must not be allowed to diminish it and jeopardize the volunteers on which it depends.
Beyond the issues raised in Wikimedia’s lawsuit, EFF and ARTICLE 19 emphasize that the Online Safety Act poses a serious threat to freedom of expression and privacy online, both in the U.K. and globally. Several key provisions of the law become operational July 25, and some companies are already rolling out age-verification mechanisms that undermine the free expression and privacy rights of both adults and minors.
Radio Hobbyists, Rejoice! Good News for LoRa & Mesh
A set of radio devices and technologies are opening the doorway to new and revolutionary forms of communication. These have the potential to break down the over-reliance on traditional network hierarchies, and present collaborative alternatives where resistance to censorship, control and surveillance are baked into the network topography itself. Here, we look at a few of these technologies and what they might mean for the future of networked communications.
The idea of what is broadly referred to as mesh networking isn’t new: the resilience and scalability of mesh technology has seen it adopted in router and IoT protocols for decades. What’s new is cheap devices that can be used without a radio license to communicate over (relatively) LOng RAnge distances—hence the moniker LoRa.
Although it operates on different frequencies in different countries, LoRa works in essentially the same way everywhere. It uses Chirp Spread Spectrum to broadcast digital communications across a physical landscape, with a range of several kilometers in the right environmental conditions. When other capable devices pick up a signal, they can pass it along to further nodes until the message reaches its destination—all without relying on a single centralized host.
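To make the modulation concrete, here is a minimal sketch of how a Chirp Spread Spectrum symbol can be generated: each symbol is an up-chirp sweeping across the bandwidth, and the data value cyclically shifts where the sweep starts. The parameter values and function name are illustrative assumptions, not taken from any LoRa specification.

```python
import math

def css_symbol(symbol, sf=7, bw=125_000.0, fs=250_000.0):
    """Return real-valued samples of one simplified CSS symbol.

    sf: spreading factor (2**sf chips per symbol)
    bw: bandwidth in Hz; fs: sample rate in Hz.
    A simplified illustration, not a spec-accurate LoRa modulator.
    """
    n_chips = 2 ** sf            # chips per symbol
    t_sym = n_chips / bw         # symbol duration in seconds
    n_samples = int(fs * t_sym)
    out, phase = [], 0.0
    for i in range(n_samples):
        frac = i / n_samples
        # Instantaneous frequency sweeps from -bw/2 to +bw/2, cyclically
        # shifted by the symbol value and wrapped back into the band.
        f = -bw / 2 + ((symbol / n_chips + frac) % 1.0) * bw
        phase += 2 * math.pi * f / fs   # integrate frequency into phase
        out.append(math.cos(phase))
    return out

samples = css_symbol(symbol=42)
```

Because the data only shifts the chirp rather than changing its shape, a receiver can recover symbols by multiplying against a reference down-chirp—part of what lets LoRa decode signals well below the noise floor.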
These communications have very low bit-rates—often just a few kbps (kilobits per second) at a distance—and use very little power. You won’t be browsing the web or streaming video over LoRa, but it is useful for sending messages in a wide range of situations where traditional infrastructure is lacking or intermittent, and communication with others over dispersed or changing physical terrain is essential. For instance, a growing body of research shows how Search and Rescue (SAR) teams can greatly benefit from LoRa, specifically when coupled with GPS sensors, and especially when complemented by line-of-sight LoRa repeaters.
Meshtastic
By far the most popular of these indie LoRa communication systems is Meshtastic. For hobbyists just getting started in the world of LoRa mesh communications, it is the easiest way to get up, running, and texting with others in your area who also happen to have a Meshtastic-enabled device. It also facilitates direct communication with other nodes using end-to-end encryption. And by default, a Meshtastic device will repeat messages originating from 3 or fewer nodes (or “hops”) away. This means messages tend to propagate farther, with the power of the mesh collaborating to make delivery possible. As a single-application use of LoRa, it is an exciting experiment to take part in.
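The hop-limited rebroadcast behavior described above can be sketched as a small flood-routing simulation: every node repeats a packet it hasn’t seen before, spending one hop of the packet’s budget each time. The node names and topology below are made up for illustration; real Meshtastic firmware adds timing, deduplication, and radio details omitted here.

```python
from collections import deque

def flood(topology, origin, hop_limit=3):
    """Return the set of nodes a broadcast from `origin` reaches.

    topology: dict mapping each node to the neighbors within radio range.
    hop_limit: how many times the packet may be rebroadcast (Meshtastic
    defaults to 3 hops).
    """
    reached = {origin}
    queue = deque([(origin, hop_limit)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue  # hop budget exhausted; do not rebroadcast
        for neighbor in topology.get(node, ()):
            if neighbor not in reached:  # each node repeats a packet once
                reached.add(neighbor)
                queue.append((neighbor, hops - 1))
    return reached

# A chain of five radios, each only in range of its immediate neighbors:
chain = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D"],
}
print(sorted(flood(chain, "A")))               # 3 hops: reaches B, C, D
print(sorted(flood(chain, "A", hop_limit=4)))  # one more hop also reaches E
```

The example shows why a mesh scales with density: in the chain, A’s packet dies after three hops, but adding any node within range of both ends of a gap extends everyone’s reach without touching the original sender.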
Reticulum
While Reticulum is often put into the same category as Meshtastic, and although both enable communication over LoRa, the comparison breaks down quickly after that. Reticulum is not a single application, but an entire network stack that can be arbitrarily configured to connect through existing TCP/IP, the anonymizing I2P network, directly through a local WiFi connection, or through LoRa radios. The Reticulum network’s LXMF transfer protocol allows arbitrary applications to be built on top of it: messaging, voice calls, file transfer, and lightweight, text-only browsing, to name only a few that have already been developed—the possibilities are endless.
Although there are a number of community hubs run by Reticulum enthusiasts, you don’t have to join any of them: you can build your own Reticulum network with the devices and transports you and your friends already have, locally over LoRa or remotely over traditional infrastructure, and bridge them as you please. Nodes are universally addressed and sovereign, meaning they are free to connect anywhere without losing the unique address that defines them. All communications between nodes are encrypted end-to-end using a strong choice of cryptographic primitives. And although Reticulum has been actively developed for over a decade, it only recently reached the noteworthy milestone of a 1.0 release. It’s a very exciting ecosystem to be a part of, and we can’t wait to see the community develop it even further. A number of clients are available to start exploring.
Resilient Infrastructure
On a more somber note, let’s face it: we live in an uncertain world. With the frequency of environmental disasters, political polarization, and infrastructure attacks increasing, the stability of the networks we have traditionally relied upon is far from assured.
Yet even with the world as it is, developers are creating new communications networks that have the potential to help in unexpected situations we might find ourselves in. Not only are these technologies built to be useful and resilient, they also empower individuals by circumventing censorship and platform control—giving people a way to support each other through shared resources.
In that way, this work can be seen as a technological inheritor of the hopefulness and experimentation—and yes, fun!—that was so present in the early internet. These technologies offer a promising path forward for building our way out of tech dystopia.
EFF and 80 Organizations Call on EU Policymakers to Preserve Net Neutrality in the Digital Networks Act
As the European Commission prepares an upcoming proposal for a Digital Networks Act (DNA), a growing network of groups is raising serious concerns about the resurgence of “fair share” proposals from major telecom operators. The original idea was to impose network usage fees on certain companies, forcing them to pay ISPs for delivering traffic. We have said it before and we’ll say it again: there is nothing fair about this “fair share” proposal, which could undermine net neutrality and hurt consumers by changing how content is delivered online. Now the EU Commission is toying with an alternative idea: the introduction of a dispute resolution mechanism to foster commercial agreements between tech firms and telecom operators.
EFF recently joined a broad group of more than 80 signatories, from civil society organizations to audio-visual companies, in a joint statement aimed at preserving net neutrality in the DNA.
In the letter, we argue that the push to introduce a mandatory dispute resolution mechanism into EU law would pave the way for content and application providers (CAPs) to pay network fees for delivering traffic. These ideas, recycled from 2022, are being marketed as necessary for funding infrastructure, but the real cost would fall on the open internet, competition, and users themselves.
This isn't just about arcane telecom policy—it’s a battle over the future of the internet in Europe. If the DNA includes mechanisms that force payments from CAPs, we risk higher subscription costs, fewer services, and less innovation, particularly for European startups, creatives, and SMEs. Worse still, there’s no evidence of market failure to justify such regulatory intervention. Regulators like BEREC have consistently found that the interconnection market is functioning smoothly. What’s being proposed is nothing short of a power grab by legacy telecom operators looking to resurrect outdated, monopolistic business models. Europe has long championed an open, accessible internet—now’s the time to defend it.