EFF: Updates
The Internet Still Works: Wikipedia Defends Its Editors
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.
A decade ago, the Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests to alter or remove content over a two-year period, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia.
Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible.
Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened?
Jacob Rogers: When you're writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It’s good to be careful, but I think if the liability protections were weaker, people could get to be too careful—so careful they couldn’t write important public information.
Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you’re in a particular country, and it’s a flash point.
Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That’s part of why protecting editors’ pseudonymity and their ability to work on these topics is so important.
If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them?
If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company.
Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making.
What does a typical complaint look like, and how does the complaint process look?
In some cases, someone is accused of a serious crime and there’s a debate about the sources. People are accused of certain types of wrongdoing, or scams. There are debates about people’s politics, where someone is accused of being “far-right” or “far-left.”
The first step is community dispute resolution. At the top of every article on Wikipedia there’s a button that translates to “talk.” If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first.
If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions.
And these community dispute processes we’ve discussed are run by volunteers, with no Wikimedia Foundation employees involved? Where does Section 230 come into play?
That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would have the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem, if enough people filed litigation.
Section 230 protects us from that, and allows for quick dismissal of invalid claims.
When we're in the United States, then that's really the end of the matter. There’s no way to bypass the community with a lawsuit.
How does dealing with those complaints work in the U.S.? And how is it different abroad?
In the US, we have Section 230. We’re able to say, go through the community process, and try to be persuasive. We’ll make changes, if you make a good persuasive argument! But the Foundation isn’t going to come in and change it because you made a legal complaint.
But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court.
In one German case, the court essentially said, “Wikipedians didn’t do good enough journalism.” The court said the article’s sources weren’t strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or top newspapers in the country, not a “niche” publication. We disagreed with that.
What’s the cost of having to go to court regularly to defend user speech?
Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time.
We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. To do litigation in the EU costs something on the order of $30,000 for one hearing, to a few hundred thousand dollars for a drawn-out case.
I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia?
What does the community editing process get right that courts can get wrong?
Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what's going into writing that.
It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that's actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality.
Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content?
One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video.
There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic.
If you do take down something to comply with the UK law, it’s global. And you’ll be taking away the rights of someone in the US or Australia or Canada to see that content.
This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.
On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech
For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.
Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.
But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.
The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.
This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. Especially when the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.
Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.
It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.
Section 230 Alternatives Would Protect Less Speech
With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?
The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.
Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.
Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there’s no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech based on frivolous copyright infringement claims (https://www.eff.org/takedowns). Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.
The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.
By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.
In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.
EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.
RIP Dave Farber, EFF Board Member and Friend
We are sad to report the passing of longtime EFF Board member Dave Farber. Dave was 91 and had lived in Tokyo since age 83, where he was the Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). Known as the Grandfather of the Internet, Dave made countless contributions to the internet, both directly and through his support for generations of students.
Dave was the longest-serving EFF Board member, having joined in the early 1990s, before the creation of the World Wide Web or the widespread adoption of the internet. Throughout the growth of the internet and the corresponding growth of EFF, Dave remained a consistent, thoughtful, and steady presence on our Board. Dave always gave us credibility as well as ballast. He seemed to know and be respected by everyone who had helped build the internet, having worked with or mentored too many of them to count. He also had an encyclopedic knowledge of the internet's technical history.
From the beginning, Dave saw both the promise and the danger to human rights that would come with the spread of the internet around the world. He committed to helping make sure that the rights and liberties of users and developers, especially the open source community, were protected. He never wavered in that commitment. Ever the teacher, Dave was also a clear explainer of internet technologies and basically unflappable.
Dave also managed the Interesting People email list, which provided news and connection for so many internet pioneers and served as a model for how people from disparate corners of the world could engage in a rolling conversation about all things digital. His role as the Chief Technologist at the U.S. Federal Communications Commission from 2000 to 2001 gave him a strong perspective on the ways that government could help or hinder civil liberties in the digital world.
We will miss his calm, thoughtful voice, both inside EFF and out in the world. May his memory be a blessing.
Op-ed: Weakening Section 230 Would Chill Online Speech
(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)
Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.
The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.
The merits of immunity, both for internet users who rely on intermediaries (from ISPs to email providers to social media platforms) and for internet users who are themselves intermediaries, are readily apparent when compared with the alternatives.
One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.
Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.
Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.
All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it’s not worth assessing the risk of liability or defending the user’s speech. No intermediary can be expected to champion someone else’s free speech at its own considerable expense.
Nor is the United States the only government to eschew “upload filtering,” the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.
The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.
Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.
This law isn’t a shield for “big tech.” Its ultimate beneficiaries are all of us who want to post things online without having to code the platforms ourselves, and who want to read and watch content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it.
For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.
When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.
Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.
Yes to the “ICE Out of Our Faces Act”
Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights and civil liberties. For example, immigration agents are routinely scanning faces of people they suspect of unlawful presence in the country – 100,000 times, according to the Wall Street Journal. The technology has already misidentified at least one person, according to 404 Media.
Face recognition technology is so dangerous that government should not use it at all—least of all these out-of-control immigration agencies.
To combat these abuses, EFF is proud to support the “ICE Out of Our Faces Act.” This new federal bill would ban ICE and CBP agents, and some local police working with them, from acquiring or using biometric surveillance systems, including face recognition technology, or information derived from such systems by another entity. This bill would be enforceable, among other ways, by a strong private right of action.
The bill’s lead author is Senator Ed Markey. We thank him for his longstanding leadership on this issue, including introducing similar legislation that would ban all federal law enforcement agencies, and some federally funded state agencies, from using biometric surveillance systems (a bill that EFF also supported). The new “ICE Out of Our Faces Act” is also sponsored by Senator Merkley, Senator Wyden, and Representative Jayapal.
As EFF explains in the new bill’s announcement:
It’s past time for the federal government to end its use of this abusive surveillance technology. A great place to start is its use for immigration enforcement, given ICE and CBP’s utter disdain for the law. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations or if the technology was 100% accurate. We thank the authors of this bill for their leadership in taking steps to end this use of this dangerous and invasive technology.
You can read the bill here, and the bill’s announcement here.
Protecting Our Right to Sue Federal Agents Who Violate the Constitution
Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.
To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.
Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.
The Problem
In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.
However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.
So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents of the Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”
Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.
In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.
In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling. In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”
Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.
The Solution
At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.
In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.
State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.”
This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.
We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.
We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.
Smart AI Policy Means Examining Its Real Harms and Benefits
The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or Hal 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.
Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.
We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.
Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.
EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.
So let’s look at the real-world landscape.
AI’s Real and Potential Harms
Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.
There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on. If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.
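To make that mechanism concrete, here is a minimal, hypothetical sketch of our own (not drawn from any real system), using scikit-learn and invented data. In the toy dataset, past decisions depended partly on group membership rather than on the underlying risk, and a model trained on those decisions learns to weight group membership heavily even though risk is distributed identically across groups.

```python
# Toy illustration only: all data, features, and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A stand-in for a protected group, and a "risk" factor that is identical
# across groups by construction.
group = rng.integers(0, 2, n)
risk = rng.normal(0, 1, n)

# Biased historical labels: past decisions partly hinged on group membership.
past_decision = (risk + 1.5 * group + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, risk]), past_decision)

# The learned weight on `group` is large and positive: the model has absorbed
# the historical bias and will reproduce it when scoring new cases.
print(dict(zip(["group", "risk"], model.coef_[0].round(2))))
```

The specific numbers are made up; the point is that the model has no way to tell a biased historical pattern from a legitimate one, so it simply reproduces what it was shown.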
These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, with the resulting bias affecting AI tools trained on the existing and biased image data.
These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.
Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.
We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.
Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact-check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead, they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers.
Other considerations that may weigh against particular AI uses include environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.
Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.
AI’s Real and Potential Benefits
However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.
Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.
Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but the field has advanced further in the past few years than it had in a long time.
AI Advancements in Scientific and Medical Research
AI tools can also improve weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.
For example:
- The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
- Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).
Researchers are using AI to help develop new medical treatments:
- Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
- Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
- Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
- Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.
AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:
- AI voice generators are giving people their voices back after they have lost the ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
- Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more accessible format than traditional web search tools and the many websites that are difficult to navigate for users who rely on a screen reader. Other tools can help blind and low-vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human might provide, they can still be useful in situations when users can’t or don’t want to ask another person to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”
When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:
- The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance: when the public uses the power to rapidly analyze large amounts of data to scrutinize the state, there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
- An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.
It is not a coincidence that the best examples of positive uses of AI come in places where experts are involved, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it, and it has been hard-won knowledge that attention to ethics is a vital part of work like this.
Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.
Context Matters
It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.
EFF to Close Friday in Solidarity with National Shutdown
The Electronic Frontier Foundation stands with the people of Minneapolis and with all of the communities impacted by the ongoing campaign of ICE and CBP violence. EFF will be closed Friday, Jan. 30 as part of the national shutdown in opposition to ICE and CBP and the brutality and terror they and other federal agencies continue to inflict on immigrant communities and any who stand with them.
We do not make this decision lightly, but we will not remain silent.
- See our statement on ICE/CBP violence: https://www.eff.org/deeplinks/2026/01/eff-statement-lawless-actions-ice-and-cbp
- See our Surveillance Self-Defense tips for protestors: https://ssd.eff.org/module/attending-protest
- See our explanation of the right to record police activity: https://www.eff.org/deeplinks/2025/02/yes-you-have-right-film-ice
Introducing Encrypt It Already
Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds a little familiar, it’s because this is a spiritual successor to our 2019 campaign, Fix It Already, in which we pushed companies to fix longstanding issues.
End-to-end encryption is the best way we have to protect our conversations and data. It ensures the company that provides a service cannot access the data or messages you store on it. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, which are only accessible to you and your recipients. When it comes to stored data, like what’s protected by Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider cannot access the data.
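For the curious, here is a minimal sketch of the core idea using the PyNaCl library (Python bindings to libsodium). The names and messages are invented, and real messengers layer much more on top of this, such as key verification, forward secrecy, group messaging, and metadata protections, but the principle is the same: only the endpoints hold the keys, so a server relaying the ciphertext cannot read it.

```python
# A minimal sketch of the end-to-end principle with PyNaCl (libsodium bindings).
# Only the endpoints hold private keys; a relay server only ever sees ciphertext.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # stays on Alice's device
bob_key = PrivateKey.generate()     # stays on Bob's device

# Alice encrypts to Bob's public key; the ciphertext is all a service would relay or store.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at the usual place")

# Only Bob, holding his private key, can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)
```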
We’ve divided this up into three categories, each with three different demands:
- Keep Your Promises: Features that the company has publicly stated they’re working on, but which haven’t launched yet.
  - Facebook should use end-to-end encryption for group messages
  - Apple and Google should deliver on their promise of interoperable end-to-end encryption for RCS
  - Bluesky should launch its promised end-to-end encryption for DMs
- Defaults Matter: Features that are already available in a service or app, but aren’t enabled by default.
  - Telegram should default to end-to-end encryption for DMs
  - WhatsApp should use end-to-end encryption for backups by default
  - Ring should enable end-to-end encryption for its cameras by default
- Protect Our Data: New features that companies should launch, often because their competition is doing it already.
  - Google should launch end-to-end encryption for Google Authenticator backups
  - Google should offer end-to-end encryption for Android backup data
  - Apple and Google should offer per-app AI permissions to block AI access to secure chat apps
The “what” is only half the problem. The “how” is just as important.
What Companies Should Do When They Launch End-to-End Encryption Features
There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair the security of the platform with the transparency that makes it possible for users to trust that it protects data the way the company claims it does. When these encryption features launch, companies should consider doing so with:
- A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
- Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
- Data minimization principles whenever feasible, storing as little metadata as possible.
Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, what features may change, and what steps they need to take to set it up so they’re comfortable with how their data is protected.
What You Can Do
When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.
For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like.
In some cases, you can also reach out to a company directly with feature requests, which all of the above companies, except for Google and WhatsApp, offer in some form. We recommend filing these through whichever service you use, for any of the above features you’d like to see.
As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to Telegram’s bugs and suggestions platform and upvote this post, and to Ring’s feature request board and boost this post.
End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead it’s a rare feature limited to a small set of services, like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!
Google Settlement May Bring New Privacy Controls for Real-Time Bidding
EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over its RTB system is a step in the right direction toward giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.
What Is Real-Time Bidding?
RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works (a simplified sketch in code follows the list):
- The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
- This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
- The highest bidder gets to display an ad for you, but advertisers (and the adtech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.
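Here is a deliberately simplified sketch of that flow in Python. The field names, values, and bidders are invented for illustration; real bid requests follow industry specifications such as OpenRTB and carry considerably more data.

```python
# Illustrative only: a toy bid request and auction. Every participant receives
# the request, whether or not it wins, which is the core privacy problem.
bid_request = {
    "auction_id": "a1b2c3",
    "site": {"page": "https://example-news-site.com/article"},
    "device": {"ip": "203.0.113.7", "ua": "Mozilla/5.0 (...)",
               "geo": {"lat": 37.77, "lon": -122.41}},
    "user": {"advertising_id": "8f14e45f-ceea-467f", "interests": ["fitness", "parenting"]},
}

def run_auction(request, bidders):
    """Broadcast the request to all bidders and pick the highest bid."""
    bids = {}
    for bidder in bidders:
        bidder["seen_requests"].append(request)   # retained by every participant
        bids[bidder["name"]] = bidder["bid"](request)
    return max(bids, key=bids.get)

bidders = [
    {"name": "advertiser_a", "seen_requests": [], "bid": lambda r: 0.42},
    {"name": "broker_posing_as_bidder", "seen_requests": [], "bid": lambda r: 0.0},
]
print("winner:", run_auction(bid_request, bidders))
print("data kept by a losing bidder:", bidders[1]["seen_requests"][0]["user"])
```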
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.
Data brokers have sold bidstream data for a range of invasive purposes, including tracking union organizers and political protesters, outing gay priests, and conducting warrantless government surveillance. Several federal agencies, including ICE, CBP and the FBI, have purchased location data from a data broker whose sources likely include RTB. ICE recently requested information on “Ad Tech” tools it could use in investigations, further demonstrating RTB’s potential to facilitate surveillance. RTB also poses national security risks, as researchers have warned that it could allow foreign states to obtain compromising personal data about American defense personnel and political leaders.
The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
Proposed Settlement with Google Is a Step in the Right Direction
As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.
Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email.
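To illustrate the idea (this is not Google's actual implementation), here is a sketch of what stripping identifying fields from a bid request might look like; the field names match the hypothetical request in the earlier sketch.

```python
# A hypothetical illustration of scrubbing identifiers from a bid request before
# broadcast, the kind of behavior the proposed "RTB Control" setting describes.
IDENTIFYING_FIELDS = {
    "user": ["advertising_id", "pseudonymous_id"],
    "device": ["ip", "ua"],
}

def scrub_identifiers(bid_request: dict) -> dict:
    scrubbed = {k: (v.copy() if isinstance(v, dict) else v) for k, v in bid_request.items()}
    for section, fields in IDENTIFYING_FIELDS.items():
        for field in fields:
            scrubbed.get(section, {}).pop(field, None)
    return scrubbed

request = {
    "user": {"advertising_id": "8f14e45f-ceea-467f", "interests": ["fitness"]},
    "device": {"ip": "203.0.113.7", "ua": "Mozilla/5.0 (...)"},
    "site": {"page": "https://example-news-site.com/article"},
}
# Contextual data (the page, broad interests) survives; identifiers do not.
print(scrub_identifiers(request))
```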
While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default.
The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.
The Real Solution: Ban Online Behavioral Advertising
Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of a strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, since such targeting creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.
✍️ The Bill to Hand Parenting to Big Tech | EFFector 38.2
Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. We dive into the latest attempt to control how kids access the internet, and more, in our latest EFFector newsletter.
Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks what to do when you hit an age gate online, explains why rent-only copyright culture makes us all worse off, and covers the dangers of law enforcement purchasing straight-up military drones.
Prefer to listen in? In our audio companion, EFF Senior Policy Analyst Joe Mullin explains what lawmakers should do if they really want to help families. Find the conversation on YouTube or the Internet Archive.
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from data breaches and unlawful surveillance when you support EFF today!
