EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

The Trump Administration’s Order on AI Is Deeply Misguided

Thu, 11/20/2025 - 3:10pm

Widespread news reports indicate that President Donald Trump’s administration has prepared an executive order to punish states that have passed laws attempting to address harms from artificial intelligence (AI) systems. According to a draft published by news outlets, this order would direct federal agencies to bring legal challenges to state AI regulations that the administration deems “onerous,” to restrict funding to states that have such laws, and to push for new federal legislation that overrides state AI laws.

This approach is deeply misguided.

As we’ve said before, the fact that states are regulating AI is often a good thing. Left unchecked, company and government use of automated decision-making systems in areas such as housing, health care, law enforcement, and employment has already caused discriminatory outcomes based on gender, race, and other protected statuses.

While state AI laws have not been perfect, they are genuine attempts to address harms that people across the country face right now from certain uses of AI systems. Given the tone of the Trump Administration’s draft order, it seems clear that any preemptive federal legislation backed by this administration will do nothing to stop automated decision-making systems from producing discriminatory decisions.

For example, a copy of the draft order published by Politico specifically names the Colorado AI Act as an example of supposedly “onerous” legislation. As we said in our analysis of Colorado’s law, it is a limited but crucial step—one that needs to be strengthened to protect people more meaningfully from AI harms. It is possible to guard against harms and support innovation and expression. Ignoring the harms that these systems can cause when used in discriminatory ways is not the way to do that.

Again: stopping states from acting on AI will stop progress. Proposals such as the executive order, or efforts to put a broad moratorium on state AI laws into the National Defense Authorization Act (NDAA), will hurt us all. Companies that produce AI and automated decision-making software have spent millions in state capitals and in Congress to slow or roll back legal protections regulating artificial intelligence. If reports about the Trump administration’s executive order are true, those efforts are about to get a supercharged ally in the federal government.

And all of us will pay the price.

EFF Demands Answers About ICE-Spotting App Takedowns

Thu, 11/20/2025 - 11:30am
Potential Government Coercion Raises First Amendment Concerns

SAN FRANCISCO – The Electronic Frontier Foundation (EFF) sued the departments of Justice (DOJ) and Homeland Security (DHS) today to uncover information about the federal government demanding that tech companies remove apps that document immigration enforcement activities in communities throughout the country. 

Tech platforms took down several such apps (including ICEBlock, Red Dot, and DeICER) and webpages (including ICE Sighting-Chicagoland) following communications with federal officials this year, raising important questions about government coercion to restrict protected First Amendment activity.

"We're filing this lawsuit to find out just what the government told tech companies," said EFF Staff Attorney F. Mario Trujillo. "Getting these records will be critical to determining whether federal officials crossed the line into unconstitutional coercion and censorship of protected speech."

In October, Apple removed ICEBlock, an app that allows users to report Immigration and Customs Enforcement (ICE) activity in their area, from its App Store. Attorney General Pamela Bondi publicly took credit for the takedown, telling reporters, “We reached out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so.” In the days that followed, Apple removed several similar apps from the App Store. Google and Meta removed similar apps and webpages from platforms they own as well. Bondi vowed to “continue engaging tech companies” on the issue. 

People have a protected First Amendment right to document and share information about law enforcement activities performed in public. If government officials coerce third parties into suppressing protected activity, this can be unconstitutional, as the government cannot do indirectly what it is barred from doing directly.

Last month, EFF submitted Freedom of Information Act (FOIA) requests to the DOJ, DHS and its component agencies ICE and Customs and Border Protection. The requests sought records and communications about agency demands that technology companies remove apps and pages that document immigration enforcement activities. So far, none of the agencies have provided these records. EFF's FOIA lawsuit demands their release.

For the complaint: https://www.eff.org/document/complaint-eff-v-doj-dhs-ice-tracking-apps

For more about the litigation: https://www.eff.org/cases/eff-v-doj-dhs-ice-tracking-apps

Tags: ICE

Contact: F. Mario Trujillo, Staff Attorney, mario@eff.org

The Patent Office Is About To Make Bad Patents Untouchable

Wed, 11/19/2025 - 2:04pm

The U.S. Patent and Trademark Office (USPTO) has proposed new rules that would effectively end the public’s ability to challenge improperly granted patents at their source—the Patent Office itself. If these rules take effect, they will hand patent trolls exactly what they’ve been chasing for years: a way to keep bad patents alive and out of reach. People targeted with troll lawsuits will be left with almost no realistic or affordable way to defend themselves.

We need EFF supporters to file public comments opposing these rules right away. The deadline for public comments is December 2. The USPTO is moving quickly, and staying silent will only help those who profit from abusive patents. 

TAKE ACTION

Tell USPTO: The public has a right to challenge bad patents

We’re asking supporters who care about a fair patent system to file comments using the federal government’s public comment system. Your comments don’t need to be long, or use legal or technical vocabulary. The important thing is that everyday users and creators of technology have the chance to speak up and be counted.

Below is a short, simple comment you can copy and paste. Your comment will carry more weight if you add a personal sentence or two of your own. Please note that comments should be submitted under your real name and will become part of the public record. 

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Why This Rule Change Matters

Inter partes review (IPR) isn’t perfect. It hasn’t eliminated patent trolling, and it’s not available in every case. But it is one of the few practical ways for ordinary developers, small companies, nonprofits, and creators to challenge a bad patent without spending millions of dollars in federal court. That’s why patent trolls hate it—and why the USPTO’s new rules are so dangerous.

IPR isn’t easy or cheap, but compared to years of litigation, it’s a lifeline. When the system works, it removes bogus patents from the table for everyone, not just the target of a single lawsuit. 

IPR petitions are decided by the Patent Trial and Appeal Board (PTAB), a panel of specialized administrative judges inside the USPTO. Congress designed IPR to provide a fresh, expert look at whether a patent should have been granted in the first place—especially when strong prior art surfaces. Unlike full federal trials, PTAB review is faster, more technical, and actually accessible to small companies, developers, and public-interest groups.

Here are three real examples of how IPR protected the public: 

  • The “Podcasting Patent” (Personal Audio)

Personal Audio claimed it had “invented” podcasting and demanded royalties from audio creators using its so-called podcasting patent. EFF crowdsourced prior art, filed an IPR, and ultimately knocked out the patent—benefiting the entire podcasting world.

Under the new rules, this kind of public-interest challenge could easily be blocked based on procedural grounds like timing, before the PTAB even examines the patent. 

  • SportBrain’s “upload your fitness data” patent

SportBrain sued more than 80 companies over a patent that claimed to cover basic gathering of user data and sending it over a network. A panel of PTAB judges canceled every claim.

Under the new rules, this patent could have survived long enough to force dozens more companies to pay up.

  • Shipping & Transit’s “delivery notification” patents

For more than a decade, Shipping & Transit sued companies over extremely broad “delivery notification” patents. After repeated losses at PTAB and in court (including fee awards), the company finally collapsed.

Under the new rules, a troll like this could keep its patents alive and continue carpet-bombing small businesses with lawsuits.

IPR hasn’t ended patent trolling. But when a troll waves a bogus patent at hundreds or thousands of people, IPR is one of the only tools that can actually fix the underlying problem: the patent itself. It dismantles abusive patent monopolies that never should have existed, saving entire industries from predatory litigation. That’s exactly why patent trolls and their allies have fought so hard to shut it down. They’ve failed to dismantle IPR in court or in Congress—and now they’re counting on the USPTO’s own leadership to do it for them.

What the USPTO Plans To Do

First, they want you to give up your defenses in court. Under this proposal, a defendant can’t file an IPR unless they promise to never challenge the patent’s validity in court. 

For someone actually being sued or threatened with patent infringement, that’s simply not a realistic promise to make. The choice would be: use IPR and lose your defenses—or keep your defenses and lose IPR.

Second, the rules allow patents to become “unchallengeable” after one prior fight. That’s right. If a patent survives any earlier validity fight, anywhere, these rules would block everyone else from bringing an IPR, even years later and even if new prior art surfaces. One early decision—even one that’s poorly argued, or didn’t have all the evidence—would close the door on the entire public.

Third, the rules will block IPR entirely if a district court case is projected to move faster than PTAB. 

So if a troll sues you with one of the outrageous patents we’ve seen over the years, like patents on watching an ad, showing picture menus, or clocking in to work, the USPTO won’t even look at it. It’ll be back to the bad old days, where you have exactly one way to beat the troll (who chose the court to sue in)—spend millions on experts and lawyers, then take your chances in front of a federal jury. 

The USPTO claims this is fine because defendants can still challenge patents in district court. That’s misleading. A real district-court validity fight costs millions of dollars and takes years. For most people and small companies, that’s no opportunity at all. 

Only Congress Can Rewrite IPR

IPR was created by Congress in the America Invents Act of 2011 after extensive debate. It was meant to give the public a fast, affordable way to correct the Patent Office’s own mistakes. Only Congress—not agency rulemaking—can rewrite that system.

The USPTO shouldn’t be allowed to quietly undermine IPR with procedural traps that block legitimate challenges.

Bad patents still slip through every year. The Patent Office issues hundreds of thousands of new patents annually. IPR is one of the only tools the public has to push back.

These new rules rely on the absurd presumption that it’s the defendants—the people and companies threatened by questionable patents—who are abusing the system with multiple IPR petitions, and that they should be limited to one bite at the apple. 

That’s utterly upside-down. It’s patent trolls like Shipping & Transit and Personal Audio that have sued, or threatened, entire communities of developers and small businesses.

When people have evidence that an overbroad patent was improperly granted, that evidence should be heard. That’s what Congress intended. These rules twist that intent beyond recognition. 

In 2023, more than a thousand EFF supporters spoke out and stopped an earlier version of this proposal—your comments made the difference then, and they can again. 

Our principle is simple: the public has a right to challenge bad patents. These rules would take that right away. That’s why it’s vital to speak up now. 

TAKE ACTION

Sample comment: 

I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.

Strengthen Colorado’s AI Act

Wed, 11/19/2025 - 12:37pm

Powerful institutions are using automated decision-making against us. Landlords use it to decide who gets a home. Insurance companies use it to decide who gets health care. ICE uses it to decide who must submit to location tracking by electronic monitoring. Bosses use it to decide who gets fired, and to predict who is organizing a union or planning to quit. Bosses even use AI to assess the body language and voice tone of job candidates. And these systems often discriminate based on gender, race, and other protected statuses.

Fortunately, workers, patients, and renters are resisting.

In 2024, Colorado enacted a limited but crucial step forward against automated abuse: the AI Act (S.B. 24-205). We commend the labor, digital rights, and other advocates who have worked to enact and protect it. Colorado recently delayed the Act’s effective date to June 30, 2026.

EFF looks forward to enforcement of the Colorado AI Act, opposes weakening or further delaying it, and supports strengthening it.

What the Colorado AI Act Does

The Colorado AI Act is a good step in the right direction. It regulates “high risk AI systems,” meaning machine-based technologies that are a “substantial factor” in deciding whether a person will have access to education, employment, loans, government services, healthcare, housing, insurance, or legal services. An AI system is a “substantial factor” in those decisions if it assisted in the decision and could alter its outcome. The Act’s protections include transparency, due process, and impact assessments.


Transparency. The Act requires “developers” (who create high-risk AI systems) and “deployers” (who use them) to provide information to the general public and affected individuals about these systems, including their purposes, the types and sources of inputs, and efforts to mitigate known harms. Developers and deployers also must notify people if they are being subjected to these systems. Transparency protections like these can be a baseline in a comprehensive regulatory program that facilitates enforcement of other protections.

Due process. The Act empowers people subjected to high-risk AI systems to exercise some self-help to seek a fair decision about them. A deployer must notify them of the reasons for the decision, the degree the system contributed to the decision, and the types and sources of inputs. The deployer also must provide them an opportunity to correct any incorrect inputs. And the deployer must provide them an opportunity to appeal, including with human review.

Impact assessments. The Act requires a developer, before providing a high-risk AI system to a deployer, to disclose known or reasonably foreseeable discriminatory harms by the system, and the intended use of the AI. In turn, the Act requires a deployer to complete an annual impact assessment for each of its high-risk AI systems, including a review of whether they cause algorithmic discrimination. A deployer also must implement a risk management program that is proportionate to the nature and scope of the AI, the sensitivity of the data it processes, and more. Deployers must regularly review their risk management programs to identify and mitigate any known or reasonably foreseeable risks of algorithmic discrimination. Impact assessment regulations like these can helpfully place a proactive duty on developers and deployers to find and solve problems, as opposed to doing nothing until an individual subjected to a high-risk system comes forward to exercise their rights.

How the Colorado AI Act Should Be Strengthened

The Act is a solid foundation. Still, EFF urges Colorado to strengthen it, especially in its enforcement mechanisms.

Private right of action. The Colorado AI Act grants exclusive enforcement to the state attorney general. But no regulatory agency will ever have enough resources to investigate and enforce all violations of a law, and many government agencies get “captured” by the industries they are supposed to regulate. So Colorado should amend its Act to empower ordinary people to sue the companies that violate their legal protections from high-risk AI systems. This is often called a “private right of action,” and it is the best way to ensure robust enforcement. For example, the people of Illinois and Texas on paper have similar rights to biometric privacy, but in practice the people of Illinois have far more enjoyment of this right because they can sue violators.

Civil rights enforcement. One of the biggest problems with high-risk AI systems is that they repeatedly have an unfair disparate impact against vulnerable groups, and so one of the biggest solutions will be vigorous enforcement of civil rights laws. Unfortunately, the Colorado AI Act contains a confusing “rebuttable presumption” – that is, an evidentiary thumb on the scale – that may impede such enforcement. Specifically, if a deployer or developer complies with the Act, then they get a rebuttable presumption that they complied with the Act’s requirement of “reasonable care” to protect people from algorithmic discrimination. In practice, this may make it harder for a person subjected to a high-risk AI system to prove their discrimination claim. Other civil rights laws generally do not have this kind of provision. Colorado should amend its Act to remove it.

Next Steps

Colorado is off to an important start. Now it should strengthen its AI Act, and should not weaken or further delay it. Other states must enact their own laws. All manner of automated decision-making systems are unfairly depriving people of jobs, health care, and more.

EFF has long been fighting against such practices. We believe technology should improve everyone’s lives, not subject them to abuse and discrimination. We hope you will join us.

Lawsuit Challenges San Jose’s Warrantless ALPR Mass Surveillance

Tue, 11/18/2025 - 1:11pm
EFF and the ACLU of Northern California Sue on Behalf of Local Nonprofits

Contact: Josh Richman, EFF, jrichman@eff.org;  Carmen King, ACLU of Northern California, cking@aclunc.org

SAN JOSE, Calif. – San Jose and its police department routinely violate the California Constitution by conducting warrantless searches of the stored records of millions of drivers’ private habits, movements, and associations, the Electronic Frontier Foundation (EFF) and American Civil Liberties Union of Northern California (ACLU-NC) argue in a lawsuit filed Tuesday.

The lawsuit, filed in Santa Clara County Superior Court on behalf of the Services, Immigrant Rights and Education Network (SIREN) and the Council on American-Islamic Relations – California (CAIR-CA), challenges San Jose police officers’ practice of searching for location information collected by automated license plate readers (ALPRs) without first getting a warrant.  

ALPRs are an invasive mass-surveillance technology: high-speed, computer-controlled cameras that automatically capture images of the license plates of every driver that passes by, without any suspicion that the driver has broken the law. 

“A person who regularly drives through an area subject to ALPR surveillance can have their location information captured multiple times per day,” the lawsuit says. “This information can reveal travel patterns and provide an intimate window into a person’s life as they travel from home to work, drop off their children at school, or park at a house of worship, a doctor’s office, or a protest. It could also reveal whether a person crossed state lines to seek health care in California.”

The San Jose Police Department has blanketed the city’s roadways with nearly 500 ALPRs – indiscriminately collecting millions of records per month about people’s movements – and keeps this data for an entire year. Then the department permits its officers and other law enforcement officials from across the state to search this ALPR database to instantly reconstruct people’s locations over time – without first getting a warrant. This is an unchecked police power to scrutinize the movements of San Jose’s residents and visitors as they lawfully travel to work, to the doctor, or to a protest. 

San Jose’s ALPR surveillance program is especially pervasive: Few California law enforcement agencies retain ALPR data for an entire year, and few have deployed nearly 500 cameras.  

The lawsuit, which names the city, its Police Chief Paul Joseph, and its Mayor Matt Mahan as defendants, asks the court to stop the city and its police from searching ALPR data without first obtaining a warrant. Location information reflecting people’s physical movements, even in public spaces, is protected under the Fourth Amendment according to U.S. Supreme Court case law. The California Constitution is even more protective of location privacy, at both Article I, Section 13 (the ban on unreasonable searches) and Article I, Section 1 (the guarantee of privacy). “The SJPD’s widespread collection and searches of ALPR information poses serious threats to communities’ privacy and freedom of movement.”

“This is not just about data or technology — it’s about power, accountability, and our right to move freely without being watched,” said CAIR-San Francisco Bay Area Executive Director Zahra Billoo. “For Muslim communities, and for anyone who has experienced profiling, the knowledge that police can track your every move without cause is chilling. San Jose’s mass surveillance program violates the California Constitution and undermines the privacy rights of every person who drives through the city. We’re going to court to make sure those protections still mean something.”

"The right to privacy is one of the strongest protections that our immigrant communities have in the face of these acts of violence and terrorism from the federal government," said SIREN Executive Director Huy Tran. "This case does not raise the question of whether these cameras should be used. What we need to guard against is a surveillance state, particularly when we have seen other cities or counties violate laws that prohibit collaborating with ICE. We can protect the privacy rights of our residents with one simple rule: Access to the data should only happen once approved under a judicial warrant.”  

For the complaint: https://www.eff.org/files/2025/11/18/siren_v._san_jose_-_filed_complaint.pdf

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs 

Tags: SIREN and CAIR-CA v. San Jose, Automated License Plate Readers (ALPRs), Street Level Surveillance

Speaking Freely: Benjamin Ismail

Tue, 11/18/2025 - 10:58am

Interviewer: Jillian York

Benjamin Ismail is the Campaign and Advocacy Director for GreatFire, where he leads efforts to expose the censorship apparatus of authoritarian regimes worldwide. He also oversees the App Censorship Project, including the AppleCensorship.com and GoogleCensorship.org platforms, which track mobile app censorship globally. From 2011 to 2017, Benjamin headed the Asia-Pacific desk at Reporters Without Borders (RSF).

Jillian York: Hi Benjamin, it's great to chat with you. We got to meet at the Global Gathering recently and we did a short video there and it was wonderful to get to know you a little bit. I'm going to start by asking you my first basic question: What does free speech or free expression mean to you?

Benjamin Ismail: Well, it starts with a very, very big question. What I have in mind is a cliche answer, but it's what I genuinely believe. I think about all freedoms. So when you say free expression, free speech, or freedom of information or Article 19, all of those concepts are linked together, I immediately think of all human rights at once. Because what I have seen during my current or past work is how that freedom is really the cornerstone of all freedom. If you don’t have that, you can’t have any other freedom. If you don’t have freedom of expression, if you don't have journalism, you don't have pluralism of opinions—you have self-censorship.

You have realities, violations, that exist but are not talked about, not exposed, not revealed, not tackled, and nothing really improves without that first freedom. I also think about Myanmar, because I remember going there in 2012, when the country had just opened after the democratic revolution. We got the chance to meet with many officials and ministers, and we got to tell them that they should start with that freedom, because their line was “don’t worry, don’t raise freedom of speech; freedom of the press will come in due time.”

And we were saying “no, that’s not how it works!” It doesn’t come in due time when other things are being worked on. It starts with that so you can work on other things. And so I remember very well those meetings and how actually, unfortunately, the key issues that re-emerged afterwards in the country were precisely due to the fact that they failed to truly implement free speech protections when the country started opening.

JY: What was your path to this work?

BI: This is a multi-faceted answer. So, I was studying Chinese language and civilization at the National Institute of Oriental Languages and Civilizations in Paris along with political science and international law. When I started that line of study, I considered maybe becoming a diplomat…that program led to preparing for the exams required to enter the diplomatic corps in France.

But I also heard negative feedback on the Ministry of Foreign Affairs and, notably, first-hand testimonies from friends and fellow students who had done internships there. I already knew that I had a little bit of an issue with authority. My experience as an assistant at Reporters Without Borders challenged the preconceptions I had about NGOs and civil society organizations in general. I was a bit lucky to come at a time when the organization was really trying to find its new direction, its new inspiration. So it was a brief phase where the organization itself was hungry for new ideas.

Being young and not very experienced, I was invited to share my inputs, my views—among many others of course. I saw that you can influence an organization’s direction, actions, and strategy, and see the materialization of those strategic choices. Such as launching a campaign, setting priorities, and deciding how to tackle issues like freedom of information, and the protection of journalists in various contexts.

That really motivated me and I realized that I would have much less to say if I had joined an institution such as the Ministry of Foreign Affairs. Instead, I was part of a human-sized group, about thirty-plus employees working together in one big open space in Paris.

After that experience I set my mind on joining the civil society sector, focusing on freedom of the press. When you work on journalistic issues, you get to touch on many different issues in many different regions, and I really like that. So even though it’s kind of monothematic, it's a single topic that encompasses everything at the same time.

I was dealing with safety issues for Pakistani journalists threatened by the Taliban. At the same time I followed journalists pressured by corporations such as TEPCO and the government in Japan for covering nuclear issues. I got to touch on many topics through the work of the people we were defending and helping. That’s what really locked me onto this specific human right.

 I already had my interest when I was studying in political and civil rights, but after that first experience, at the end of 2010, I went to China and got called by Reporters Without Borders. They told me that the head of the Asia desk was leaving and invited me to apply for the position. At that time, I was in Shanghai, working to settle down there. The alternative was accepting a job that would take me back to Paris but likely close the door on any return to China. Once you start giving interviews to outlets like the BBC and CNN, well… you know how that goes—RSF was not viewed favorably in many countries. Eventually, I decided it was a huge opportunity, so I accepted the job and went back to Paris, and from then on I was fully committed to that issue.

 JY: For our readers, tell us what the timeline of this was.

BI: I finished my studies in 2009. I did my internship with Reporters Without Borders that year and continued to work pro bono for the organization on the Chinese website in 2010. Then I went to China, and in January 2011, I was contacted by Reporters Without Borders about the departure of the former head of the Asia-Pacific Desk.

I did my first and last fact-finding mission in China, and went to Beijing. I met the artist Ai Weiwei in Beijing just a few weeks before he was arrested, around March 2011, and finally flew back to Paris and started heading the Asia desk. I left the organization in 2017. 

JY: Such an amazing story. I’d love to hear more about the work that you do now.

BI: The story of the work I do now actually starts in 2011. That was my first year heading the Asia-Pacific Desk. That same year, a group of anonymous activists based in China started a group called GreatFire. They launched their project with a website where you can type any URL you want, and the site will test the connection from mainland China to that URL and tell you if it’s accessible or blocked. They also kept the test records, so you can look at the history of the blocking of a specific website, which is great. That was GreatFire’s first project for monitoring web censorship in mainland China.
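The core of such a block-test is simple to sketch. The snippet below is a hypothetical, minimal version of the idea (all names are made up, and a real probe must run from a vantage point inside the censored network; run from anywhere else, it only measures ordinary availability):

```python
# Minimal sketch of a URL block-test in the spirit of GreatFire's tool.
# Repeated runs of probe() build up a timestamped blocking history.
import urllib.request
import urllib.error
from datetime import datetime, timezone

def probe(url: str, timeout: float = 10.0) -> dict:
    """Try to fetch a URL and record the outcome with a UTC timestamp."""
    record = {"url": url, "checked_at": datetime.now(timezone.utc).isoformat()}
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            record["status"] = resp.status
            record["accessible"] = 200 <= resp.status < 400
    except urllib.error.HTTPError as e:
        # Server answered with an error code: reachable, but not OK.
        record["status"] = e.code
        record["accessible"] = False
    except (urllib.error.URLError, TimeoutError) as e:
        # Timeouts, resets, and DNS failures are the typical signature
        # of a blocked URL (or of a host that simply doesn't exist).
        record["status"] = None
        record["accessible"] = False
        record["error"] = str(e)
    return record
```

A real monitoring service would store these records and distinguish blocking from ordinary outages by comparing probes from inside and outside the censored network.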

We started exchanging information, working on the issue of censorship in China. They continued to develop more projects which I tried to highlight as well. I also helped them to secure some funding. At the very beginning, they were working on these things as a side job. And progressively they managed to get some funding, which was very difficult because of the anonymity.

One of the things I remember is that I helped them get some funding from the EU through a mechanism called “Small Grants”, where every grant would be around €20,000 to €30,000. The EU, you know, is a bureaucratic entity, and they were demanding some paperwork and documents. I was telling them that they wouldn’t be able to get the real names of the people working at GreatFire, but that they should not be concerned, because what they wanted was to finance that tool. So if we could show them that the people they were going to send the money to were actually the people controlling that website, then it would be fine. And so we featured a little EU logo, just for one day I think, on the footer of the website so they could check that. And that’s how we convinced the EU to support GreatFire for that work.

There's also a tactic called “Collateral Freedom” that GreatFire uses very well.

The idea is that you host sensitive content on HTTPS servers that belong to companies which also operate inside China and are accessible there. Because it’s HTTPS, the connection is encrypted, so the authorities can’t just block a specific page—they can’t see exactly which page is being accessed. To block it, they’d have to block the entire service. Now, they can do that, but it comes at a higher political and economic cost, because it means disrupting access to other things hosted on that same service—like banks or major businesses. That’s why it’s called “collateral freedom”: you’re basically forcing the authorities to risk broader collateral damage if they want to censor your content.
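The cost asymmetry behind collateral freedom can be sketched in a few lines. This is an editorial illustration with hypothetical hostnames, not GreatFire’s actual tooling: over HTTPS, an on-path censor sees only the hostname, never the page path, so the only available move is blocking the whole host and everything else on it.

```python
# Illustrative sketch of the "collateral freedom" asymmetry (hypothetical
# names, not real infrastructure). Over HTTPS, an on-path censor can see
# the hostname (e.g. via the SNI field) but not the path, so it cannot
# block a single page -- it can only block the entire host.

from urllib.parse import urlparse

def censor_visible_part(url: str) -> str:
    """Return what an on-path observer can see for a request to this URL:
    over HTTPS, just the hostname; over plain HTTP, the full path too."""
    parts = urlparse(url)
    if parts.scheme == "https":
        return parts.hostname            # the path is inside the encrypted tunnel
    return parts.hostname + parts.path   # plain HTTP leaks the full URL

def blocking_decision(url: str, other_services_on_host: int) -> str:
    """A censor can only act on what it sees. If the visible part includes
    a path, precise page-level blocking is possible; otherwise the censor
    must take down the whole host -- the 'collateral'."""
    visible = censor_visible_part(url)
    if "/" in visible:
        return f"block single page {visible}"
    return (f"block entire host {visible}, disrupting "
            f"{other_services_on_host} unrelated services")

# A mirror of a censored site hosted on a big cloud host (hypothetical):
print(blocking_decision("https://cdn.example-cloud.com/mirror/banned-news/", 5000))
# The same content on a small plain-HTTP site is trivially blockable:
print(blocking_decision("http://smallblog.example.org/banned-article.html", 0))
```

The sketch deliberately ignores real-world complications (encrypted SNI, IP-level blocking, DNS tampering); it only models the political-cost trade-off described above.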

When I was working for RSF, I proposed that we replicate that tactic on the 12th of March—that's the World Day Against Cyber Censorship. We had the habit of publishing what we called the “Enemies of the Internet” report, where we would highlight and update the situation in the countries carrying out the harshest repression online—countries like Iran, Turkmenistan, North Korea, and of course, China. I suggested in a team meeting: “What if we highlighted the good guys? Maybe we could highlight 10 exiled media outlets and use collateral freedom to uncensor them.” And so we did: some Iranian, Egyptian, Chinese, and Turkmen media were uncensored using mirrors hosted on HTTPS servers owned by big, and thus harder to block, companies...and that’s how we started to do collateral freedom, and it continued to be an annual thing.

I also helped in my personal capacity, including after I left Reporters Without Borders. After I left RSF, I joined another NGO focusing on China, which I also knew from my time at RSF. I worked with that group for a year and a half; then GreatFire contacted me to work on a specific website. So here we are, at the beginning of 2020: they had just started a website called Applecensorship.com that allowed users to test the availability of any app in any of Apple’s 175 App Stores worldwide. They needed a better website—one that allowed advocacy content—for that tool.

The idea was to make a website useful for academics doing research, journalists investigating app store censorship and control, and human rights NGOs and civil society organizations interested in the availability of particular tools. Apple’s censorship in China started quickly after the company entered the Chinese market in 2010.

In 2013, one of GreatFire’s projects, which had been turned into an iOS app, was removed by Apple 48 hours after its release on the App Store, at the demand of the Chinese authorities. That project was Free Weibo, a website that features censored posts from Weibo, the Chinese equivalent of Twitter—we crawl social media, detect censored posts, and republish them on the site. In 2017 it was reported that Apple had removed all VPNs from the Chinese App Store.

That episode in 2013, together with Apple’s growing censorship in China (and in other places too), led to the creation of AppleCensorship in 2019. GreatFire asked me to work on that website. The transformation into an advocacy platform was successful. I then started working full time on that project, which has since evolved into the App Censorship Project and now includes another website, googlecensorship.org (offering features similar to Applecensorship.com, but for the 224 Play Stores worldwide). In the meantime, I became the head of campaigns and advocacy, because of my background at RSF.

JY: I want to ask you, looking beyond China: what are some other places in the world that you're concerned about at the moment, whether professionally or just as a person? What are you seeing right now in terms of global trends around free expression that worry you?

BI: I think, like everyone else, that what we're seeing in Western democracies—in the US and even in Europe—is concerning. But I'm still more concerned about authoritarian regimes than about our democracies. Maybe it's a case of not learning my lesson or of naive optimism, but I'm still more concerned about China and Russia than I am about what I see in France, the UK, or the US.

There has been some recent reporting about China developing very advanced censorship and surveillance technologies and exporting them to other countries like Myanmar and Pakistan. As for what we’re seeing in Russia—I’m not an expert on that region, but back in 2022 we heard experts saying that Russia was trying to increase its censorship and control, but that it couldn’t become like China, because China had exerted control over its internet from the very beginning: they blocked Facebook back in 2009, then Google was pushed out by the authorities (and the market). And the Chinese authorities successfully filled the gaps left by the absence of those Western companies.

Some researchers working on Russia were saying that it wasn’t really possible for Russia to do what China had done, because Russia was unprepared, while China had been engineering it for more than a decade. What we are seeing now is that Russia is close to being able to close its Internet, to close the country, and to replace services with its own controlled ones. It’s not identical, but it’s also kind of replicating what China has been doing. And that’s a very sad observation to make.

Beyond the digital realm, the question of how far Putin is willing to go in escalating is concerning. As a human being and an inhabitant of the European continent, I’m concerned by the ability of a country like Russia to isolate itself while waging a war. Russia is engaged in a real war and at the same time is able to completely digitally close down the country. Between that and the example of China exporting censorship, I’m not far from thinking that in ten or twenty years we’ll have a completely splintered internet.

JY: Do you feel like having a global perspective like this has changed or reshaped your views in any way?

BI: Yes, in the sense that when you start working with international organizations, and you start hearing about the world and how human rights are universal values, and you get to meet people and go to different countries, you really get to experience how universal those freedoms and aspirations are. When I worked at RSF and lobbied governments to pass a good law or abolish a repressive one, or when I worked on the case of a jailed journalist or blogger, I got to talk to authorities and hear weird justifications from certain governments (not mentioning any names, but Myanmar and Vietnam) like “those populations are different from the French,” and I would receive pushback that the ideas of freedom I was describing were not applicable to their societies. It’s a bit destabilizing when you hear that for the first time. But as you gain experience, you can clearly explain why human rights are universal and why different populations shouldn’t be ruled differently when it comes to human rights.

Everyone wants to be free. This notion of “universality” is comforting because when you’re working for something universal, the argument is there. The freedoms you defend can’t be challenged in principle, because everyone wants them. If governments and authorities really listened to their people, they would hear them calling for those rights and freedoms.

Or that’s what I used to think. Now we hear this growing rhetoric that we (people from the West) are exporting democracy, that it’s a Western value and not a universal one. This discourse, notably developed by Xi Jinping in China, which frames “Western democracy” as a new concept, is a complete fallacy. Democracy was invented in the West, but democracy is universal. Unfortunately, I now believe that in the future we will have to argue much more strongly for the universality of concepts like democracy, human rights, and fundamental freedoms.

JY: Thank you so much for this insight. And now for our final question: Do you have a free speech hero?

BI: No.

JY: No? No heroes? An inspiration maybe.

BI: On the contrary, I’ve been disappointed so much by certain figures that were presented as human rights heroes…Like Aung San Suu Kyi during the Rohingya crisis, on which I worked when I was at RSF.

Myanmar officially recognizes 135 ethnic groups, but somehow this one additional ethnic minority (the Rohingya) is impossible for them to accept. It’s appalling. It’s weird to say, but some heroes are not really good people either. Being a hero is doing a heroic action, but people who do heroic actions can also do very bad things before or after, at a different level. They can be terrible people, husbands, or friends and be a “human rights” hero at the same time.

Some people really inspired me but they’re not public figures. They are freedom fighters, but they are not “heroes”. They remain in the shadows. I know their struggles; I see their determination, their conviction, and how their personal lives align with their role as freedom fighters. These are the people who truly inspire me.
