EFF, ACLU Urge Appeals Court to Revive Challenge to Los Angeles’ Collection of Scooter Location Data
San Francisco—The Electronic Frontier Foundation and the ACLU of Northern and Southern California today asked a federal appeals court to reinstate a lawsuit they filed on behalf of electric scooter riders challenging the constitutionality of Los Angeles’ highly privacy-invasive collection of detailed trip data and real-time locations and routes of scooters used by thousands of residents each day.
The Los Angeles Department of Transportation (LADOT) collects from operators of dockless vehicles like Lyft, Bird, and Lime information about every single scooter trip taken within city limits. It uses software it developed to gather location data through Global Positioning System (GPS) trackers on scooters. The system doesn’t capture the identity of riders directly, but collects with precision riders’ location, routes, and destinations to within a few feet, which can easily be used to reveal the identities of riders.
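To illustrate why precise trip data is identifying even without names, here is a minimal, hypothetical sketch (invented coordinates and timestamps, not LADOT's actual data format) of how repeated early-morning trip origins point to a rider's home block:

```python
from collections import Counter

# Hypothetical per-trip records: (trip_id, start latlon, end latlon, start time).
trips = [
    ("t1", (34.0522, -118.2437), (34.0407, -118.2468), "2021-03-01T08:05"),
    ("t2", (34.0522, -118.2437), (34.0610, -118.2380), "2021-03-02T08:10"),
    ("t3", (34.0522, -118.2437), (34.0407, -118.2468), "2021-03-03T08:02"),
]

def block(latlon, places=3):
    # Rounding to ~100 m groups repeat visits to the same city block.
    return (round(latlon[0], places), round(latlon[1], places))

# The most common early-morning trip origin is very likely the rider's home block,
# which can then be matched against address records to name the rider.
morning_origins = Counter(
    block(start)
    for _, start, _, ts in trips
    if 7 <= int(ts[11:13]) <= 9
)
likely_home, visits = morning_origins.most_common(1)[0]
print(likely_home, visits)
```

The same grouping trick works on destinations (workplaces, clinics, places of worship), which is why per-trip GPS records are never meaningfully anonymous.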
A lower court erred in dismissing the case, EFF and the ACLU said in a brief filed today in the U.S. Court of Appeals for the Ninth Circuit. The court incorrectly determined that the practice, unprecedented in both its invasiveness and scope, didn’t violate the Fourth Amendment. It also failed in its duty to credit the plaintiff’s allegations as true, and abused its discretion by dismissing the case without allowing the riders to amend the lawsuit to fix defects in the original complaint, as federal rules require.
“Location data can reveal detailed, sensitive, and private information about riders, such as where they live, who they work for, who their friends are, and when they visit a doctor or attend political demonstrations,” said EFF Surveillance Litigation Director Jennifer Lynch. “The lower court turned a blind eye to Fourth Amendment principles. And it ignored Supreme Court rulings establishing that, even when location data like scooter riders’ GPS coordinates are automatically transmitted to operators, riders are still entitled to privacy over the information because of the sensitivity of location data.”
The city has never presented a justification for this dragnet collection of location data, including in this case, and has said it’s an “experiment” to develop policies for motorized scooter use. Yet the lower court decided on its own that the city needs the data and disregarded plaintiff Justin Sanchez’s statements that none of Los Angeles’ potential uses for the data necessitates collection of all riders’ granular and precise location information en masse.
“LADOT’s approach to regulating scooters is to collect as much location data as possible, and to ask questions later,” said Mohammad Tajsar, senior staff attorney at the ACLU of Southern California. “Instead of risking the civil rights of riders with this data grab, LADOT should get back to the basics: smart city planning, expanding poor and working people’s access to affordable transit, and tough regulation on the private sector.”
The lower court also incorrectly dismissed Sanchez’s claims that the data collection violates the California Electronic Communications Privacy Act (CalECPA), which prohibits the government from accessing electronic communications information without a warrant or other legal process. The court’s mangled and erroneous interpretation of CalECPA—that only courts that have issued or are in the process of issuing a warrant can decide whether the law is being violated—would, if allowed to stand, severely limit the ability of people subjected to warrantless collection of their data to ever sue the government.
“The Ninth Circuit should overturn dismissal of this case because the lower court made numerous errors in its handling of the lawsuit,” said Lynch. “The plaintiffs should be allowed to file an amended complaint and have a jury decide whether the city is violating riders’ privacy rights.”
Why should you care about data brokers? Reporting this week about a Substack publication outing a priest with location data from Grindr shows once again how easy it is for anyone to take advantage of data brokers’ stores to cause real harm.
This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers. The Norwegian Consumer Council singled it out in its 2020 “Out of Control” report, before the Norwegian Data Protection Authority fined Grindr earlier this year. The report specifically warned that the app’s data-mining practices could put users at serious risk in places where homosexuality is illegal.
But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling real users’ consent.
Apps and data brokers claim they are only sharing so-called “anonymized” data. But that’s simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as “anonymous” location data. Data points like one’s home or workplace are identifiers in themselves, and a malicious observer can connect movements to these and other destinations. In this case, that includes gay bars and private residences.
Another piece of the puzzle is the ad ID, another so-called “anonymous” identifier attached to a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale.
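A rough sketch of how such a join works (both datasets and all field names are invented for illustration; real broker data is far richer):

```python
# Hypothetical sketch: an ad ID is "anonymous" only until someone joins it
# against another dataset keyed on the same ID.

app_events = [  # events an app shares with third parties
    {"ad_id": "A1B2", "app": "dating-app", "lat": 40.74, "lon": -73.99},
]
resolution_graph = {  # lookup table held by an "identity resolution" broker
    "A1B2": {"email": "person@example.com", "name": "J. Doe"},
}

def resolve(events, graph):
    # A simple key join turns "anonymous" app events into a named profile
    # of app usage and locations.
    return [{**event, **graph.get(event["ad_id"], {})} for event in events]

profiles = resolve(app_events, resolution_graph)
print(profiles[0])
```

Because the same ad ID appears in every dataset a device's apps leak into, one successful link anywhere deanonymizes the device everywhere.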
All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands. We’ve said it before and we’ll say it again: metadata matters.
That’s why the U.S. needs comprehensive data privacy regulation more than ever. This kind of abuse is not inevitable, and it must not become the norm.
Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty
As the Council of Europe’s flawed cross border surveillance treaty moves through its final phases of approval, time is running out to ensure cross-border investigations occur with robust privacy and human rights safeguards in place. The innocuously named “Second Additional Protocol” to the Council of Europe’s (CoE) Cybercrime Convention seeks to set a new standard for law enforcement investigations—including those seeking access to user data—that cross international boundaries, and would grant a range of new international police powers.
But the treaty’s drafting process has been deeply flawed, with civil society groups, defense attorneys, and even data protection regulators largely sidelined. We hope that the CoE’s Parliamentary Assembly (PACE), which is next in line to review the draft Protocol, will give us the opportunity to present our privacy and human rights concerns, and will take them seriously as it formulates its opinion and recommendations before the CoE’s final body of approval, the Council of Ministers, decides the Protocol’s fate. According to the Terms of Reference for the preparation of the Draft Protocol, the Council of Ministers may consider inviting parties “other than member States of the Council of Europe to participate in this examination.”
The CoE relies on committees to generate the core draft of treaty texts. In this instance, the CoE’s Cybercrime Committee (T-CY) Plenary negotiated and drafted the Protocol’s text with the assistance of a drafting group consisting of representatives of State Parties. The process, however, has been fraught with problems. To begin with, T-CY’s Terms of Reference for the drafting process drove a lengthy, non-inclusive procedure that relied on closed sessions (Article 4.3 T-CY Rules of Procedure). While the Terms of Reference allow the T-CY to invite individual subject matter experts on an ad hoc basis, key voices such as data protection authorities, civil society experts, and criminal defense lawyers were mostly sidelined. Instead, the process has been largely commandeered by law enforcement, prosecutors, and public safety officials.
Earlier in the process, in April 2018, EFF, CIPPIC, EDRI and 90 civil society organizations from across the globe requested the COE Secretariat General provide more transparency and meaningful civil society participation as the treaty was being negotiated and drafted—and not just during the CoE’s annual and somewhat exclusive Octopus Conferences. However, since T-CY began its consultation process in July 2018, input from external stakeholders has been limited to Octopus Conference participation and some written comments. Civil society organizations were not included in the plenary groups and subgroups where text development actually occurs, nor was our input meaningfully incorporated.
Compounding matters, the T-CY’s final online consultation, where the near-final draft text of the Protocol was first presented to external stakeholders, provided only a 2.5-week window for input. The draft text included many new and complex provisions, including the Protocol’s core privacy safeguards, but excluded key elements such as the explanatory text that would normally accompany those safeguards. As was flagged by civil society, privacy regulators, and even the CoE’s own data protection committee, two and a half weeks is not enough time to provide meaningful feedback on such a complex international treaty. More than anything, this short consultation window gave the impression that T-CY’s external consultations were merely performative.
Despite these myriad shortcomings, the Council of Ministers (the CoE’s final statutory decision-making body, comprising member States’ Foreign Affairs Ministers) responded to our process concerns by arguing that external stakeholders had been consulted during the Protocol’s drafting process. Even more oddly, the Council of Ministers justified the demonstrably curtailed final consultation period by invoking its desire to complete the Protocol by the 20th anniversary of the CoE’s Budapest Cybercrime Convention (that is, by November 2021).
We respectfully disagree with the Ministers’ response. If the T-CY wished to meet its November 2021 deadline, it had many options open to it. For instance, it could have included external stakeholders from civil society and from privacy regulators in its drafting process, as it had been urged to do on multiple occasions.
More importantly, this is a complex treaty with wide ranging implications for privacy and human rights in countries across the world. It is important to get it right, and ensure that concerns from civil society and privacy regulators are taken seriously and directly incorporated into the text. Unfortunately, as the text stands, it raises many substantive problems, including the lack of systematic judicial oversight in cross-border investigations and the adoption of intrusive identification powers that pose a direct threat to online anonymity. The Protocol also undermines key data protection safeguards relating to data transfers housed in central instruments like the European Union’s Law Enforcement Directive and the General Data Protection Regulation.
The Protocol now stands with the CoE’s PACE, which will issue an opinion on the Protocol and might recommend some additional changes to its substantive elements. It will then fall to the CoE’s Council of Ministers to decide whether to accept any of PACE’s recommendations and adopt the Protocol, a step we still anticipate will occur in November. Together with CIPPIC, EDRi, Derechos Digitales, and NGOs around the world, we hope that PACE takes our concerns seriously, and that the Council produces a treaty that puts privacy and human rights first.
As part of a larger redesign, the payment app Venmo has discontinued its public “global” feed. That means the Venmo app will no longer show you strangers’ transactions—or show strangers your transactions—all in one place. This is a big step in the right direction. But, as the redesigned app rolls out to users over the next few weeks, it’s unclear what Venmo’s defaults will be going forward. If Venmo and parent company PayPal are taking privacy seriously, the app should make privacy the default, not just an option still buried in the settings.
Currently, all transactions and friends lists on Venmo are public by default, painting a detailed picture of who you live with, where you like to hang out, who you date, and where you do business. It doesn’t take much imagination to come up with all the ways this could cause harm to real users, and the gallery of Venmo privacy horrors is well-documented at this point.
However, Venmo apparently has no plans to make transactions private by default at this point. That would squander the opportunity it has right now to finally be responsive to the concerns of Venmo users, journalists, and advocates like EFF and Mozilla. We hope Venmo reconsiders.
There’s nothing “social” about sharing your credit card statement with your friends.
Even a seemingly positive move from “public” to “friends-only” defaults would maintain much of Venmo’s privacy-invasive status quo. That’s in large part because of Venmo’s track record of aggressively hoovering up users’ phone contacts and Facebook friends to populate their Venmo friends lists. Venmo’s installation process nudges users towards connecting their phone contacts and Facebook friends to Venmo. From there, the auto-syncing can continue silently and persistently, stuffing your Venmo friends list with people you did not affirmatively choose to connect with on the app. In some cases, there is no option to turn this auto-syncing off. There’s nothing “social” about sharing your credit card statement with a random subset of your phone contacts and Facebook friends, and Venmo should not make that kind of disclosure the default.
It’s also unclear if Venmo will continue to offer a “public” setting now that the global feed is gone. Public settings would still expose users’ activities on their individual profile pages and on Venmo’s public API, leaving them vulnerable to the kind of targeted snooping that Venmo has become infamous for.
We were pleased to see Venmo recently take the positive step of giving users settings to hide their friends lists. Throwing out the creepy global feed is another positive step. Venmo still has time to make transactions and friends lists private by default, and we hope it makes the right choice.
If you haven’t already, change your transaction and friends list settings to private by following the steps in this post.
On June 17th, the best legal minds in the Bay Area gathered together for a night filled with tech law trivia—but there was a twist! With in-person events still on the horizon, EFF's 13th Annual Cyberlaw Trivia Night moved to a new browser-based virtual space, custom built in Gather. This 2D environment allowed guests to interact with other participants using video, audio, and text chat, based on proximity in the room.
EFF's staff joined forces to craft the questions, pulling details from the rich canon of privacy, free speech, and intellectual property law to create four rounds of trivia for this year's seven competing teams.
As the evening began, contestants explored the virtual space and caught up with each other, but the time for trivia was soon at hand! After welcoming everyone to the event, our intrepid Quiz Master Kurt Opsahl introduced our judges Cindy Cohn, Sophia Cope, and Mukund Rathi. Attendees were then asked to meet at their team's private table, allowing them to freely discuss answers without other teams overhearing, and so the trivia began!
Everyone got off to a great start for the General Round 1 questions, featuring answers that ranged from winged horses to Snapchat filters. For the Intellectual Property Round 2, the questions proved more challenging, but the teams quickly rallied for the Privacy & Free Speech Round 3. With no clear winners so far, teams entered the final 4th round hoping to break away from the pack and secure 1st place.
But a clean win was not to be!
Durie Tangri's team "The Wrath of (Lina) Khan" and Fenwick's team "The NFTs: Notorious Fenwick Trivia" were still tied for first! Always prepared for such an occurrence, the teams headed into a bonus Tie-Breaker round to settle the score. Or so we thought...
After extensive deliberation, the judges arrived at their decision and announced that "The Wrath of (Lina) Khan" had the closest-to-correct answer and was the 1st place winner, with "The NFTs: Notorious Fenwick Trivia" coming in 2nd, and Ridder, Costa & Johnstone's team "We Invented Email" coming in 3rd. Easy, right? No!
Fenwick appealed to the judges, arguing that under official "Price is Right" rules, the answer closest to correct without going over should receive the tie-breaker point: cue more extensive deliberation (lawyers). Turns out...they had a pretty good point. Motion for Reconsideration: Granted!
But what to do when the winners had already been announced?
Two first place winners, of course! Which also meant that Ridder, Costa & Johnstone's team "We Invented Email" moved into the 2nd place spot, and Facebook's team "Whatsapp" were the new 3rd place winners! Whew! Big congratulations to both winners, enjoy your bragging rights!
EFF's legal interns also joined in the fun, and their team name "EFF the Bluebook" followed the proud tradition of having an amazing team name, despite The Rules stating they were unable to formally compete.
EFF hosts the Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for their users. Among the many firms that continue to dedicate their time, talent, and resources to the cause, we would especially like to thank Durie Tangri LLP; Fenwick; Ridder, Costa & Johnstone LLP; and Wilson Sonsini Goodrich & Rosati LLP for sponsoring this year’s Bay Area event.
If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist them ourselves.
Are you interested in attending or sponsoring an upcoming Trivia Night? Please email email@example.com for more information.
The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”) pose huge problems for free expression and Internet users’ privacy. They include dangerous requirements for platforms to identify the origins of messages and pre-screen content, which fundamentally breaks strong encryption for messaging tools. Though WhatsApp and others are challenging the rules in court, the 2021 Rules have already gone into effect.
Three UN Special Rapporteurs—the Rapporteurs for Freedom of Expression, Privacy, and Association—heard and in large part affirmed civil society’s criticism of the 2021 Rules, acknowledging that they did “not conform with international human rights norms.” Indeed, the Rapporteurs raised serious concerns that Rule 4 of the guidelines may compromise the right to privacy of every internet user, and called on the Indian government to carry out a detailed review of the Rules and to consult with all relevant stakeholders, including NGOs specializing in privacy and freedom of expression.
The 2021 Rules contain two provisions that are particularly pernicious: the Rule 4(4) Content Filtering Mandate and the Rule 4(2) Traceability Mandate.

Content Filtering Mandate
Rule 4(4) compels content filtering, requiring that providers be able to review the content of communications, which not only fundamentally breaks end-to-end encryption but also creates a system for censorship. Significant social media intermediaries (i.e., Facebook, WhatsApp, Twitter, etc.) must “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules. This cannot be done without breaking the higher-level promises of secure end-to-end encrypted messaging.
Client-side scanning has been proposed as a way to enforce content blocking without technically breaking end-to-end encryption. The idea is that the user’s own device could use its knowledge of the unencrypted content to enforce restrictions by refusing to transmit, or perhaps to display, certain prohibited information, without revealing to the service provider who was attempting to communicate or view that information. But this is wrong: client-side scanning puts a robot spy in the room. A conversation with a spy in the room is not a private conversation, and a robot spy embedded in your software is a spy just as much as a human one.
As we explained last year, client-side scanning inherently breaks the higher-level promises of secure end-to-end encrypted communications. If the provider controls what's in the set of banned materials, they can test against individual statements, so a test against a set of size 1, in practice, is the same as being able to decrypt a message. And with client-side scanning, there's no way for users, researchers, or civil society to audit the contents of the banned materials list.
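The point about a banned list of size 1 can be made concrete with a small sketch (a hypothetical scanner, using a plain SHA-256 match for simplicity; real proposals use perceptual hashes, which does not change the argument):

```python
import hashlib

def scanner_reports(message: str, banned_hashes: set) -> bool:
    # A client-side scanner checks each outgoing message against a
    # provider-supplied list of banned hashes and reports any match.
    return hashlib.sha256(message.encode()).hexdigest() in banned_hashes

# If the provider controls the list, it can place a single guessed message
# on it and learn whether a user said exactly that.
guess = "meet at the protest at 5pm"
targeted_list = {hashlib.sha256(guess.encode()).hexdigest()}  # set of size 1

print(scanner_reports("meet at the protest at 5pm", targeted_list))  # match reported
print(scanner_reports("see you tomorrow", targeted_list))            # no match
```

For the targeted phrase, the provider learns message by message whether its guess matches, which is functionally equivalent to decrypting those messages, and the hashed list itself is unauditable by users or researchers.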
The Indian government frames the mandate as directed toward terrorism, obscenity, and the scourge of child sexual abuse material, but the mandate is actually much broader. It also imposes proactive and automatic enforcement of the 2021 Rules’ Section 3(1)(d) content takedown provisions, requiring the proactive blocking of material previously held to be “information which is prohibited under any law,” including specifically laws for the protection of “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation,” and incitement to any such act. This includes the widely criticized Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers, and poets for leading rallies and posting political messages on social media.
This broad mandate is all that is necessary to automatically suppress dissent, protest, and political activity that a government does not like, before it can even be transmitted. The Indian government's response to the Rapporteurs dismisses this concern, writing “India's democratic credentials are well recognized. The right to freedom of speech and expression is guaranteed under the Indian Constitution.”
The response misses the point. Even if a democratic state applies this incredible power to preemptively suppress expression only rarely and within the bounds of internationally recognized rights to freedom of expression, Rule 4(4) puts in place the toolkit for an authoritarian crackdown, automatically enforced not only in public discourse, but even in private messages between two people.
Part of a commitment to human rights in a democracy is civic hygiene: refusing to create the tools of undemocratic power.
Moreover, rules like these give comfort and credence to authoritarian efforts to enlist intermediaries in their crackdowns. If this Rule were available to China, word for word, it could be used to require social media companies to block images of Winnie the Pooh from being transmitted, as has happened in China, even in direct “encrypted” messages.
Automated filters also violate due process, reversing the burden of censorship. As the three UN Special Rapporteurs made clear, a
general monitoring obligation that will lead to monitoring and filtering of user-generated content at the point of upload ... would enable the blocking of content without any form of due process even before it is published, reversing the well-established presumption that States, not individuals, bear the burden of justifying restrictions on freedom of expression.

Traceability Mandate
The traceability provision, Rule 4(2), requires any large social media intermediary that provides messaging services to “enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. The Decryption Rules allow authorities to request the interception or monitoring of any decrypted information generated, transmitted, received, or stored in any computer resource.
The Indian government responded to the Rapporteur report, claiming to honor the right to privacy:
“The Government of India fully recognises and respects the right of privacy, as pronounced by the Supreme Court of India in the K.S. Puttaswamy case. Privacy is the core element of an individual's existence and, in light of this, the new IT Rules seeks information only on a message that is already in circulation that resulted in an offence.”
This narrow view of Rule 4(2) is fundamentally mistaken. Implementing the Rule requires the messaging service to collect information about all messages, even before any content is deemed a problem, allowing the government to conduct surveillance with a time machine. This changes the security model and prevents implementing the strong encryption that is a fundamental backstop for protecting human rights in the digital age.

The Danger to Encryption
Both the traceability and filtering mandates endanger encryption, calling for companies to know detailed information about each message that their encryption and security designs would otherwise allow users to keep private. Strong end-to-end encryption means that only the sender and the intended recipient know the content of communications between them. Even if the provider only compares two encrypted messages to see if they match, without directly examining the content, this reduces security by allowing more opportunities to guess at the content.
It is no accident that the 2021 Rules are attacking encryption. Riana Pfefferkorn, Research Scholar at the Stanford Internet Observatory, wrote that the rules were intentionally aimed at end-to-end encryption since the government would insist on software changes to defeat encryption protections:
Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: Under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said.
The 2021 Rules’ unstated requirement to break encryption goes beyond the mandate of the Information Technology (IT) Act, which authorized the 2021 Rules. India’s Centre for Internet & Society’s detailed legal and constitutional analysis of the Rules explains: “There is nothing in Section 79 of the IT Act to suggest that the legislature intended to empower the Government to mandate changes to the technical architecture of services, or undermine user privacy.” Both are required to comply with the Rules.
There are better solutions. For example, WhatsApp found a way to discourage massive chain forwarding of messages without itself knowing the content. The app notes the number of times a message has been forwarded inside the message itself, so the app can change its behavior based on that count. Since the forwarding count is inside the encrypted message, the WhatsApp server and company never see it. So your app might not let you forward a chain letter, because the letter’s payload shows it was massively forwarded, but the company can’t look at the encrypted message and learn its content.
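The forwarding-count design described above can be sketched roughly like this (a toy model: the field names, limit value, and API are invented, and real WhatsApp behavior differs in detail; the point is only that the counter lives in the payload the server never decrypts):

```python
FORWARD_LIMIT = 5  # hypothetical cap on forwarding hops

def make_message(text: str) -> dict:
    # The whole dict stands in for the end-to-end encrypted payload:
    # only sender and recipient devices ever see its fields.
    return {"text": text, "forward_count": 0}

def forward(message: dict) -> dict:
    # Runs on the sender's device, on the decrypted payload.
    if message["forward_count"] >= FORWARD_LIMIT:
        raise PermissionError("highly forwarded message: forwarding limited")
    return {"text": message["text"],
            "forward_count": message["forward_count"] + 1}

msg = make_message("chain letter")
for _ in range(FORWARD_LIMIT):
    msg = forward(msg)
# The next forward is refused by the app itself; the server only ever
# relayed opaque ciphertext and never saw the text or the counter.
```

The enforcement logic runs entirely client-side on data the client can already read, so no new visibility into message content is granted to the provider.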
Likewise, empowering users to report content can mitigate many of the harms that inspired the 2021 Rules. The key principle of end-to-end encryption is that a message gets securely to its destination without interception by eavesdroppers. This does not prevent the recipient from reporting abusive or unlawful messages, including the now-decrypted content and the sender’s information. An intermediary can facilitate user reporting while still providing the strong encryption necessary for a free society. Furthermore, there are cryptographic techniques that let a user report abuse in a way that identifies the abusive or unlawful content without making complaints forgeable, while preserving the privacy of people not directly involved.
The 2021 Rules endanger encryption, weakening the privacy and security of ordinary people throughout India while creating tools that could all too easily be misused against fundamental human rights, and that offer inspiration to authoritarian regimes throughout the world. The Rules should be withdrawn, reviewed, and reconsidered with the voices of civil society and advocates for international human rights at the table, to ensure the Rules help protect and preserve fundamental rights in the digital age.
Years ago, we noted that despite being one of the world’s largest economies, the state of California had no broadband plan for universal, affordable, high-speed access. It is clear that access that meets our needs requires fiber optic infrastructure, yet most Californians were stuck with slow broadband monopolies due to laws supported by the cable monopolies providing us with terrible service. For example, under a 2017 state law supported by large private ISPs, the state was deploying obsolete copper DSL connections to rural communities instead of building out fiber optics. But all of that is finally coming to an end, thanks to your efforts.
Today, Governor Newsom signed into law one of the largest state investments in public fiber in the history of the United States. No longer will the state of California simply defer to the whims of AT&T and cable for broadband access; now every community is being given its shot to choose its broadband destiny.

How Did We Get a New Law?
California’s new broadband infrastructure program was made possible through a combination of persistent statewide activism from all corners, political leadership by people such as Senator Lena Gonzalez, and investment funding from the American Rescue Plan passed by Congress. All of these things led up to the moment when Governor Newsom introduced the multi-billion-dollar broadband budget being signed into law today. Make no mistake: every single time you picked up the phone or emailed your legislator to ask for affordable, high-speed access for all people, it made a difference, because it set the stage for today.
Arguably, what pushed us to this moment was the image of kids doing homework in fast-food parking lots during the pandemic, which made it undeniable that internet access was neither universal nor adequate in speed and capacity. That moment, captured and highlighted by Monterey County Supervisor Luis Alejo, a former member of the California State Assembly, forced a reckoning with the failures of the current broadband ecosystem. Coupled with the COVID-19 pandemic forcing schools to burn countless millions of public dollars renting inferior mobile hotspots, Sacramento finally had enough and voted unanimously to change course.

What is California’s New Broadband Infrastructure Program and Why is it a Revolution?
California’s new broadband program approaches the problem on multiple fronts. It empowers local public entities, local private actors, and the state government itself to be the source of the solution. The state government will build open-access fiber capacity to all corners of the state. This will ensure that every community has multi-gigabit capacity available to suit their current and future broadband needs. Low-interest financing under the state’s new $750 million “Loan Loss Reserve” program will enable municipalities and county governments to issue broadband bonds to finance their own fiber. An additional $2 billion is available in grants for unserved pockets of the state for private and public applicants.
The combination of these three programs provides solutions that were off the table before the governor signed this law. For example, a rural community can finance a portion of its own fiber network with low-interest loans and bonds, seek grants for the most expensive unserved pockets, and connect with the state’s own fiber network at affordable prices. In a major city, a small private ISP or local school district can apply for a grant to provide broadband to an unserved low-income neighborhood. Even in high-tech cities such as San Francisco, an estimated 100,000 residents lack broadband access in low-income areas, proving that access is a widespread, systemic problem, not just a rural one, that requires an all-hands-on-deck approach.
The revolution here is the fact that the law does not rely on AT&T, Frontier Communications, Comcast, and Charter to solve the digital divide. Quite simply, the program makes very little of the total $6 billion budget available to these large private ISPs, which have already received so much money and still failed to deliver a solution. This is an essential first step towards reaching near-universal fiber access, because it was never going to happen through the large private ISPs, who are tethered to fast profits and short-term investor expectations that prevent them from pursuing universal fiber access. What the state needed was to empower local partners in the communities themselves who will take on the long-term infrastructure challenge.
If you live in California, now is the time to talk to your mayor and city council about your future broadband needs. Now is the time to talk to your local small businesses about the future the state has enabled if they need to improve their broadband connectivity. Now is the time to talk to your school district about what they can do to improve community infrastructure for local students. Maybe you yourself have the will and desire to build your own local broadband network through this law.
All of these things are now possible because for the first time in state history there is a law in place that lets you decide the broadband future.
Pegasus Project Shows the Need for Real Device Security, Accountability and Redress for those Facing State-Sponsored Malware
People all around the world deserve the right to have a private conversation. Communication privacy is a human right, a civil liberty and one of the centerpieces of a free society. And while we all deserve basic communications privacy, the journalists, NGO workers and human rights and democracy activists among us are especially at risk, since they are often at odds with powerful governments.
So it is no surprise that people around the world are angry to learn that surveillance software sold by NSO Group to governments has been found on cellphones worldwide. Thousands of NGO workers, human rights and democracy activists, government employees, and many others have been targeted and spied upon. We share that anger, and we are thankful for the work done by Amnesty International, the countless journalists at Forbidden Stories, and Citizen Lab to bring this awful situation to light.
"A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity."
Like many others, EFF has warned for years of the dangers of misuse of powerful state-sponsored malware. Yet the stories just keep coming about malware being used to surveil and track journalists and human rights defenders who are then murdered, including Jamal Khashoggi and Cecilio Pineda-Birto. And still we have failed to ensure real accountability for the governments and companies responsible.
What can be done to prevent this? How do we create accountability and ensure redress? It’s heartening that both South Africa and Germany have recently banned dragnet communications surveillance, in part because there was no way to protect the essential private communications of journalists and privileged communications of lawyers. All of us deserve privacy, but lawyers, journalists and human rights defenders are at special risk because of their often adversarial relationship with powerful governments. Of course, the dual-use nature of targeted surveillance like the malware that NSO sells is trickier, since it is allowable under human rights law when it is deployed under proper “necessary and proportionate” limits. But that doesn’t mean we are helpless. In fact, we have suggestions on both prevention and accountability.
First, and beyond question, we need real device security. While all software can be buggy and malware often takes advantage of those bugs, we can do much better. To do better, we need the full support of our governments. It’s just shameful that in 2021 the U.S. government as well as many foreign governments in the Five Eyes and elsewhere are more interested in their own easy, surreptitious access to our devices than they are in the actual security of our devices. A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity. By this measure, the countries of the world, especially those who view themselves as leaders in cybersecurity, are currently failing.
It now seems painfully obvious that we need international cooperation in support of strong encryption and device security. Countries should be holding themselves and each other to account when they pressure device manufacturers to dumb down or back door our devices and when they hoard zero days and other attacks rather than ensuring that those security holes are promptly fixed. We also need governments to hold each other to the “necessary and proportionate” requirement of international human rights law for evaluating surveillance, and these limits must apply whether that surveillance is done for law enforcement or national security purposes. And the US, EU, and others must put diplomatic pressure on the countries where these immoral spyware companies are headquartered to stop the sale of hacking gear to governments that use it to commit human rights abuses. At this point, many of these companies -- Cellebrite, NSO Group, and Candiru/Saitu -- are headquartered in Israel, and it’s time that both governments and civil society focus attention there.
Second, we can create real accountability by bringing laws and remedies around the world up to date to ensure that those impacted by state-sponsored malware have the ability to bring suit or otherwise obtain a remedy. Those who have been spied upon must be able to get redress from both the governments that do the illegal spying and the companies that knowingly provide them with the specific tools to do so. The companies whose good names are tarnished by this malware deserve to be able to stop it too. EFF has supported all of these efforts, but more is needed. Specifically:
We supported WhatsApp’s litigation against NSO Group to stop it from spoofing WhatsApp as a strategy for infecting unsuspecting victims. The Ninth Circuit is currently considering NSO’s appeal.
We sought direct accountability for foreign governments who spy on Americans in the U.S. in Kidane v. Ethiopia. We argued that foreign countries who install malware on Americans’ devices should be held to account, just as the U.S. government would be if it violated the Wiretap Act or any of the other many applicable laws. We were stymied by a cramped reading of the law in the D.C. Circuit -- the court wrongly decided that the fact that the malware was sent from Ethiopia rather than from inside the U.S. triggered sovereign immunity. That dangerous ruling should be corrected by other courts or Congress should clarify that foreign governments don’t have a free pass to spy on people in America. NSO Group says that U.S. telephone numbers (that start with +1) are not allowed to be tracked by its service, but Americans can and do have foreign-based telephones and regardless, everyone in the world deserves human rights and redress. Countries around the world should step up to make sure their laws cover state sponsored malware attacks that occur in their jurisdiction.
We also have supported those who are seeking accountability from companies directly, including the Chinese religious minority who have been targeted using a specially-built part of the Great Firewall of China created by American tech giant Cisco.
"The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies."
Third, we must increase the pressure on these companies to make sure they are not selling to repressive regimes, and continue naming and shaming those that do. EFF’s Know Your Customer framework is a good place to start, as was the State Department’s draft guidance (which apparently was never finalized). And these promises must have real teeth. Apparently we were right in 2019 that NSO Group’s unenforceable announcement that it was holding itself to the “highest standards of ethical business” was largely a toothless public relations move. Yet while NSO is rightfully on the hot seat now, it is not the only player in this immoral market. Companies that sell dangerous equipment of all kinds must take steps to understand and limit its misuse; surveillance and malware tools used by governments are no different.
Fourth, we support former United Nations Special Rapporteur for Freedom of Expression David Kaye in calling for a moratorium on the governmental use of these malware technologies. While this is a longshot, we agree that the long history of misuse, and the growing list of resulting extrajudicial killings of journalists and human rights defenders, along with other human rights abuses, justifies a full moratorium.
These are just the start of possible remedies and accountability strategies. Other approaches may be reasonable too, but each must recognize that, at least right now, the intelligence and law enforcement communities of many countries are not defining “cybersecurity” to include actually protecting us, much less the journalists and NGOs and activists that do the risky work to keep us informed and protect our rights. We also have to understand that unless done carefully, regulatory responses like further triggering U.S. export restrictions could result in less security for the rest of us while not really addressing the problem. The NSO Group was reportedly able to sell to the Saudi regime with the permission and encouragement of the Israeli government under that country’s export regime. The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies.
Until governments around the world get out of the way and actually support security for all of us, including accountability and redress for victims, these outrages will continue. Governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous for their own citizens because a device cannot tell if the malware infecting it is from the good guys or the bad guys. This fact is just not going to go away.
We must have strong security at the start, and strong accountability after the fact if we want to get to a world where all of us can enjoy communications security. Only then will our journalists, human rights defenders and NGOs be able to do their work without fear of being tracked, watched and potentially murdered simply because they use a mobile device.
We’ve added one more day to EFF's summer membership drive! Over 900 supporters have answered the call to get the internet right by defending privacy, free speech, and innovation. It’s possible if you’re with us. Will you join EFF?
Through Wednesday, anyone can join EFF or renew their membership for as little as $20 and get a pack of issue-focused Digital Freedom Analog Postcards. Each one represents part of the fight for our digital future, from releasing free expression chokepoints to opposing biometric surveillance to compelling officials to be more transparent. We made this special-edition snail mail set to further connect you with friends or family, and to help boost the signal for a better future online—it's a team effort!
New and renewing members at the Copper level and above can also choose our Stay Golden t-shirt. It highlights your resilience through darkness and our power when we work together. And it's pretty darn fashionable, too.
Analog or digital—what matters is connection. Technology has undeniably become a significant piece of nearly all our communications, whether we are paying bills, working, accessing healthcare, or talking to loved ones. These familiar things require advanced security protocols, unrestricted access to an open web, and vigilant public advocacy. So if the internet is a portal to modern life, then our tech must also embrace civil liberties and human rights.
Boost the Signal & Free the Tubes
Why do you support internet freedom? You can advocate for a better online future just by connecting with the people around you. Here’s some sample language you can share with your circles:
Staying connected has never been more important. Help me support EFF and the fight for every tech user’s right to privacy, free speech, and digital access. https://eff.org/greetings
Twitter | Facebook | Email
It’s up to all of us to strengthen the best parts of the internet and create the future we want to live in. With people now coming of age only knowing a world connected to the web, EFF is using its decades of expertise in law and technology to stand up for the rights and freedoms that sustain modern democracy. Thank you for being part of this important work.
Support Online Rights For All
In an amicus brief filed Friday, EFF and the Internet Archive argued to the Ninth Circuit Court of Appeals that the Supreme Court’s recent decision in Van Buren v. United States shows that the federal computer crime law does not criminalize the common and useful practice of scraping publicly available information on the internet.
The case, hiQ Labs, Inc. v. LinkedIn Corp., began when LinkedIn attempted to stop its competitor, hiQ Labs, from scraping publicly available data posted by users of LinkedIn. hiQ Labs sued and, on appeal, the Ninth Circuit held that the Computer Fraud and Abuse Act (CFAA) does not prohibit this scraping.
LinkedIn asked the Supreme Court to reverse the decision. Instead, the high court sent the case back to the Ninth Circuit and asked it to take a second look, this time with the benefit of Van Buren.
Our brief points out that Van Buren instructed lower courts to use the “technical meanings” of the CFAA’s terms—not property law or generic, non-technical definitions. It’s a computer crime statute, after all. The CFAA prohibits accessing a computer “without authorization”—from a technical standpoint, that presumes there is an authorization system like a password requirement or other authentication stage.
But when any of the billions of internet users access any of the hundreds of millions of public websites, they do not risk violating federal law. There is no authentication stage between the user and the public website, so “without authorization” is an inapt concept. Van Buren used a “gates-up-or-down” analogy, and for a publicly available website, there is no gate to begin with—or at the very least, the gate is up. Our brief explains that neither LinkedIn’s cease-and-desist letter to hiQ nor its attempts to block its competitor’s IP addresses are the kind of technological access barrier required to invoke the CFAA.
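The “gates-up-or-down” framing can be made concrete with a toy model (a hypothetical sketch for illustration only, not legal analysis; the names here are invented): under the brief’s reading, “without authorization” is only a meaningful concept when a technological authentication gate exists at all.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    requires_auth: bool            # is there a technological gate (password, token) at all?
    credentials_valid: bool = False

def access_without_authorization(r: Resource) -> bool:
    """Model of the brief's argument: no gate, no 'unauthorized' access."""
    if not r.requires_auth:
        # A public website has no authentication stage; the concept is inapt.
        return False
    # A gated resource is accessed 'without authorization' only when
    # the gate is down for this user.
    return not r.credentials_valid

public_profile = Resource(requires_auth=False)        # scraping this: no CFAA gate
private_inbox = Resource(requires_auth=True)          # bypassing this gate is different
```

On this model, a cease-and-desist letter or an IP block changes neither field: it is not an authentication gate, which is the brief’s point.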
Lastly, our brief acknowledges LinkedIn’s concerns about how unbridled scraping may harm privacy online and invites the company to join growing advocacy efforts to adopt consumer and biometric privacy laws. These laws will directly address the collection of people’s sensitive information without their consent and won’t criminalize legitimate activity online.
Related Cases: hiQ v. LinkedIn
Claiming that “right-wing voices are being censored,” Republican-led legislatures in Florida and Texas have introduced legislation to “end Big Tech censorship.” They say that the dominant tech platforms block legitimate speech without ever articulating their moderation policies, that they are slow to admit their mistakes, and that there is no meaningful due process for people who think the platforms got it wrong.
They’re right.
So is everyone else
But it’s not just conservatives who have their political speech blocked by social media giants. It’s Palestinians and other critics of Israel, including many Israelis. And it’s queer people, of course. We have a whole project tracking people who’ve been censored, blocked, downranked, suspended and terminated for their legitimate speech, from punk musicians to Peanuts fans, historians to war crimes investigators, sex educators to Christian ministries.
The goat-rodeo
Content moderation is hard at any scale, but even so, the catalog of big platforms’ unforced errors makes for sorry reading. Experts who care about political diversity, harassment and inclusion came together in 2018 to draft the Santa Clara Principles on Transparency and Accountability in Content Moderation but the biggest platforms are still just winging it for the most part.
The Florida and Texas social media laws are deeply misguided and nakedly unconstitutional, but we get why people are fed up with Big Tech’s ongoing goat-rodeo of content moderation gaffes.
So what can we do about it?
Let’s start with talking about why platform censorship matters. In theory, if you don’t like the moderation policies at Facebook, you can quit and go to a rival, or start your own. In practice, it’s not that simple.
First of all, the internet’s “marketplace of ideas” is severely lopsided at the platform level, consisting of a single gargantuan service (Facebook), a handful of massive services (YouTube, Twitter, Reddit, TikTok, etc.), and a constellation of plucky, struggling, endangered indieweb alternatives.
If none of the big platforms want you, you can try to strike out on your own. Setting up your own rival platform requires that you get cloud services, anti-DDoS, domain registration and DNS, payment processing and other essential infrastructure. Unfortunately, every one of these sectors has grown increasingly concentrated, and with just a handful of companies dominating every layer of the stack, there are plenty of weak links in the chain and if just one breaks, your service is at risk.
But even if you can set up your own service, you’ve still got a problem: everyone you want to talk about your disfavored ideas with is stuck in one of the Big Tech silos. Economists call this the “network effect,” when a service gets more valuable as more users join it. You join Facebook because your friends are there, and once you’re there, more of your friends join so they can talk to you.
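The network effect can be roughed out numerically. One common formalization (Metcalfe’s law, a standard illustration rather than anything cited in this post) values a network by its possible pairwise connections:

```python
def potential_connections(users: int) -> int:
    # Metcalfe's law: pairwise links among n users scale as n*(n-1)/2
    return users * (users - 1) // 2

# A ten-fold advantage in users is roughly a hundred-fold advantage
# in possible connections -- the gravity a nascent rival must overcome.
small_network = potential_connections(1_000)    # 499,500
big_network = potential_connections(10_000)     # 49,995,000
```

The quadratic growth is why a startup with a better moderation policy but a tiny user base still loses on raw connective value.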
Setting up your own service might get you a more nuanced and welcoming moderation environment, but it’s not going to do you much good if your people aren’t willing to give up access to all their friends, customers and communities by quitting Facebook and joining your nascent alternative, not least because there’s a limit to how many services you can be active on.
Network effects
If all you think about is network effects, then you might be tempted to think that we’ve arrived at the end of history, and that the internet was doomed to be a winner-take-all world of five giant websites filled with screenshots of text from the other four.
But not just network effects
But network effects aren’t the only idea from economics we need to pay attention to when it comes to the internet and free speech. Just as important is the idea of “switching costs,” the things you have to give up when you switch away from one of the big services - if you resign from Facebook, you lose access to everyone who isn’t willing to follow you to a better place.
Switching costs aren’t an inevitable feature of large communications systems. You can switch email providers and still connect with your friends; you can change cellular carriers without even having to tell your friends because you get to keep your phone number.
The high switching costs of Big Tech are there by design. Social media may make signing up as easy as a greased slide, but leaving is another story. It’s like a roach motel: users check in but they’re not supposed to check out.
Interop vs. switching costs
Enter interoperability, the practice of designing new technologies that connect to existing ones. Interoperability is why you can access any website with any browser, and read Microsoft Office files using free/open software like LibreOffice, cloud software like Google Docs, or desktop software like Apple iWork.
An interoperable social media giant - one that allowed new services to connect to it - would bust open that roach motel. If you could leave Facebook but continue to connect with the friends, communities and customers who stayed behind, the decision to leave would be much simpler. If you don’t like Facebook’s rules (and who does?) you could go somewhere else and still reach the people that matter to you, without having to convince them that it’s time to make a move.
The ACCESS Act
That’s where laws like the proposed ACCESS Act come in. While not perfect, this proposal to force the Big Tech platforms to open up their walled gardens to privacy-respecting, consent-seeking third parties is a way forward for anyone who chafes against Big Tech’s moderation policies and their uneven, high-handed application.
Some tech platforms are already moving in that direction. Twitter says it wants to create an “app store for moderation,” with multiple services connecting to it, each offering different moderation options. We wish it well! Twitter is well-positioned to do this - it’s one tenth the size of Facebook and needs to find ways to grow.
But the biggest tech companies show no sign of voluntarily reducing their switching costs. The ACCESS Act is the most important interoperability proposal in the world, and it could be a game-changer for all internet users.
Save Section 230, save the internet
Unfortunately for all of us, many of the people who don’t like Big Tech’s moderation think the way to fix it is to eliminate Section 230, a law that promotes users' free speech. Section 230 is a rule that says you sue the person who caused the harm while organizations that host expressive speech are free to remove offensive, harassing or otherwise objectionable content.
That means that conservative Twitter alternatives can delete floods of pornographic memes without being sued by their users. It means that online forums can allow survivors of workplace harassment to name their abusers without worrying about libel suits.
If hosting speech makes you liable for what your users say, then only the very biggest platforms can afford to operate, and then only by resorting to shoot-first/ask-questions-later automated takedown systems.
Kumbaya
There’s not much that the political left and right agree on these days, but there’s one subject that reliably crosses the political divide: frustration with monopolists’ clumsy handling of online speech.
For the first time, there’s a law before Congress that could make Big Tech more accountable and give internet users more control over speech and moderation policies. The promise of the ACCESS Act is an internet where if you don’t like a big platform’s moderation policies, if you think they’re too tolerant of abusers or too quick to kick someone off for getting too passionate during a debate, you can leave, and still stay connected to the people who matter to you.
Killing CDA 230 won’t fix Big Tech (if that was the case, Mark Zuckerberg wouldn’t be calling for CDA 230 reform). The ACCESS Act won’t either, by itself -- but by making Big Tech open up to new services that are accountable to their users, the ACCESS Act takes several steps in the right direction.
“You can record all you want. I just know it can’t be posted to YouTube,” said an Alameda County sheriff’s deputy to an activist. “I am playing my music so that you can’t post on YouTube.” The tactic didn’t work—the video of his statement can in fact, as of this writing, be viewed on YouTube. But it’s still a shocking attempt to thwart activists’ First Amendment right to record the police—and a practical demonstration that cops understand what too many policymakers do not: copyright can offer an easy way to shut down lawful expression.
This isn’t the first time this year this has happened. It’s not even the first time in California this year. Filming police is an invaluable tool, for basically anyone interacting with them. It can provide accountability and evidence of what occurred outside of what an officer says occurred. Given this country’s longstanding tendency to believe police officers’ word over almost anyone else’s, video of an interaction can go a long way to getting to the truth.
Very often, police officers would prefer not to be recorded, but there’s not much they can do about that legally, given strong First Amendment protections for the right to record. But some officers are trying to get around this reality by making it harder to share recordings on many video platforms: they play music so that copyright filters will flag the video as potentially infringing. Copyright allows these cops to brute force their way past the First Amendment.
Large rightsholders—the major studios and record labels—and their lobbyists have done a very good job of divorcing copyright from debates about speech. The debate over the merits of the Digital Millennium Copyright Act (DMCA) is cast as “artists versus Big Tech.” But we must not forget that, at its core, copyright is a restriction on, as well as an engine for, expression.
Many try to cast the DMCA just as a tool to protect the rights of artists, since in theory it is meant to stop infringement. But the law is also a tool that makes it incredibly simple to remove lawful speech from the internet. The fair use doctrine ensures that copyright can exist in harmony with the First Amendment. But often, the debate gets wrapped up in who has the right to make a living doing what kind of art, and it becomes easy to forget how mechanisms to enforce copyright can actually restrict lawful speech.
Forgetting all of this serves the purpose of those who advocate for the broader use of copyright filters on the internet. And where those filters are voluntarily deployed by companies, they replace a fair use analysis. So a filter that automatically blocks a video for playing a few seconds of a song becomes a useful tool for police officers who do not want to be subject to video-based accountability. What’s the harm in automating the identification and removal of things that have copyrighted material in them? The harm is that you are often removing lawful speech.
It’s as easy to play a song out of your phone as it is to film with it. Easier, even. And copyright filters work by checking if something in an uploaded video matches any of the copyrighted material in its database. A few seconds of a certain song in the audio of a video could prevent that video from being uploaded. That’s the thing the cops in these stories are recognizing. And while it’s funny to see a cop playing Taylor Swift and claiming we can’t watch a video on YouTube that we are actually watching on YouTube, how many of these stories aren’t we hearing about? We know, without a doubt, that YouTube’s filter, Content ID, is very sensitive to music. And some singers and companies have YouTube’s filter set to automatically remove, rather than just demonetize, uploads with parts of their songs in them. Since YouTube is so dominant when it comes to video sharing, knowing how to game Content ID can be very effective in silencing others.
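The matching step can be sketched very loosely in code (a deliberately crude stand-in: real systems like Content ID use robust audio fingerprints such as spectrogram peak hashes, and every name and parameter below is invented for illustration). The filter hashes overlapping windows of a reference track and flags any upload sharing enough windows, which is exactly why a few seconds of background music is enough:

```python
import hashlib

def fingerprints(samples, window=4):
    # Hash overlapping windows of the signal as a crude fingerprint set.
    return {
        hashlib.sha256(bytes(samples[i:i + window])).hexdigest()
        for i in range(len(samples) - window + 1)
    }

def flags_as_match(upload, reference, threshold=3):
    # The filter flags the upload if enough windows match the reference.
    shared = fingerprints(upload) & fingerprints(reference)
    return len(shared) >= threshold

song = list(range(40))                                     # stand-in for a copyrighted track
protest_video = [7, 9, 250] + song[10:22] + [3, 1]         # a few seconds of song in the background
home_movie = [200, 201, 202, 203, 204, 205]                # no overlap with the song
```

Here `flags_as_match(protest_video, song)` is true even though the song is only a small fraction of the recording, which is the property the officers in these stories are exploiting.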
When a story like this gets press attention, the video at issue won’t disappear because everyone recognizes the importance of the speech at issue. Neither the platform nor the record label is going to take down the video of the cop playing Taylor Swift. But countless videos never make it past the filters, and so never get public attention. Many activists don’t know what to do about a copyright claim. They may not want to share their name and contact information, as is required for both DMCA counternotices and challenges to Content ID. Or, when faced with the labyrinthine structure of YouTube’s appeals system, they may just give up.
As the saying goes, we don’t know what we don’t know. Hopefully, these stories help others recognize and fight this devious tactic. If you have similar stories of police officers using this tactic, please let EFF know by emailing firstname.lastname@example.org.
It’s no longer science fiction or unreasonable paranoia. Now, it needs to be said: No, police must not be arming land-based robots or aerial drones. That’s true whether these mobile devices are remote controlled by a person or autonomously controlled by artificial intelligence, and whether the weapons are maximally lethal (like bullets) or less lethal (like tear gas).
Police currently deploy many different kinds of moving and task-performing technologies. These include flying drones, remote control bomb-defusing robots, and autonomous patrol robots. While these different devices serve different functions and operate differently, none of them--absolutely none of them--should be armed with any kind of weapon.
Mission creep is very real. Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime. For example, cell site simulators (often called “Stingrays”) were developed for use in foreign battlefields, brought home in the name of fighting “terrorism,” then used by law enforcement to catch immigrants and a man who stole $57 worth of food. Likewise, police have targeted BLM protesters with face surveillance and Amazon Ring doorbell cameras.
Today, scientists are developing an AI-enhanced autonomous drone, designed to find people during natural disasters by locating their screams. How long until police use this technology to find protesters shouting chants? What if these autonomous drones were armed? We need a clear red line now: no armed police drones, period.
The Threat is Real
There are already law enforcement robots and drones of all shapes, sizes, and levels of autonomy patrolling the United States as we speak: from autonomous Knightscope robots prowling for “suspicious behavior” and collecting images of license plates and phone identifying information, to Boston Dynamics robotic dogs accompanying police on calls in New York or checking the temperature of unhoused people in Honolulu, to Predator surveillance drones flying over BLM protests in Minneapolis.
We are moving quickly towards arming such robots and letting autonomous artificial intelligence determine whether or not to pull the trigger.
According to a Wired report earlier this year, the U.S. Defense Advanced Research Projects Agency (DARPA) in 2020 hosted a test of autonomous robots to see how quickly they could react in a combat simulation and how much human guidance they would need. News of this test comes only weeks after the federal government’s National Security Commission on Artificial Intelligence recommended the United States not sign international agreements banning autonomous weapons. “It is neither feasible nor currently in the interests of the United States,” asserts the report, “to pursue a global prohibition of AI-enabled and autonomous weapon systems.”
In 2020, the Turkish military deployed Kargu, a fully autonomous armed drone, to hunt down and attack Libyan battlefield adversaries. Autonomous armed drones have also been deployed (though not necessarily used to attack people) by the Turkish military in Syria, and by the Azerbaijani military in Armenia. While we have yet to see autonomous armed robots or drones deployed in a domestic law enforcement context, wartime tools used abroad often find their way home.
The U.S. government has become increasingly reliant on armed drones abroad. Many police departments seem to purchase every expensive new toy that hits the market. The Dallas police have already killed someone by strapping a bomb to a remote-controlled bomb-disarming robot.
So activists, politicians, and technologists need to step in now, before it is too late. We cannot allow a time lag between the development of this technology and the creation of policies governing whether police may buy, deploy, or use armed robots. Rather, we must ban police from arming robots, whether in the air or on the ground, whether automated or remotely controlled, whether lethal or less lethal, and in any other yet-unimagined configuration.

No Autonomous Armed Police Robots
Whether they’re armed with a taser, a gun, or pepper spray, autonomous robots would make split-second decisions about taking a life, or inflicting serious injury, based on a set of computer programs.
But police technologies malfunction all the time. For example, false positives are frequently generated by face recognition technology, audio gunshot detection, and automatic license plate readers. When this happens, the technology deploys armed police to a situation where they may not be needed, often leading to wrongful arrests and excessive force, especially against people of color erroneously identified as criminal suspects. If the malfunctioning police technology were armed and autonomous, that would create a far more dangerous situation for innocent civilians.
When, inevitably, a robot unjustifiably injures or kills someone, who will be held responsible? Holding police accountable for wrongfully killing civilians is already hard enough. In the case of a bad automated decision, who gets held responsible? The person who wrote the algorithm? The police department that deployed the robot?
Autonomous armed police robots might become one more way for police to skirt or redirect the blame for wrongdoing and avoid making any actual changes to how police function. Debate might bog down in whether to tweak the artificial intelligence guiding a killer robot’s decision making. Further, technology deployed by police is usually created and maintained by private corporations. A transparent investigation into a wrongful killing by an autonomous machine might be blocked by assertions of the company’s supposed need for trade secrecy in its proprietary technology, or by finger-pointing between police and the company. Meanwhile, nothing would be done to make people on the streets any safer.
MIT Professor and cofounder of the Future of Life Institute Max Tegmark told Wired that AI weapons should be “stigmatized and banned like biological weapons.” We agree. Although its mission is much more expansive than the concerns of this blog post, you can learn more about what activists have been doing around this issue by visiting the Campaign to Stop Killer Robots.
Even where police have remote control over armed drones and robots, the grave dangers to human rights are far too great. Police routinely over-deploy powerful new technologies in already over-policed Black, Latinx, and immigrant communities. Police also use them too often as part of the United States’ immigration enforcement regime, and to monitor protests and other First Amendment-protected activities. We can expect more of the same with any armed robots.
Moreover, armed police robots would probably increase the frequency of excessive force against suspects and bystanders. A police officer on the scene generally will have better information about unfolding dangers and opportunities to de-escalate than an officer miles away looking at a laptop screen. And a remote officer might have less empathy for the human target of mechanical violence.
Further, hackers will inevitably try to commandeer armed police robots. They already have succeeded at taking control of police surveillance cameras. The last thing we need is foreign governments or organized criminals seizing command of armed police robots and aiming them at innocent people.
Armed police robots are especially menacing at protests. The capabilities of police to conduct crowd control by force are already too great. Just look at how the New York City Police Department has had to pay out hundreds of thousands of dollars to settle a civil lawsuit concerning police using a Long Range Acoustic Device (LRAD) punitively against protestors. Police must never deploy taser-equipped robots or pepper spray spewing drones against a crowd. Armed robots would discourage people from attending protests. We must de-militarize our police, not further militarize them.
We need a flat-out ban on armed police robots, even if their use might at first appear reasonable in uncommon circumstances. In Dallas in 2016, police strapped a bomb to an explosive-defusing robot in order to kill a gunman hiding inside a parking garage who had already killed five police officers and shot seven others. Normalizing armed police robots poses too great a threat to the public to allow their use even in extenuating circumstances. Police have proven time and time again that technologies meant only for the most extreme circumstances inevitably become commonplace, even at protests.

Conclusion
Whether controlled by an artificial intelligence or a remote human operator, armed police robots and drones pose an unacceptable threat to civilians. It’s exponentially harder to remove a technology from the hands of police than prevent it from being purchased and deployed in the first place. That’s why now is the time to push for legislation to ban police deployment of these technologies. The ongoing revolution in the field of robotics requires us to act now to prevent a new era of police violence.
The Tower of Babel: How Public Interest Internet is Trying to Save Messaging and Banish Big Social Media
This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our earlier installments.
How many messaging services do you use? Slack, Discord, WhatsApp, Apple iMessage, Signal, Facebook Messenger, Microsoft Teams, Instagram, TikTok, Google Hangouts, Twitter Direct Messages, Skype? Our families, friends and co-workers are scattered across dozens of services, none of which talk to each other. Without even trying, you can easily amass 40 apps on your phone that let you send and receive messages. The numbers aren't dropping.
This isn’t the first time we’ve been in this situation. Back in the 2000s, users were asked to choose between MSN, AOL, ICQ, IRC and Yahoo! Messenger, many of which would be embedded in other, larger services. Programs like Pidgin and Adium collected your contacts in one place, and allowed end-users some independence from being locked in by one service - or worse, having to choose which friends you care enough about to join yet another messaging service.
So, the proliferation of messaging services isn’t new. What is new is the interoperability environment. Companies like Google and Facebook - who once supported interoperable protocols, even using the same chat protocol - now spurn them. Even upstarts like Signal try to dissuade developers from building their own, unofficial clients.
Finding a way to splice together all these services might make a lot of internet users happy, but it won’t thrill investors or tempt a giant tech company to buy your startup. The only form of recognition guaranteed to anyone who tries to untangle this knot is legal threats - lots of legal threats.
But that hasn't stopped the voluntary contributors of the wider public interest internet.
Take Matterbridge, a free/open software project that promises to link together "Discord, Gitter, IRC, Keybase, Matrix, Mattermost, MSTeams, Rocket.Chat, Slack, Telegram, Twitch, WhatsApp, XMPP, Zulip". This is a thankless task that requires its contributors to understand (and, at times, reverse-engineer) many protocols. It’s hard work, and it needs frequent updating as all these protocols change. But they're managing it, and providing the tools to do it for free.
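The core idea behind a bridge like this is simple, even if the protocol work is not: wrap each chat service in a uniform interface, then relay every incoming message to all the other services in the gateway. Here is a minimal sketch of that pattern in Python; the `Backend` and `Bridge` classes are hypothetical illustrations, not Matterbridge's actual code, which handles each real protocol's quirks.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    channel: str
    author: str
    text: str

class Backend:
    """One chat service (IRC, Slack, ...) behind a uniform interface."""
    def __init__(self, name: str):
        self.name = name
        self.sent: list[Message] = []          # messages delivered into this service
        self._handler: Callable[[Message], None] = lambda m: None

    def on_message(self, handler: Callable[[Message], None]) -> None:
        self._handler = handler

    def receive(self, msg: Message) -> None:   # called by the protocol layer
        self._handler(msg)

    def send(self, msg: Message) -> None:      # deliver a message into this service
        self.sent.append(msg)

class Bridge:
    """Relay every message from one backend to all the others."""
    def __init__(self, backends: list[Backend]):
        self.backends = backends
        for b in backends:
            b.on_message(lambda m, src=b: self.relay(src, m))

    def relay(self, source: Backend, msg: Message) -> None:
        for b in self.backends:
            if b is not source:
                # Tag the author with the originating service, as bridges often do
                b.send(Message(msg.channel, f"{msg.author}@{source.name}", msg.text))

irc, slack = Backend("irc"), Backend("slack")
Bridge([irc, slack])
irc.receive(Message("#general", "alice", "hello"))
print(slack.sent[0].author)   # alice@irc
```

The hard part Matterbridge actually solves lives behind the `receive` and `send` calls: authenticating to each service, translating formatting, and keeping up as every proprietary protocol shifts underneath.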
Intriguingly, some of the folks working in this area are the same ones who dedicated themselves to wiring together different messenger services in the 2000s, and they’re still plugging away at it. You can watch one of Pidgin's lead developers live-coding on Twitch, repurposing the codebase for a new age.
Pidgin was able to survive for a long time in the wilderness, thanks to institutional support from “Instant Messaging Freedom,” a non-profit that manages its limited finances, and makes sure that even if the going is slow, it never stops. IMF was started in the mid-2000s after AOL threatened the developers of Pidgin, then called GAIM. Initially intended as a legal defense organization, it stuck around to serve as a base for the service operations.
We asked Pidgin’s Gary Kramlich about his devotion to the project. Kramlich quit his job in 2019 and lived off his savings while undertaking a serious refactoring of Pidgin’s code, something he plans to keep up until September when he will run out of money and have to return to paid work.
“It's all about communication and bringing people together, allowing them to talk on their terms. That's huge. You shouldn't need to have 30GB of RAM to run all your chat clients. Communications run on network effects. If the majority of your friends use a tool and you don’t like it, your friends will have to take an extra step to include you in the conversation. That forces people to choose between their friends and the tools that suit them best. A multi-protocol client like Pidgin means you can have both.”
Many public interest internet projects reflect this pattern: spending years working in relative obscurity on topics that require concentrated work, but with little immediate reward, under a cloud of legal risk that scares off commercial ventures. This kind of work is, by definition, work for the public good.
After years of slow, patient, unglamorous work, the moment that Pidgin, Matterbridge and others laid the groundwork for has arrived. Internet users are frustrated beyond the breaking point by the complexity of managing multiple chat and message services. Businesses are taking notice.
This is a legally risky bet, but it’s a shrewd one. After decades of increasing anti-interoperability legal restrictions, the law is changing for the better. In an attempt to break the lock-in of the big messaging providers, the U.S. Congress and the EU are considering compulsory interoperability laws that would make these developers' work far easier - and legally safer.
Interoperability is an idea whose time has come. Frustrated by pervasive tracking and invasive advertising, free software developers have built alternative front-ends to sites like YouTube, Instagram and Twitter. Coders are sick of waiting for the services they pay to use to add the features they need, so they’re building alternative clients for Spotify and Reddit.
These tools are already accomplishing the goals that regulators have set for themselves as part of the project of taming Big Tech. The public interest internet is giving us tracking-free alternatives, interoperable services, and tools that put user needs and human thriving before “engagement” and “stickiness.”
Interoperability tools are more than a way to reskin or combine existing services - they’re also ways to create full-fledged alternatives to the incumbent social media giants. For example, Mastodon is a Twitter competitor built on an open protocol that lets millions of servers and multiple custom front-ends interconnect with one another (Peertube does the same for video).
These services are thriving, with a userbase in the seven digits, but they still struggle to convince the average creator or user on Facebook or YouTube to switch, thanks to the network effects these centralised services benefit from. A YouTube creator might hate the company’s high-handed moderation policies and unpredictable algorithmic recommendations, but they still use YouTube because that’s where all the viewers are. Every time a creator joins YouTube, they give viewers another reason to keep using YouTube. Every time a viewer watches something on YouTube, they give creators another reason to post their videos to YouTube.
With interoperable clients, those network effects are offset by lower “switching costs.” If you can merge your Twitter and Mastodon feeds into one Mastodon client, then it doesn’t matter if you’re a “Mastodon user” or a “Twitter user.” Indeed, if your Twitter friends can subscribe to your Mastodon posts, and if you can use Mastodon to read their Twitter posts, then you don’t lose anything by switching away from Twitter and going Mastodon-exclusive. In fact, you might gain by doing so, because your Mastodon server might have features, policies and communities that are better for you and your needs than Twitter’s - which has to satisfy hundreds of millions of use-cases - can ever be.
Indeed, it seems that Twitter's executives have already anticipated this future, with their support for BlueSky, an internal initiative to accelerate this interoperability so that they can be best placed to survive it.
Right now, at this very moment, there are hundreds, if not thousands, of developers, supporting millions of early adopters in building a vision of a post-Facebook world, constructed in the public interest.
Yet these projects are very rarely mentioned in policy circles, nor do they receive political or governmental support. They are never given consideration when new laws about intermediary liability, extremist or harmful content, or copyright are enacted. If a public institution ever considers them, it’s almost always the courts, as the maintainers of these projects struggle with legal uncertainty and bowel-looseningly terrifying lawyer-letters demanding that they stop pursuing the public good. If the political establishment really wants to unravel big tech, it should be working with these volunteers, not ignoring or opposing them.
This is the fifth post in our blog series on the public interest internet. Read more in the series:
- Introducing the Public Interest Internet
- The Enclosure of the Public Interest Internet
- Outliving Outrage on the Public Interest Internet: the CDDB Story
- Organizing in the Public Interest: MusicBrainz
- The Tower of Babel: How Public Interest Internet is Trying to Save Messaging and Banish Big Social Media
Article 17 Copyright Directive: The Court of Justice’s Advocate General Rejects Fundamental Rights Challenge But Defends Users Against Overblocking
The Advocate General (AG) of the EU Court of Justice today missed an opportunity to fully protect internet users from censorship by automated filtering, finding that the disastrous Article 17 of the EU Copyright Directive doesn’t run afoul of Europeans’ free expression rights.
The good news is that the AG’s opinion, a non-binding recommendation for the EU Court of Justice, defends users against overblocking, warning social media platforms and other content hosts that they are not permitted to automatically block lawful speech. The opinion also rejects the idea that content hosts should be “turned into judges of online legality, responsible for coming to decisions on complex copyright issues.”
On its face, Article 17 would allow online platforms to be held liable for unlawful user content unless they act as copyright cops and bend over backwards to ensure infringing content is not available on their platforms. EFF has repeatedly stressed that such liability regimes will lead to upload filters, which are prone to error, unaffordable for all but the largest companies, and undermine fundamental rights of users. Simply put, people will be unable to freely speak and share opinions, criticisms, photos, videos, or art if they are subjected to a black box programmed by algorithms to make potentially harmful automated takedown decisions.
Today’s opinion, while milder than we had hoped, could help mitigate that risk. Briefly, the AG acknowledges that Article 17 interferes with users’ freedom of expression rights, as providers are required to preventively filter and block user content that unlawfully infringes copyrights. The AG found that users were not free to upload whatever content they wish - Article 17 had the “actual effect” of requiring platforms to filter their users’ content. However, the AG concludes that, thanks to safeguards contained in Article 17, the interference with free speech was not quite strong enough to be incompatible with the EU’s Charter of Fundamental Rights.
Here’s the slightly more detailed version: The EU Copyright Directive recognizes the right to legitimate uses of copyright-protected material, including the right to rely on exceptions and limitations for content such as reviews or parody. The AG opinion acknowledges that these protections are enforceable and stresses the importance of out-of-court redress mechanisms and effective judicial remedies for users. The AG points out that Article 17 grants users ex ante protection, that is, protection at the moment they upload content, which limits permissible filtering and blocking measures. Hence, in contrast to several EU Member States that have ignored the fundamental rights perspective altogether, the AG interprets Article 17 as requiring content hosts to pay strong attention to user rights safeguards and legitimate uses.
As the Republic of Poland submits, complex issues of copyright relating, inter alia, to the exact scope of the exceptions and limitations cannot be left to those providers. It is not for those providers to decide on the limits of online creativity, for example by examining themselves whether the content a user intends to upload meets the requirements of parody. Such delegation would give rise to an unacceptable risk of ‘over-blocking’. Those questions must be left to the court.
The AG reaffirms the “ban of mandated general monitoring” of user content, which is an important principle under EU law, and rejects an interpretation of Article 17 in which providers are “turned into judges of online legality, responsible for coming to decisions on complex copyright issues.” To minimize the risk of overblocking legitimate user content, platform providers should only actively detect and block manifestly infringing content, meaning content that is “identical or equivalent” to the information provided by rightsholders, the AG opinion says. Such content could be presumed illegal. By contrast, in all ambiguous situations potentially covered by exceptions and limitations to copyright, such as transformative works or parody, priority must be given to freedom of expression and preventive blocking is not permitted.
While the AG’s approach reduces the risk of overblocking, it unfortunately permits mandated upload filters in principle. The opinion fails to acknowledge the limits of technical solutions and could, in practical terms, make error-prone copyright matching tools, such as those used by YouTube, a legal standard. It’s also unfortunate that the AG considers the safeguards set out by Article 17 sufficient, trusting that a user-friendly implementation by national lawmakers or interpretation by courts will do the trick.
These flaws aside, the opinion is a welcome clarification that there are limits to the use of upload filters. It should serve as a warning to Member States that, without sufficient user safeguards, national laws will undermine the “essence” of the right to freedom of expression. This is good news for users and bad news for States such as France or the Netherlands, whose laws implementing Article 17 offer far too little protection for legitimate uses of copyright.
The opinion is the result of a legal challenge by the Republic of Poland, questioning the compatibility of Article 17 with the Charter of Fundamental Rights of the European Union. The case now goes to the Court of Justice for final judgment.
On May 12, the UK government published a draft of its Online Safety Bill, which attempts to tackle illegal and otherwise harmful content online by placing a duty of care on online platforms to protect their users from such content. The move came as no surprise: over the past several years, UK government officials have expressed concerns that online services have not been doing enough to tackle illegal content, particularly child sexual abuse material (commonly known as CSAM) and unlawful terrorist and extremist content (TVEC), as well as content the government has deemed lawful but “harmful.” The new Online Safety Bill also builds upon the government’s earlier proposals to establish a duty of care for online providers laid out in its April 2019 White Paper and its December 2020 response to a consultation.
EFF and OTI submitted joint comments as part of that consultation on the Online Harms White Paper in July 2019, pushing the government to safeguard free expression as it explored developing new rules for online content. Our views have not changed: while EFF and OTI believe it is critical that companies increase the safety of users on the internet, the recently released draft bill reflects serious threats to freedom of expression online, and must be revised. In addition, although the draft features some notable transparency provisions, these could be expanded to promote meaningful accountability around how platforms moderate online content.

Our Views Have Not Changed: Broad and Vague Notion of Harmful Content
The bill is broad in scope, covering not only “user-to-user services” (companies that enable users to generate, upload, and share content with other users), but also search engine providers. The new statutory duty of care will be overseen by the UK Office of Communications (OFCOM), which has the power to impose steep fines and to block access to sites. Among the core issues that will determine the bill’s impact on freedom of speech is the concept of “harmful content.” The draft bill opts for a broad and vague notion of harmful content: content that could reasonably, from the perspective of the provider, have a “significant adverse physical or psychological impact” on users. The great subjectivity involved in complying with the duty of care poses a risk of overbroad removal of speech and inconsistent content moderation.
The bill’s “illegal content duties” comprise the obligations of platform operators to minimize the presence of so-called “priority illegal content,” to be defined through future regulation, and a requirement to take down any illegal content upon becoming aware of it. The draft bill thus departs from the EU’s e-Commerce Directive (and the proposed Digital Services Act), which abstained from imposing affirmative removal obligations on platforms. On the question of what constitutes illegal content, platforms are put first in line as arbiters of speech: content is deemed illegal if the service provider has “reasonable grounds” to believe that the content in question constitutes a relevant criminal offence.
The bill also places undue burden on smaller platforms, raising significant concerns that it could erode competition in the online market. Although the bill distinguishes between large platforms (“Category 1”) and smaller platforms (“Category 2”) when apportioning responsibilities, it does not include clear criteria for how a platform would be categorized. Rather, the bill provides that the Secretary of State will decide how a platform is categorized. Without clear criteria, smaller platforms could be miscategorized and required to meet the bill’s more granular transparency and accountability standards. While all platforms should strive to provide adequate and meaningful transparency to their users, it is also important to recognize that certain accountability processes require a significant amount of resources and labor, and platforms that have large user bases do not necessarily also have access to corresponding resources. Platforms that are miscategorized as larger platforms may not have the resources to meet more stringent requirements or pay the corresponding fines, putting them at a significant disadvantage. The UK government should therefore provide greater clarity around how platforms would be categorized for the purposes of the draft bill, to provide companies sufficient notice of their responsibilities.
Lastly, the draft bill contains some notable transparency and accountability provisions. For example, it requires providers to issue annual transparency reports using guidance provided by OFCOM. In addition, the bill seeks to respond to previous concerns around freedom of expression online by requiring platforms to conduct risk assessments around their moderation of illegal content, and it requires OFCOM to also issue a transparency report which summarizes insights and best practices garnered from company transparency reports. These are good first steps, especially considering the fact that governments are increasingly using legal channels to request that companies remove harmful and illegal content.
However, it is important for the UK government to recognize that a one-size-fits-all approach to transparency reporting does not work, and often prevents companies from highlighting trends and data points that are most relevant to the subject at hand. In addition, the structure of the OFCOM transparency report suggests that it would mostly summarize insights, rather than provide accountability around how internet platforms and governments work together to moderate content online. Further, the draft bill does not significantly incorporate features such as providing users with notice and appeals process for content decisions, despite robust advocacy by content moderation and freedom of expression experts. Adequate notice and appeals are integral to ensuring that companies are providing transparency and accountability around their content moderation efforts, and are key components of the Santa Clara Principles for Transparency and Accountability in Content Moderation, of which EFF and OTI were among the original drafters and endorsers.

The UK Government Should Revise the Draft Bill To Protect Freedom of Speech
As social media platforms continue to play an integral role in information sharing and communications globally, governments around the world are taking steps to push companies to remove illegal and harmful content. The newly released version of the UK Government’s Online Safety Bill is the latest example of this, and it could have a significant impact in the UK and beyond. While well-intentioned, the bill raises some serious concerns around freedom of expression online, and it could do more to promote responsible and meaningful transparency and accountability. We strongly encourage the UK government to revise the current draft of the bill to better protect freedom of speech and more meaningfully promote transparency.
This post was co-written with Spandana Singh, Open Technology Institute (OTI).
Clearview AI extracts faceprints from billions of people, without their consent, and uses these faceprints to help police identify suspects. This does grave harm to privacy, free speech, information security, and racial justice. It also violates the Illinois Biometric Information Privacy Act (BIPA), which prohibits a company from collecting a person’s biometric information without first obtaining their opt-in consent.
Clearview now faces many BIPA lawsuits. One was brought by the ACLU and ACLU of Illinois in state court. Many others were filed against the company in federal courts across the country, and then consolidated into one federal courtroom in Chicago. In both Illinois and federal court, Clearview argues that the First Amendment bars these BIPA claims.
We disagree. Last week, we filed an amicus brief in the federal case, arguing that applying BIPA to Clearview’s faceprinting does not offend the First Amendment. Last fall, we filed a similar amicus brief in the Illinois state court case.
EFF has a longstanding commitment to protecting both speech and privacy at the digital frontier, and these cases bring these values into tension. Faceprinting raises some First Amendment interests, because it involves collecting and creating information for purposes of later expression. However, as practiced by Clearview, this faceprinting does not enjoy the highest level of First Amendment protection, because it does not concern speech on a public matter, and the company’s interests are solely economic. Under the correct First Amendment test, Clearview may not ignore BIPA, because there is a close fit between BIPA’s goals (protecting privacy, speech, and information security) and its means (requiring opt-in consent).
A growing number of law enforcement agencies have used face surveillance to target Black Lives Matter protesters, including the U.S. Park Police, the U.S. Postal Inspection Service, and local police in Boca Raton, Broward County, Fort Lauderdale, Miami, New York City, and Pittsburgh. So Clearview is not the only party whose First Amendment interests are implicated by these BIPA enforcement lawsuits.
You might also be interested in the First Amendment arguments, recently filed in the federal lawsuit against Clearview, from the plaintiffs, the ACLU and ACLU of Illinois amici, and the Georgetown Law Center on Privacy & Technology amicus.
The seemingly endless battle against copyright infringement has caused plenty of collateral damage. But now that damage is reaching new levels, as copyright holders target providers of basic internet services. For example, Sony Music has persuaded a German court to order a Swiss domain name service (DNS) provider, Quad9, to block a site that simply indexes other sites suspected of copyright infringement. Quad9 has no special relationship with any of the alleged infringers. It simply resolves domain names, conveying the public information of which web addresses direct to which server, on the public internet, like many other service providers. In other words, Quad9 isn’t even analogous to an electric company that provides service to a house where illegal things might happen. Instead, it’s like a GPS service that simply helps you find a house where you can learn about other houses where illegal things might happen.
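To see how little a resolver like Quad9 has to do with any content, it helps to remember that its entire job reduces to answering one question: which addresses does this name map to? A minimal sketch using Python's standard library (this queries whatever resolver the local system is configured to use, not Quad9 specifically):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the IPv4 addresses a name maps to - the whole job of a
    public resolver, reduced to one lookup against the system's resolver."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry's sockaddr is an (address, port) tuple; keep unique addresses
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))
```

The resolver never sees, hosts, or transmits the content at the resulting address; it answers name-to-address questions, which is what makes site-blocking orders against it such an odd fit.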
This order is profoundly dangerous for several reasons. In the U.S. context, where injunctions like these are usually tied to specious claims of conspiracy, we have long argued that intermediaries which bear no meaningful relationship to the alleged infringement, and cannot therefore be held liable for it, should not be subject to orders like these in the first place. Courts do not have unlimited power; rather, judges should confine their orders to persons that are plausibly accused of infringement or acting in concert with infringers.
Second, orders like these create a moderator’s dilemma. Quad9 faces this order in large part because it provides a valuable service: blocking sites that pose technical threats. Sony argues that if Quad9 can block sites for technical threats, it can block them for copyright “threats” as well. As Quad9 rightly observes:
The assertion of this injunction is, in essence, that if there is any technical possibility of denying access to content by a specific party or mechanism, then it is required by law that blocking take place on demand, regardless of the cost or likelihood of success. If this precedent holds, it will appear again in similar injunctions against other distant and uninvolved third parties, such as anti-virus software, web browsers, operating systems, IT network administrators, DNS service operators, and firewalls, to list only a few obvious targets.
If you build it, they will come, and their demands will discourage intermediaries from offering services like these at all – to the detriment of internet users.
Third, orders like these are hopelessly overinclusive. Blocking entire sites inevitably means blocking content that is perfectly lawful. Moreover, courts may not carefully scrutinize the claims; keep in mind that U.S. authorities once persuaded a court to let them seize a popular music website for over a year, based solely on the say-so of a music industry association. To try to avoid that kind of disruption, some intermediaries might feel compelled to block preemptively. But the entire history of copyright lobbying shows that this tactic will not work: copyright maximalists are never satisfied. The only way to avoid the pressure is to insist that copyright enforcement, and other forms of content moderation, happen at the right level of the internet stack.
Fourth, as the above suggests, blocking at the infrastructure level imports all of the flaws we see with content moderation at the platform level, and makes them even worse. The complete infrastructure of the internet, or the “full stack,” spans intermediaries from consumer-facing platforms like Facebook or Pinterest to ISPs like Comcast or AT&T. Somewhere in the middle sits a wide array of other intermediaries, such as infrastructure providers like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.
For most of us, this stack is nearly invisible. We send email, tweet, post, upload photos and read blog posts without thinking about all the services that have to function correctly to get the content from creators to users all over the world. We may think about our ISP when it gets slow or breaks, but most of us don’t think about AWS at all. We are more aware of the content moderation decisions—and mistakes—made by the consumer-facing platforms.
We have detailed many times the chilling effects on speech and the other problems caused by opaque, bad, or inconsistent content moderation decisions from companies like Facebook. But when ISPs or intermediaries are forced to wade into the game and start blocking certain users and sites, it’s far worse. For one thing, many of these services have few, if any, competitors. For example, many people in the United States and overseas only have one choice for an ISP. If the only broadband provider in your area cuts you off because they (or your government) don’t like what you said online—or what some other user of the account said—you may lose access to a wide array of crucial services and information, like jobs, education, and health. And again, at the infrastructure level, providers usually cannot target their response narrowly. Twitter can shut down individual accounts; AWS can only deny service to the entire site, shutting down all speech including that which is entirely unobjectionable. And that is exactly why ISPs and intermediaries need to stay away from this fight if they can – and courts shouldn’t force them to do otherwise. The risks from getting it wrong at the infrastructure level are far too great.
European policymakers have recognized these risks. As the EU Commission recently stated in its impact assessment for the Digital Services Act, actions taken in these cases can effectively disable access to entire services. Nevertheless, injunctions requiring infrastructure providers to block access to allegedly copyright-infringing websites are on the rise, while freedom of expression and information rights often take a back seat.
Finally, as we have already seen, these kinds of orders don’t stop with copyright enforcement – instead, copyright policing frequently serves as a model that is leveraged to shut down all kinds of content.
While EFF does not practice law in German courts, we urge allies in the EU to support Quad9 and push back against this dangerous order. Copyright enforcement is no excuse for suppressing basic, legitimate, and beneficial internet operations.
We at EFF are devastated to learn of the passing of Sherwin Siy. He was a brilliant advocate and strategist who was dedicated to protecting and preserving the internet as a space for creativity, innovation and sharing. He was also a friend and generous mentor who shaped the present and future of tech policy by supporting and teaching others. We are grateful for the work he did, and deeply saddened to lose his voice, his perspective, and above all his spirit, in the work to come. The internet lost one of its champions. RIP Sherwin, we will miss you.
Want the latest news on your digital rights? Then you're in luck! Version 33, issue 4 of EFFector, our monthly-ish email newsletter, is out now! Catch up on rising issues in online security, privacy, and free expression with EFF by reading our newsletter or listening to the new audio version below.
EFFECTOR 33.04 - Highest court hands down a series of critical digital rights decisions
Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.