EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

25,000 EFF Supporters Have Told Apple Not To Scan Their Phones

Wed, 09/01/2021 - 12:13pm

Over the weekend, our petition to Apple asking the company not to install surveillance software in every iPhone hit an important milestone: 25,000 signatures. We plan to deliver this petition to Apple soon, and the more individuals who sign, the more impact it will have. We are deeply grateful to everyone who has voiced their concerns about this dangerous plan.

SIGN THE PETITION

TELL APPLE: DON'T SCAN OUR PHONES

Apple has been caught off guard by the overwhelming resistance to its August 5th announcement that it will begin scanning photos on its devices. In addition to numerous petitions like ours, over 90 organizations across the globe have urged the company to abandon its plans. But the backlash should be no surprise: what Apple intends to do will create an enormous danger to our privacy and security. It will give ammunition to authoritarian governments wishing to expand surveillance, and because the company has compromised security and privacy at the behest of governments in the past, it’s not a stretch to think it may do so again. Democratic countries that strive to uphold the rule of law have also pressured companies like Apple to gain access to encrypted data, and are very likely already considering how this system will allow them to do so more easily in the future.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system that enables screening, takedown, and reporting in its end-to-end messaging.  

Don’t let Apple betray its users. Tell them today: Don’t scan our phones.

 


Vaccine Passport Missteps We Should Not Repeat

Tue, 08/31/2021 - 11:58am

Public health officials and various governments are pushing vaccine mandates with increasing urgency. As mandates roll out, we must protect users of vaccine passports as well as those who do not want to use—or cannot use—a digitally scannable means to prove vaccination. We cannot let tools meant to advance public health be subverted into systems that perpetuate inequity or serve as cover for unrelated, unnecessary data collection.

Over the past year, EFF has been tracking vaccine passport proposals and how they have been implemented. We have objections to many of them—especially those rolled out by opportunistic tech companies that are already creating digital inequity and mismanaging user data. We hope we can stop these systems from transforming into another layer of user tracking.

Paper proof of vaccination raises fewer concerns, as does a digital photo of a paper card displayed on a phone screen. Of much greater concern are scannable vaccination credentials, which might be used to track people’s physical movements through doors and across time. Thus, we oppose any use of scannable vaccination credentials. At a minimum, such systems must have a paper alternative, open source code, and design and policy safeguards to minimize the risk of tracking.

Last year, “immunity passports” were proposed and sometimes implemented before the science on COVID-19 immunity and vaccination was even well developed. Many governments and private companies apparently were driven less by informed public health and science than by the need to promote economic movement. Some organizations and governments even took the opportunity to create a new, digital verification system for the vaccinated. The needed transparency and protections have been lacking, and so have clear boundaries to keep these systems from escalating into an unnecessary surveillance system. Even though we recognize that many vaccine credentialing systems have been implemented in good faith, below are several examples of dangerous missteps that we hope will not be repeated.

New York State’s Excelsior Pass

Launched in April, this optional mobile application has seen gradual adoption. Three key issues have emerged with its deployment.

First, IBM has not been transparent about how this application was built. Instead, the company used vague buzzwords like “blockchain technology” that don’t paint a detailed picture of how it is keeping user data secure.

Second, the Surveillance Technology Oversight Project (S.T.O.P.), a member of the Electronic Frontier Alliance, uncovered a contract between New York State and IBM outlining a “phase 2” of the passport. It would carry not only a significantly higher price tag ($2.5 million to $17 million), but also an expansion of what Excelsior can hold, such as driver’s licenses and other health records.

Third, a bill to protect Covid data was introduced a month after the Excelsior Pass launched. It passed the NY State Assembly, but was never taken up by the NY State Senate. These protections should have been in place before the state rolled out the Excelsior Pass.

A “Clear” Path to Centralizing Vaccination Credentials with Other Personal Data

CLEAR displays a company slogan at San Francisco’s airport.

CLEAR already holds a place in major airports across the United States as the only private company in TSA’s Registered Traveler program. So the company was primed to launch its Health Pass, which is intended to facilitate Covid screening by linking health data to biometric-based digital identification. CLEAR’s original business model was born out of a previous rush to security in a post-9/11 world. Now the company is stepping in for the next rushed security task: vaccination verification for travel. In the words of CLEAR’s Head of Public Affairs, Maria Comella, to Axios:

“CLEAR’s trusted biometric identity platform was born out of 9/11 to help millions of travelers feel safe when flying. Now, CLEAR’s touchless technology is able to connect identity to health insights to help people feel confident walking back into the office.”

The restaurant reservation app OpenTable has just announced plans to integrate CLEAR’s vaccination credentials into its own system. There is no logical limit to how centralized digital identifications like those created by CLEAR might spread into our lives by facilitating proof of vaccination, bringing with them new vectors for tracking our movements and activities.

Of course, CLEAR is not the only company openly luring large government clients to merge scannable proof-of-vaccination systems into larger digital identification and data storage systems. For example, the National Health Service in the U.K. contracted with Entrust, another company that has openly contemplated turning vaccination credentials into national identification systems (which EFF opposes). With no federal laws adequately protecting the privacy of our data, we are being asked to trust the word of profit-driven companies that continue to grow by harvesting and monetizing our data in all its forms.

Likewise, U.S. airlines are using vaccine passports subject to policies that reserve the corporate prerogative to sell data about customers to third parties. So any scan of passengers’ health information can be added to the profiles of the thousands of people who travel each year.

Illinois’ Approach

In Illinois earlier this month, the state’s “Vax Verify” system launched to offer digital credentials to vaccinated citizens. A glaring flaw is the use of Experian, the controversial data broker, to verify the identity of those accessing the portal. The portal even asks for Social Security numbers (optional) to streamline the process with Experian.

Many Americans have been targets of Covid-related scams, so a common piece of advice is to freeze your credit during this turbulent time. This advice appears on Experian’s own website, for example. However, to access Illinois’ Vax Verify, users must unfreeze their credit with Experian to complete registration. This prioritizes a digital vaccine credential over the user’s own credit protection.

The system also defaults to sharing immunization status with third parties. The FAQ page explains that users may retroactively revoke so-called “consent” to this sharing.

A New Inequity

We have long had concerns about "vaccine passports" and "immunity passports" being used to place company profit over true community health solutions and to amplify inequity.

Sadly, we have seen many take the wrong path. And it could get worse. With more than one hundred COVID-19 vaccine candidates undergoing clinical trials across the world, makers of these new digital systems are advocating for a “chain of trust” that marks only certain labs and health institutions as valid. This new marker will deliberately leave behind many people across the world whose systems may not be able to adhere to the requirements these new digital vaccine proof systems create. For example, many of these new systems entail public key infrastructure governance for public key cryptography, which creates a list of "trusted" public keys associated with "trusted" health labs. But the definition of technical “trustworthiness” was never agreed upon or enforced pre-Covid, raising concerns that imposing such systems on the world will lock hundreds of millions of people out of obtaining visas or even traveling, all because their country's labs may not clear these unnecessary technical hurdles. An example is the EU’s Digital COVID Certificate system, which imposes a significant list of technical requirements to achieve interoperability, including data availability, data storage formats, and specific communication and data serialization protocols.

Overview of EU’s Digital COVID Certification System. Source: https://ec.europa.eu/health/sites/default/files/ehealth/docs/digital-green-certificates_v5_en.pdf

This primary reliance on digital passports effectively pushes out paper options for international travel, and potentially domestic travel as well. It devalues paper as a proper check of vaccination proof, because a plain paper card offers nothing the verifier can check against a trusted public key. The only viable paper option is printing out the QR code of the digitally verified credential, which still locks people into these new systems of verification.
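
To make the gatekeeping concern concrete, here is a minimal sketch, in Python using the cryptography library, of the “trusted keys” pattern described above. It illustrates the general public-key model only; it is not the EU system’s actual protocol, and the lab names, payload fields, and signature scheme are hypothetical choices made for the example.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical issuers: one lab on the verifier's "trusted" list, one not.
trusted_lab_key = Ed25519PrivateKey.generate()
untrusted_lab_key = Ed25519PrivateKey.generate()
TRUSTED_ISSUER_KEYS = [trusted_lab_key.public_key()]


def issue_credential(issuer_key, payload):
    """The issuing lab signs the credential payload (roughly what a QR code encodes)."""
    message = json.dumps(payload, sort_keys=True).encode()
    return message, issuer_key.sign(message)


def verify_credential(message, signature):
    """A verifier accepts the credential only if a key on the trusted list signed it."""
    for public_key in TRUSTED_ISSUER_KEYS:
        try:
            public_key.verify(signature, message)
            return True
        except InvalidSignature:
            continue
    return False


payload = {"name": "Jane Doe", "vaccine": "example", "doses": 2}

msg, sig = issue_credential(trusted_lab_key, payload)
print(verify_credential(msg, sig))    # True: the issuing lab is on the trusted list

msg, sig = issue_credential(untrusted_lab_key, payload)
print(verify_credential(msg, sig))    # False: a genuine vaccination, rejected anyway
```

The second credential fails not because the vaccination is fake, but because the issuing lab never made it onto the verifier’s list. That is precisely the exclusion problem described above.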

These new trust-based systems, if implemented in a way that automatically disqualifies people who received genuine vaccinations, will cause dire effects for years to come. They set up a world where certain people can move about easily, while those who have already had a hard time getting visas face yet another wall to climb. Vaccines should be a tool to reopen doors. Digital vaccine passports, as we've seen them deployed so far, are far more likely to slam them shut.

Starve the Beast: Monopoly Power and Political Corruption

Tue, 08/31/2021 - 8:54am
Docket of the Living Dead

In 2017, Federal Communications Commission Chairman Ajit Pai - a former Verizon lawyer appointed by Donald Trump - announced his intention to dismantle the Commission’s hard-won 2015 Network Neutrality regulation. The 2015 order owed its existence to people like you, millions of us who submitted comments to the FCC demanding commonsense protection from predatory ISPs.

After Pai’s announcement, those same millions - and millions of their friends - flooded the FCC’s comment portal, actually overwhelming the FCC’s servers and shutting them down (the FCC falsely claimed it had been hacked). The comments from experts and everyday Americans overwhelmingly affirmed the consensus from the 2015 net neutrality fight: Americans love net neutrality and they expect their regulators to enact it.

But a funny thing happened on the way to the FCC vote: thousands, then millions, of nearly-identical comments flooded into the Commission, all opposed to net neutrality. Some of these comments came from obviously made-up identities, some from stolen identities (including identities stolen from sitting US Senators!), and many, many from dead people. One million of them purported to be sent by Pornhub employees. All in all, 82% of the comments the FCC received were fake, and the overwhelming majority of the fake comments opposed net neutrality.

Sending all these fake comments was expensive. The telecoms industry paid millions to corrupt the political process. That bill wasn’t footed by just one company, either - an industry association paid for the fraud. 

How did that happen?

One Big, Happy Family

Well, telecoms is a highly concentrated industry where companies refuse to compete with one another: instead, they divide up the country into a series of exclusive territories, leaving subscribers with two or fewer ISPs to choose from.

Not having to compete means that your ISP can charge more, deliver less, and pocket the difference. As a sector, the US ISP market is wildly profitable. That’s only to be expected: when companies have monopolies, value is transferred from the company’s customers and workers to its executives and shareholders. That’s why executives love monopolies and why shareholders love to invest in them.

Profits can be converted into policies: the more extra money you have, the more lobbying you can do. Very profitable companies find it much easier to get laws and regulations passed that benefit them than less profitable ones do, and even less profitable companies get their way from lawmakers and regulators more often than the public does.

But excessive profits aren’t the only reason an industry can get its way in the political arena. When an industry is composed of hundreds of small- and medium-sized firms, they aren’t just less profitable (because they compete with one another to lower prices, raise wages and improve their products and services), they also have a harder time cooperating.

When the people who control your industry number in the hundreds or thousands, they have to rent a convention center if they want to get together to hammer out a common lobbying policy, and they’ll struggle to do so - a thousand rival execs can’t even agree on what lunch to buy, much less what laws to buy. 

When control over the industry dwindles to a handful of people, they can all fit around a single table. They often do.

Competition Is For Losers

And that’s how tens of millions of fake anti-net neutrality comments ended up in front of the FCC. The companies in a highly concentrated ISP sector decided to cooperate with, rather than compete against, one another.

This let them rip off the country and make a lot of money. Some of that money was set aside for lobbying, and since there are only a handful of companies that dominate the sector, it was easy for them to decide what to lobby for.

To top it all off, the guy they had to convince was one of their own, a former executive at one of the monopolistic companies that funded the fraud campaign. He was, unsurprisingly, very sympathetic to their cause.

Monopolies equip companies with vast stockpiles of ammo to use in the policy wars. Monopolies reduce the number of companies that have to agree on a target for that ammo. 

How It Started/How It’s Going

Back in the 2000s, the tech sector was on the ropes. Google had two lobbyists in DC. Despite the prominence of a few companies (Microsoft, Yahoo, Netscape), most of the web was in the hands of hundreds of small and medium-sized companies, many of them struggling with the post-9/11 economic downturn.

Meanwhile, the entertainment industry was highly concentrated and highly disciplined. Waves of genuinely idiotic tech laws and regulations crashed over the tech sector, and only some fast, savvy courtroom work by nonprofits and inspired grassroots activism kept these outlandish proposals at bay.

The tech sector of the early 2000s had a much higher aggregate valuation than the entertainment sector, and it was more dynamic and diverse, with new companies appearing out of nowhere and rising to prominence in just a few years, displacing seemingly unassailable giants whose dominance proved fleeting.

But the entertainment industry was concentrated. Music was dominated by six major labels (today it’s three, thanks to mergers and acquisitions); TV, film and publishing were likewise dominated by a handful of companies (and, likewise, the number of companies has contracted since thanks to a series of mergers). Some of these major labels and studios and broadcasters had the same corporate owners, a trend that has only accelerated since the turn of the century.

These monopolized industries possessed the two traits necessary to secure policies favorable to their interests: excessive monopoly profits and streamlined monopoly collaboration. They had a lot of ammo and they all agreed on a set of common targets to blast away at.

Today, Big Tech is just as concentrated as Big Content, and it has an army of lobbyists who impose its will on legislators and regulators. The more concentrated an industry is, the more profitable it is; the more profitable it is, the more lobbyists it has; and the more lobbyists it has, the more it gets its way.

Clash of the Titans

Monopoly begets monopoly. Before the rise of Big Tech, the tech sector was caught in a vicious squeeze between the monopolistic ISP industry and the monopolistic entertainment industry. Today, Big Tech, Big Content and Big Telco each claim the right to dominate our digital lives, and ask us to pick a giant to root for.

We’re not on the side of giants. We’re on the side of users, of the public interest. Big companies can have their customers’ or users’ backs, and when they do, we’ve got their back, too. But we demand more than the right to choose which giant we rely on for a few dropped crumbs.

That’s why we’re interested in competition policy and antitrust. We don’t fetishize competition for its own sake. We want competition because a competitive sector has a harder time making its corporate priorities into law.  The law should reflect the public interest and the will of the people, not the mobilized wealth of corporate barons who claim no responsibility to anyone, save their shareholders.

Have You Tried Turning It Off and On Again?

Even critics of the tech antitrust surge agree that the tech sector is unhealthily concentrated.  But they are apt to point to the outwardly abusive conduct of the sector: using copyright claims  to block interoperability, weaponizing privacy to shut out rivals, selling out net neutrality, embracing censorship, and so on.

We’re in vigorous agreement with this analysis. All of this stuff is terrible for competition. But all this stuff is also enabled by the lack of competition. These are expensive initiatives, funded by monopoly profits, and they’re controversial initiatives that rely on a monopolist’s consensus.

It’s true that sometimes a monopolist defends the public interest while sticking up for its own interests. The Google/Oracle fight over API copyrights saw two tech giants burning millions of dollars to promote their own self-interest. Oracle wanted to change copyright law in a way that would have let it take billions away from Google. Google wanted to keep its billions. For Google to keep its billions, it had to stand up for what’s right: namely, that APIs can’t be copyrighted because they are functional, and because blocking interoperability is counter to the public interest.

If we demonopolized Google - if we forced it to operate in a competitive advertising environment and lowered its profits - then it might not be able to fight off the next Oracle-style assault on the public interest. That’s not an argument for increasing Google’s power - it’s an argument for decreasing Oracle’s power.

Because more often than not, Google and Oracle are on the same side, along with the rest of the tech giants. 

And now that the FCC is getting new leadership, it’s a safe bet that we’ll be fighting about net neutrality again, this time to restore it, against the same shady tactics we saw in 2017. Google might be our ally in fighting back - net neutrality is in the tech sector’s interest, after all - but then again, maybe it will cut another deal with a monopolistic telco.

The way to stop Big Telco from shredding the public interest isn’t to make Google as large as possible and hope it doesn’t switch sides (again): it’s to shrink Big Telco until it fits in a bathtub.

Profits are power. Concentration is power. Concentration is profitable. Profits let merging companies run roughshod over the FTC and become more concentrated. Lather, rinse, repeat.

The system of monopoly is a ravenous beast, a cycle that turns money into power into money into power. We have to break the cycle.

We have to starve the beast.

The Federal Circuit Has Another Chance to Get it Right on Software Copyright

Mon, 08/30/2021 - 6:08pm

When it comes to software, it seems that no matter how many times a company loses on a clearly wrong copyright claim, it will soldier on—especially if it can find a path to the U.S. Court of Appeals for the Federal Circuit. The Federal Circuit is supposed to be almost entirely focused on patent cases, but a party can make sure its copyright claims are heard there too by simply including patent claims early in the litigation, and then dropping them later. In SAS v. WPL, that tactic means that a legal theory on software copyrightability that has lost in three courts across two countries will get yet another hearing. Hopefully, it will be the last, and the Federal Circuit will see this relentless opportunism for what it is.

That outcome, however correct, is far from certain. The Federal Circuit got this issue very wrong just a few years ago, in Oracle v. Google. But with the facts stacked against the plaintiff, and a simpler question to decide, the Federal Circuit might get it right this time.

The parties in the case, software companies SAS Institute Inc. (SAS) and World Programming Ltd. (WPL), have been feuding for years in multiple courts in the U.S. and abroad. At the heart of the case is SAS’s effort to effectively own the SAS Language, a high-level programming language used to write programs for conducting statistical analysis. The language was developed in the 1970s at a public university and dedicated to the public domain, as was software designed to convert and execute SAS Language programs. Works in the public domain can be used by anyone without permission, and that is where the original SAS Language, and the software executing it, live.

A few years later, however, some of its developers rewrote the software and founded a for-profit company to market and sell the new version. That company was alone in the market until, years later, WPL developed its own rival software that can also convert and execute SAS Language programs. Confronted with new competition, SAS ran to court, first in the U.K., then in North Carolina, claiming copyright infringement. It lost both times.

Perhaps hoping that the third time will be the charm, SAS sued WPL in Texas for both patent and copyright infringement. Again, it lost—but it decided to appeal only the copyright claims. As with Oracle v. Google, however, the fact that the case once included patent claims—valid or not—was enough to land it before the Federal Circuit.

It is undisputed that WPL didn’t copy SAS’s actual copyrighted code. Instead, SAS claims WPL copied nonliteral, functional elements of its system: input formats (which say how a programmer should input data to a program to make the program work properly) and output designs (which the computer uses to let the programmer view the results correctly). These interfaces specify how the computer is supposed to operate—in response to inputs in a certain format, produce outputs that are arranged in a certain design. But those interfaces don’t instruct the computer how to perform those functions; for that, WPL wrote its own code. SAS’s problem is that copyright law does not, and should not, grant a statutory monopoly over these functional elements of a computer program.
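
To make that distinction concrete, here is a toy sketch in Python. It is not SAS or WPL code; the “name,score” input format and ranked-table output design are invented for illustration. Two independently written functions accept the same input format and produce the same output layout while sharing no program text, which is the sense in which an interface can be matched without copying any code.

```python
# Toy illustration only. Both functions honor the same hypothetical interface:
# input lines of the form "name,score" and an output design of ranked rows.
# The *interface* is shared; the *code* implementing it is not.

def rank_scores_v1(text: str) -> str:
    rows = []
    for line in text.strip().splitlines():
        name, score = line.split(",")
        rows.append((name.strip(), float(score)))
    rows.sort(key=lambda row: row[1], reverse=True)
    return "\n".join(
        f"{rank}. {name}: {score:g}"
        for rank, (name, score) in enumerate(rows, start=1)
    )


def rank_scores_v2(text: str) -> str:
    parsed = [
        (parts[0].strip(), float(parts[1]))
        for parts in (line.split(",") for line in text.strip().splitlines())
    ]
    ordered = sorted(parsed, key=lambda item: -item[1])
    return "\n".join(
        "{}. {}: {:g}".format(rank, name, score)
        for rank, (name, score) in enumerate(ordered, start=1)
    )


sample = "ada,92\ngrace,97\nalan,88"
assert rank_scores_v1(sample) == rank_scores_v2(sample)  # same interface, different code
```

Under SAS’s theory, the second author’s independent implementation could still infringe simply because the input format and output layout match.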

SAS is desperately hoping that the Federal Circuit will say otherwise, based on the Federal Circuit’s previous ruling, in Oracle v. Google, that choosing among various options can suffice to justify copyright protection. In other words, if a developer had other programming options, the fact that it chose a particular path can allegedly be “creative” enough to merit exclusive rights for 70+ years. As we explained in our amicus brief, that reliance is misplaced.

First, the facts of this case are different: WPL, unlike Google, didn’t copy any actual code. Again, this is undisputed. Second, Oracle v. Google was based on a fundamentally incorrect assumption that the Ninth Circuit (the jurisdiction from which Oracle arose and, therefore, whose law the Federal Circuit was bound to apply) would accept the “creative choices” theory. How do we know that assumption was wrong? Because the Ninth Circuit later said so, in a different case.

But SAS should lose for another reason. In essence, it is trying to claim copyright in processes and methods of operation–elements that, if they are protectable at all, are protected only by patent. If SAS couldn’t succeed on its patent claims, it shouldn’t be allowed to rely on copyright as a backstop to cover the same subject matter. In other words, SAS cannot both (1) evade the limits on patent protection such as novelty, obviousness, eligible patent subject matter, the patent claim construction process, etc.; and, at the same time (2) evade the limits on copyright protection by recasting functional elements as “creative” products.

In addition to these points, our brief seeks to remind the court that the copyright system is intended to serve the public interest, not simply the financial interests of rightsholders such as SAS. The best way for the Federal Circuit to serve that public interest here is to defend the limits on copyright protection for functional parts of computer programs, and to clarify its previous, erroneous ruling on computer copyrightability in Oracle v. Google. We hope the court agrees.

Related Cases: Oracle v. Google

Victory! Lawsuit Proceeds Against Clearview’s Face Surveillance

Mon, 08/30/2021 - 5:10pm

Face surveillance is a growing menace to racial justice, privacy, and free speech. So EFF supports laws that ban government use of this dangerous technology, and laws requiring corporations to get written opt-in consent from a person before collecting their faceprint.

One of the worst offenders is Clearview AI, which extracts faceprints from billions of people without their consent and uses these faceprints to help police identify suspects. For example, police in Miami worked with Clearview to identify participants in a recent protest. Such surveillance partnerships between police and corporations are increasingly common.

Clearview’s faceprinting violates the Illinois Biometric Information Privacy Act (BIPA), which requires opt-in consent to obtain someone’s faceprint. As a result, Clearview now faces many BIPA lawsuits. One was brought by the ACLU and ACLU of Illinois in state court. Many others were filed against the company in federal courts across the country and then consolidated into one federal courtroom in Chicago. In both Illinois and federal court, Clearview argues that the First Amendment bars these BIPA claims. We disagree and filed an amicus brief saying so in each case.

Last week, the judge in the Illinois state case rejected Clearview’s First Amendment defense, denied the company’s motion to dismiss, and allowed the ACLU’s lawsuit to move forward. This is a significant victory for our privacy over Clearview’s profits.

The Court’s Instructive Reasoning

The court began its analysis by holding that faceprinting “involves expression and its predicates, which are entitled to some First Amendment protection.” We agree. EFF has long advocated for First Amendment protection of the right to record on-duty police and the right to code.

The court then held that Clearview’s faceprinting is not entitled to “strict scrutiny” of the speech restraint (one of the highest levels of First Amendment protection), but rather to “intermediate scrutiny.” We agree, because (as our amicus briefs explain) Clearview’s faceprints do not address a matter of public concern, and Clearview has solely commercial purposes.

Applying intermediate scrutiny, the court upheld the application of BIPA’s opt-in consent requirement to Clearview’s faceprinting. The court emphasized Illinois’ important interests in protecting the “privacy and security” of the public from biometric surveillance, including the “difficulty in providing meaningful recourse once a person’s [biometrics] have been compromised.” The court further explained that the opt-in consent requirement is “no greater than necessary” to advance this interest because it “returns control over citizens’ biometrics to the individual whose identities could be compromised.”

As to Clearview’s argument that BIPA hurts its business model, the court stated: “That is a function of having forged ahead and blindly created billions of faceprints without regard to the legality of that process in all states.”

Read here the August 27, 2021, opinion of Judge Pamela McLean Meyerson of the Cook County (Illinois) Circuit Court.

EFF to Council of Europe: Flawed Cross Border Police Surveillance Treaty Needs Fixing—Here Are Our Recommendations to Strengthen Privacy and Data Protections Across the World

Mon, 08/30/2021 - 5:10pm

EFF has joined European Digital Rights (EDRi), the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), and other civil society organizations in recommending 20 solid, comprehensive steps to strengthen human rights protections in the new cross border surveillance draft treaty that is under review by the Parliamentary Assembly of the Council of Europe (PACE). The recommendations aim to ensure that the draft treaty, which grants broad, intrusive police powers to access user information in criminal cross border investigations, contains a robust baseline to safeguard privacy and data protection.

From requiring law enforcement to garner independent judicial authorization as a condition for cross border requests for user data, to prohibiting police investigative teams from bypassing privacy safeguards in secret data transfer deals, our recommendations submitted to PACE will add much-needed human rights protections to the draft Second Additional Protocol to the Budapest Convention on Cybercrime. The recommendations seek to preserve the Protocol’s objective—to facilitate efficient and timely cross-border investigations between countries with varying legal systems—while embedding safeguards protecting individual rights. 

Without these amendments, the Protocol’s credibility is in question. The Budapest Cybercrime Convention has been remarkably successful in terms of signatories—large and small states from around the globe have ratified it. However, Russia’s long-standing goal of replacing the treaty with its own proposed UN draft convention may be adding pressure on the Council of Europe (CoE) to rush its approval instead of extending its terms of reference to properly allow for a meaningful multi-stakeholder consultation. But if the CoE intends to offer a more human rights-protective approach than the UN cybercrime initiative, it must lead by example by fixing the primary technical mistakes we have highlighted in our submission and strengthening the privacy and data protection safeguards in the draft Protocol.

This post is the first of a series of articles describing our recommendations to PACE. The series will also explain how the Protocol will impact legislation in other countries. The draft Protocol was approved by the Council of Europe’s Cybercrime Committee (T-CY) on May 28th, following an opaque, several-year process largely commandeered by law enforcement.

Civil society groups, data protection officials, and defense attorneys were sidelined during the process, and the draft Protocol reflects this deeply flawed and lopsided process. PACE can recommend further amendments to the draft during the treaty’s adoption and final approval process. EFF and partners urge PACE to use our recommendations to adopt new Protocol amendments to protect privacy and human rights across the globe. 

Mischaracterizing the Intrusive Nature of Subscriber Data Access Powers 

One of the draft’s biggest flaws is its treatment, in Article 7, of subscriber data, the most sought-after information by law enforcement investigators. The Protocol’s explanatory text erroneously claims that subscriber information “does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” so it’s less sensitive than other categories of data. 

But, as is increasingly recognized around the world, subscriber information such as a person’s address and telephone number can be, and frequently is, used by police to uncover people’s identities and link them to specific online activities that reveal details of their private lives. Disclosing the identity of people posting anonymously exposes intimate details of individuals’ private lives. The Protocol’s dismissive characterization of subscriber data directly conflicts with judicial precedent, particularly given the Protocol’s broad definition of subscriber information, which includes IP addresses and other online identifiers.

In our recommendations, we therefore urge PACE to align the draft explanatory text’s description of subscriber data with judicial opinions across the world that recognize it as highly sensitive information. Unfettered access to subscribers’ data encroaches on the right to privacy and anonymity, and people’s right to free expression online, putting journalists, whistleblowers, politicians, political dissidents, and others at risk. 

Do Not Mandate Direct Cooperation Between Service Providers and Foreign Law Enforcement

Article 7 calls upon States to adopt legislation that will allow law enforcement in one country to request the production of subscriber data directly from companies located in another country, under the requesting country’s legal standard. Because legal frameworks vary among the signatories, some countries’ laws authorize law enforcement to access subscriber data without appropriate safeguards, such as prior court authorization and/or a reasonable-grounds requirement. The article applies to any public or private service provider, defined very broadly to encompass internet service providers, email and messaging providers, social media sites, cell carriers, and hosting and caching services, regardless of whether they are offered free of charge or for remuneration, and regardless of whether they serve the public or a closed group (e.g., a community network).

For countries with strong legal safeguards, Article 7 will oblige them to remove any law that will impede local service providers holding subscriber data from voluntarily responding to requests for that data from foreign agencies or governments. So, a country that requires independent judicial authorization for local internet companies to produce information about their subscribers, for example, will need to amend its law so companies can directly turn over subscriber data to foreign entities. 

We have criticized Article 7 for failing to provide, or outright excluding, critical safeguards that are included in many national laws. For example, Article 7 does not include any explicit restrictions on targeting activities that implicate fundamental rights, such as freedom of expression or association, and it categorically prevents any state from requiring foreign police to demonstrate, as a condition of access, that the subscriber data they seek will advance a criminal investigation.

This is why we've urged PACE to remove Article 7 entirely from the text of the Protocol. States would still be able to access subscriber data in cross-border contexts, but would instead rely on another provision of the Protocol (Article 8), which also has some issues but includes more safeguards for human rights. 

If Article 7 is retained, the Protocol should be amended to make it easier for states to limit its scope of application. As the text currently stands, countries must decide whether to adopt Article 7 or not when implementing the draft Protocol. But the scope of legal protection many states provide for subscriber data is evolving as many courts and legislatures are increasingly recognizing that access to this personal data can be intrusive and may require additional safeguards. As drafted, if a signatory to the Protocol adds more safeguards to its subscriber data access regime—out of public policy concerns or in response to a court decision—extending these safeguards to foreign police will place it in violation of its obligations under the Protocol. 

Because the draft Protocol gives law enforcement powers with a direct impact on human rights, and will be available to a diverse set of signatories with varying criminal justice systems and human rights records, we recommend that it provide the following additional safeguards for cross-border data requests:

  • Allow a Party to require independent judicial authorization for foreign requests for subscriber data issued to service providers in its territory. Better still, we would like to see a general obligation compelling independent supervision of every cross-border subscriber data request.
  • Allow authorities in the country where service providers are located to be notified about subscriber data requests and given enough information to assess their impact on fundamental rights and freedoms; and
  • Adopt legal measures to ensure that gag requests—confidentiality and secrecy requests—are not inappropriately invoked when law enforcement make cross-border subscriber data access demands.

We are grateful to PACE for the opportunity to present our concerns as it formulates its own opinion and recommendations before the treaty reaches CoE’s final body of approval, the Council of Ministers. We hope PACE will take our privacy and human rights concerns seriously. In recent weeks, EFF and the world have learned that governments across the globe have targeted journalists, human rights activists, dissidents, lawyers, and private citizens for surveillance because of their work or political viewpoints. Regimes are weaponizing technology and data to target those who speak out. We strongly urge PACE to adopt our recommendations for adding strong human rights safeguards to the Protocol to ensure that it doesn’t become a tool for abuse.

Apple’s Plan to Scan Photos in Messages Turns Young People Into Privacy Pawns

Fri, 08/27/2021 - 2:45pm

This month, Apple announced several new features under the auspices of expanding its protections for young people, at least two of which seriously walk back the company’s longstanding commitment to protecting user privacy. One of the plans—scanning photos sent to and from child accounts in Messages—breaks Apple’s promise to offer end-to-end encryption in messaging. And when such promises are broken, it inevitably opens the door to other harms; that’s what makes breaking encryption so insidious. 

Apple’s goals are laudable: protecting children from strangers who use communication tools to recruit and exploit them, and limiting the spread of child sexual abuse material. And it’s clear that there are no easy answers when it comes to child endangerment. But scanning and flagging Messages images will, unfortunately, create serious potential for danger to children and partners in abusive households. It both opens a security hole in Messages, and ignores the reality of where abuse most often happens, how dangerous communications occur, and what young people actually want to feel safe online. 

SIGN THE PETITION

TELL APPLE: DON'T SCAN OUR PHONES

How Messages Scanning Works

In theory, the feature works like this: when photos are sent via Messages to or from users under a certain age (13), those photos will be scanned by a machine learning algorithm. If the algorithm determines that a photo contains “sexually explicit” material, it will offer the user a choice: decline to receive or send the photo, and nothing happens; or choose to receive or send the photo, and the parent account on the Family Sharing plan will be notified. The system also scans photos of users between 13 and 17 years old, but it only warns the user that they are sending or receiving an explicit photo; it does not notify the parents.
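
Put as pseudocode, the flow just described looks roughly like the sketch below. This is a simplified illustration in Python based solely on the description above, not Apple’s implementation; the account fields, classifier stub, and return strings are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Account:
    age: int
    parent_opted_in: bool  # the Family Sharing organizer must enable the feature


def classifier_flags_photo(photo: bytes) -> bool:
    """Stand-in for the on-device machine learning classifier."""
    return True  # placeholder so the example runs; the real model is opaque


def handle_photo(account: Account, photo: bytes, user_proceeds: bool) -> str:
    # Adult accounts, or accounts where the feature was never enabled, are untouched.
    if account.age >= 18 or not account.parent_opted_in:
        return "delivered normally"
    if not classifier_flags_photo(photo):
        return "delivered normally"
    if not user_proceeds:
        return "photo not sent or received; nothing further happens"
    if account.age < 13:
        return "photo sent or received; parent account on the Family Sharing plan is notified"
    # Ages 13-17: the user sees a warning, but parents are not notified.
    return "photo sent or received after a warning; no parent notification"


print(handle_photo(Account(age=10, parent_opted_in=True), b"...", user_proceeds=True))
print(handle_photo(Account(age=15, parent_opted_in=True), b"...", user_proceeds=True))
```

Note that everything in this flow hinges on how the account is designated and who controls the opt-in, a point the next section returns to.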

Children Need In-App Abuse Reporting Tools Instead

The Messages photo scanning feature has three limitations meant to protect users. The feature requires an opt-in on the part of the parent on the Family Sharing plan; it allows the child account to decide not to send or receive the image; and it’s only applicable to Messages users that are designated as children. But it’s important to remember that Apple could change these protections down the road—and it’s not hard for a Family Sharing plan organizer to create a child account and force (or convince) anyone, child or not, to use it, easily enabling spying on non-children. 

Creating a better reporting system would put users in control—and children are users.

Kids do experience bad behavior online—and they want to report it. A recent study by the Center for Democracy and Technology finds that user reporting through online reporting mechanisms is effective in detecting “problematic content on E2EE [end-to-end encrypted] services, including abusive and harassing messages, spam, mis- and disinformation, and CSAM.” And, when given the choice to use online tools to do so, versus reporting to a caregiver offline, they overwhelmingly prefer using online tools. Creating a better reporting system like this would put users in control—and children are users.

But Apple’s plan doesn’t help with that. Instead, it treats children as victims of technology, rather than as users. Apple is offering the worst of both worlds: the company inserts a scanning tool that looks for “explicit” material into the private relationships between parents and their children, and between children and their friends, while ignoring a powerful method for handling the issue. A more robust reporting feature would require real work, and a good intake system. But a well-designed system could meet the needs of younger users, without violating privacy expectations.

Research Shows Parents Are A Bigger Danger for Children than Strangers

Apple’s notification scheme also does little to address the real danger in many, many cases. Of course, the vast majority of parents have a child’s best interests at heart. But the home and family are statistically the most likely sites of sexual assault, and a variety of research indicates that sexual abuse prevention and online safety education programs can’t assume parents are protective. Parents are, unfortunately, more likely than strangers to be the producers of child sexual abuse material (CSAM).

In addition, giving parents more information about a child’s online activity, without first allowing the child to report it themselves, can lead to mistreatment, especially in situations involving LGBTQ+ children or those in abusive households. Outing youth who are exploring their sexual orientation or gender in ways their parents may not approve of has disastrous consequences. Half of homeless LGBTQ youth in one study said they feared that expressing their LGBTQ+ identity to family members would lead to them being evicted, and a large percentage of homeless LGBTQ+ youth were forced to leave their homes due to their sexual orientation or gender. Leaving it up to the child to determine whether and to whom they want to report an online encounter gives them the option to decide how they want to handle the situation, and to decide whether the danger is coming from outside, or inside, the house. 

It isn’t hard to think of other scenarios where this notification feature could endanger young people. How will Apple differentiate a ten-year-old sharing a photo documenting bruises that a parent gave them in places normally hidden by clothes—which is a way that abusers hide their abuse—from a nude photo that could cause them to be sextorted?

Children Aren’t the Only Group Endangered by Apple’s Plan

Unfortunately, it’s not only children who will be put in danger by this notification scheme. A person in an abusive household, regardless of age, could be coerced into using a “child” account, opening Messages users up to the kind of tech-enabled abuse more often found in stalkerware. While Apple’s locked-down approach to apps has made it less likely for someone to install such spying tools on another person’s iPhone, this new feature undoes some of that security. Once set up, an abusive family member could ensure that a partner or other household member can’t send any photos that Apple considers sexually explicit to others without the abuser being notified.

Finally, if other algorithms meant to find sexually explicit images are any indication, Apple will likely sweep up all sorts of non-explicit content with this feature. Notifying a parent that a child is sending explicit material when they are not could also lead to real danger. And while we are glad that Apple’s parental notification scheme stops at twelve, even teenagers, who will see only a warning when they send or receive what Apple considers a sexually explicit photo, could be harmed. What impact does it have when a young woman receives a warning that a swimsuit photo being shared with a friend is sexually explicit? Or photos of breastfeeding? Or nude art? Or protest photos?

Young People Are Users, Not Pawns

Apple’s plan is part of a growing, worrisome trend. Technology vendors are inserting themselves more and more regularly into areas of life where surveillance is most accepted and where power imbalances are the norm: in our workplaces, our schools, and in our homes. It’s possible for these technologies to help resolve those power imbalances, but instead, they frequently offer spying, monitoring, and stalking capabilities to those in power. 

This has significant implications for the future of privacy. The more our technology surveils young people, the harder it becomes to advocate for privacy anywhere else. And if we show young people that privacy isn’t something they deserve, it becomes all-too-easy for them to accept surveillance as the norm, even though it is so often biased, dangerous, and destructive of our rights. Child safety is important. But it’s equally important not to use child safety as an excuse to dangerously limit privacy for every user.

By breaking the privacy promise that your messages are secure, introducing a backdoor that governments will ask to expand, and ignoring the harm its notification scheme will cause, Apple is risking not only its privacy-protective image in the tech world, but also the safety of its young users.


Facebook’s Secret War on Switching Costs

Fri, 08/27/2021 - 1:30pm

When the FTC filed its amended antitrust complaint against Facebook in mid-August, we read it with interest. FTC Chair Lina Khan rose to fame with a seminal analysis of the monopolistic tactics of Amazon, another Big Tech giant, when she was just a law student, and we anticipated that the amended complaint would make a compelling case that Facebook had violated antitrust law.

Much of the coverage of the complaint focused on the new material defining “personal social networking” as a “relevant market” and making the case that Facebook dominated that market thanks to conduct banned under the antitrust laws. Because the court threw out the FTC’s previous complaint for failing to lay out Facebook’s monopoly status in sufficient detail, the new material is important to keep the case going. But as consequential as that market-defining work is, we want to highlight another aspect of the complaint - one that deals directly with the questions of what kinds of systems promote competition and what kinds of systems reduce it.

When antitrust enforcers and scholars theorize about Big Tech, they inevitably home in on “network effects.” A system is said to benefit from “network effects” when its value increases as more people use it - people join Facebook to hang out with the people who’ve already joined Facebook. Once new people join Facebook, they, in turn, become a reason for other people to join Facebook.

Network effects are real, and you can’t understand the history of networked computers without an appreciation for them. Famously, Bob Metcalfe, the inventor of Ethernet networking, coined “Metcalfe’s Law”: “the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²).” That is, every new user can connect with every user already on the network, so the number of possible connections grows roughly with the square of the number of users.

But while network effects are a good predictor of whether a service will get big, they can’t explain why it stays big. 

Cheap printers might entice many people to buy a printer for home, incentivize many retailers to carry ink and paper, and encourage businesses and schools to require home printouts, but why would printer owners shell out big bucks for ink when there are lots of companies making cheap cartridges?

Apple’s App Store might be a great way to find reliable apps (incentivizing people to buy iPhones, and incentivizing programmers to make apps for those iPhone owners), but why continue to shop there once you’ve found the apps you want, rather than dealing directly with the app’s makers, who might give you a discount because they no longer have to cut Apple in for a 30% commission?

And Facebook is full of people whose company you enjoy, but if you don’t like its ads, its surveillance, its deceptive practices, or its moderation policies, why not leave Facebook and find a better platform (or run your own), while continuing to send and receive messages from the communities, friends and customers who haven’t left Facebook (yet)? 

Short answer? Because you can’t. 

Big Printer periodically downgrades your printer with “security updates” that prevent it from using third party cartridges. Apple uses legal and technical countermeasures to stop you from running apps unless you buy them through its store. And Facebook uses all-out warfare and deceptive smear campaigns to stop anyone from connecting their tools to its platform.

Software locks, API restrictions, legal threats, forced downgrades and more - these are why Big Tech stays big. 

Collectively, these are a way to create high “switching costs” and high switching costs are the way to protect the dividends from network effects - to get big and stay big.

Switching costs are how economists refer to all the things you have to give up to switch between products or services. Leaving Facebook might cost you access to people who share your rare disease, or the final messages sent by a dying friend, or your business’s customers, or your creative audience, or your extended family. By blocking interoperability, Facebook ensures that participating in those relationships and holding onto those memories means subjecting yourself to its policies.

Back to the FTC’s amended complaint. In several places, the FTC investigators cite internal Facebook communications in which engineers and executives plotted to increase switching costs in order to make it harder for dissatisfied users to switch to a better, rival service. These examples, which we reproduce below, are significant in several ways:

  1. They show that the FTC is thinking about the practice of engineering in switching costs as anticompetitive and subject to antitrust scrutiny.
  2. They show that Facebook understands that it owes its success to both strong network effects and high switching costs, and that losing the latter could undo the former.
  3. They suggest that interoperability, which lowers switching costs and keeps them low, should be seen as an important tool in the antitrust enforcement toolbox, whether through legislation or as part of litigation settlements.

Here are some examples of Facebookers discussing switching costs, from the FTC’s amended complaint.

Paragraph 87: Facebook Mergers and Acquisitions department emails Mark Zuckerberg to make the case for buying a company with a successful mobile social media strategy: "imo, photos (along with comprehensive/smart contacts and unified messaging) is perhaps one of the most important ways we can make switching costs very high for users - if we are where all users’ photos reside because the upoading [sic] (mobile and web), editing, organizing, and sharing features are best in class, will be very tough for a user to switch if they can’t take those photos and associated data/comments with them." [emphasis added]

Here, Zuckerberg’s executives are proposing that if Facebook could entice people to lock up their family photos inside Facebook’s silo, Facebook could make confiscating those pictures a punishment for disloyal users who switched platforms.

Paragraphs 144/145: A Facebook engineer discusses the plan to reduce interoperability selectively, based on whether a Facebook app developer might help people use rivals to its own projects. “[S]o we are literally going to group apps into buckets based on how scared we are of them and give them different APIs? How do we ever hope to document this? Put a link at the top of the page that says ‘Going to be building a messenger app? Click here to filter out the APIs we won’t let you use!’ And what if an app adds a feature that moves them from 2 to 1? Shit just breaks? And a messaging app can’t use Facebook login? So the message is, “if you’re going to compete with us at all, make sure you don’t integrate with us at all.’? I am just dumbfounded..[T]hat feels unethical somehow, but I’m having difficulty explaining how.  It just makes me feel like a bad person.” 

Paragraph 187: A Facebook executive describes how switching costs are preventing Google’s “Google+” service from gaining users: "[P]eople who are big fans of G+ are having a hard time convincing their friends to participate because 1/there isn’t [sic] yet a meaningful differentiator from Facebook and 2/ switching costs would be high due to friend density on Facebook.” [emphasis added]

Finally, in paragraph 212, the FTC summarizes the ways that switching costs constitute an illegitimate means for Facebook to maintain its dominance: “In addition to facing these network effects, a potential entrant in personal social networking services would also have to overcome the high switching costs faced by users. Over time, users of Facebook’s and other personal social networks build more connections and develop a history of posts and shared experiences, which they cannot easily transfer to another personal social networking provider.  Further, these switching costs can increase over time—a “ratchet effect”—as each user’s collection of content and connections, and investment of effort in building each, continually builds with use of the service.” [emphasis added]

And, the FTC says, Facebook knows it:

“Facebook has long recognized that users’ switching costs increase as users invest more time in, and post more content to, a personal social networking service. For example, in January 2012, a Facebook executive wrote to Mr. Zuckerberg: ‘one of the most important ways we can make switching costs very high for users - if we are where all users’ photos reside . . . will be very tough for a user to switch if they can’t take those photos and associated data/comments with them.’ Facebook’s increase in photo and video content per user thus provides another indication that the switching costs that protect Facebook’s monopoly power remain significant.” [emphasis added]

Network effects are how you get users. Switching costs are how you hold them hostage. The FTC Facebook complaint makes it clear that antitrust regulators have wised up to this phenomenon, and not a moment too soon.

Digital Rights Updates with EFFector 33.5

Thu, 08/26/2021 - 7:41pm

Want the latest news on your digital rights? Then you're in luck! Version 33, issue 5 of EFFector, our monthly-ish email newsletter, is out now! Catch up on rising issues in online security, privacy, and free expression with EFF by reading our newsletter or listening to the new audio version below.

Listen on YouTube

EFFECTOR 33.05 - Apple's Plan to "Think Different" about Encryption opens a backdoor to your private life

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

When It Comes to Antitrust, It’s All Connected

Thu, 08/26/2021 - 6:27pm

A knife was stuck in antitrust in the 1980s and it bled out for the next 40 years. By the 1990s, the orthodox view of antitrust went like this: horizontal monopolies are bad, but vertical monopolies are efficient. In other words, it was bad for consumers when one company was the single source for a good or service, but if a company wanted to own every step in the chain, that was fine. Good, even.

Congress is concerned with Big Tech and has a number of bills aimed at keeping those companies in check. But just focusing on Google, Apple, Facebook, Amazon, and Microsoft won’t fix the problem we find ourselves in. Monopoly is at the heart of today’s business model. For everything.

In tech startups, companies run in the red for years, seeking to flood the zone, undercut the prices of their competitors, and buy up newcomers, until they are the last ones standing. For years, one of Uber's main goals was the destruction of Lyft. A series of leaks and PR disasters kept Uber from succeeding, but it is not the only company pursuing this tactic. Think about how many food delivery apps there used to be. And now think about how many have been bought up and merged with each other.

For internet service providers (ISPs), being a local monopoly is the goal. When Frontier went bankrupt, the public filings revealed that the ISP saw its monopoly territory as a bankable asset. That’s because, as internet access becomes a necessity for everyday life, a monopoly can guarantee a profit. Monopoly ISPs can also gouge us on prices, deliver worse service for more money, and avoid upgrading their networks, since their customers have no better option to choose.

In the world of books, movies, music, and television there are vanishingly few suppliers. Just the other week, publisher Hachette bought Workman Publishing. The fewer publishers there are, the more power they have to force libraries and schools into terrible contracts regarding e-books, giving second-class access to a public good. Disney continues to buy up properties and studios. After buying 21st Century Fox, Disney had 38% of the U.S. box office share in 2019. That means that over a third of the movie market reflected a single company’s point of view.

The larger these companies get, the harder it is for anyone to compete with them. The internet promised to open up opportunities, and businesses’ defensive move was to grow too big to compete with.

That’s the horizontal view. The vertical view is equally distressing. If you want an audiobook, Amazon has locked in exclusive deals for many of the most desired titles. If you want to watch movies or TV digitally, you are likely watching it on a subscription streaming service owned by the same company that made the content. And in the case of Comcast and AT&T, you could be getting it all on a subpar, capped internet service that you pay too much for and is, again, owned by the same company that owns the streaming service and the content.

The chain is too long and the links too big. In order to actually, permanently fix the problem being caused by a lack of competition in technology, we need laws that apply to all of these facets, not simply the social media services.

We’ve already seen the new administration push antitrust law to consider harms to ordinary people beyond just higher prices. Now let’s see it move beyond Facebook, Google, Apple, and Amazon to include major ISPs, other abusive monopolists, and companies that wield monopoly power over narrower but important facets of the internet economy.

Chicago Inspector General: Police Use ShotSpotter to Justify Illegal Stop-and-Frisks

Tue, 08/24/2021 - 7:10pm

The Chicago Office of the Inspector General (OIG) has released a highly critical report on the Chicago Police Department’s use of ShotSpotter, a surveillance technology that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots based on a network of high-powered microphones located on some of the city’s streets. The OIG report finds that “police responses to ShotSpotter alerts rarely produce evidence of a gun-related crime, rarely give rise to investigatory stops, and even less frequently lead to the recovery of gun crime-related evidence during an investigatory stop.” This indicates that the technology is both inaccurate and ineffective at fighting gun crime. This finding is based on the OIG’s quantitative analysis of more than 50,000 records over a 17-month period from the Chicago Police Department (CPD) and the city’s 911 dispatch center.

Even worse, the OIG report finds a pattern of CPD officers detaining and frisking civilians—a dangerous and humiliating intrusion on bodily autonomy and freedom of movement—based at least in part on “aggregate results of the ShotSpotter system.” This is police harassment of Chicago’s already over-policed Black community, and the erosion of the presumption of innocence for people who live in areas where ShotSpotter sensors are active. This finding is based on the OIG’s qualitative analysis of a random sample of officer-written investigatory stop reports (ISRs).

The scathing report comes just days after the AP reported that a 65-year-old Chicago man named Michael Williams was held for 11 months in pre-trial detention based on scant evidence produced by ShotSpotter. Williams’ case was dismissed two months after his defense attorney subpoenaed ShotSpotter. This and another recent report also show how ShotSpotter company officials have changed the projected location and designation of supposed gunshots in a way that makes them more consistent with police narratives.

There are more reasons why EFF opposes police use of ShotSpotter. The technology is all too often over-deployed in majority Black and Latinx neighborhoods. Also, people in public places—for example, having a quiet conversation on a deserted street—are often entitled to a reasonable expectation of privacy, without microphones unexpectedly recording their conversations. But in at least two criminal trials, one in Massachusetts and one in California, prosecutors tried to introduce audio of voices from these high-powered microphones. In the California case, People v. Johnson, the court admitted it into evidence. In the Massachusetts case, Commonwealth v. Denison, the court did not, ruling that a recording of “oral communication” is prohibited “interception” under the Massachusetts Wiretap Act.

Most disturbingly, ShotSpotter endangers the lives and physical safety of people who live in the neighborhoods to which police are dispatched based on false reports of a gunshot. Because of the uneven deployment of ShotSpotter sensors, these residents are disproportionately Black and Latinx. An officer expecting a civilian with a gun is more likely to draw and fire their own gun, even if there was in fact no gunshot. In the words of the Chicago OIG: “there are real and potential costs associated with use of the system, including … the risk that CPD members dispatched as a result of a ShotSpotter alert may respond to incidents with little contextual information about what they will find there—raising the specter of poorly informed decision-making by responding members.”

The Chicago OIG report is also significant because it signals growing municipal skepticism of ShotSpotter technology. We hope more cities will join Charlotte, North Carolina, and San Antonio, Texas, in canceling their contracts with ShotSpotter—which is currently deployed in over 100 U.S. cities. Chicago itself has just renewed its ShotSpotter contract, which cost the city $33 million between August 20, 2018 and August 19, 2021.

According to EFF's Atlas of Surveillance, at least 100 cities in the United States use some kind of acoustic gunshot detection, including ShotSpotter.

The Technology Is Not Effective at Fighting Gun Violence

The OIG report’s findings are very clear. Despite what the ShotSpotter marketing team would have you believe about their technology’s effectiveness, the vast majority of ShotSpotter alerts cannot be connected with any verifiable shooting incident. According to the OIG, just 9% of ShotSpotter alerts with a reported disposition (4,556 of 41,830) indicate evidence of a gun-related criminal offense. Similarly, just 2% of all ShotSpotter alerts (1,056 of 50,176) correlate to an officer-written ISR.

Likewise, a 2021 report by the MacArthur Justice Center, quoted by the OIG, found that 86% of incidents in which CPD officers responded to a ShotSpotter alert did not result in the completion of a case report. In only 9% of CPD responses to ShotSpotter alerts is there any indication that a gun-related criminal offense occurred.

As Deputy Inspector General for Public Safety Deborah Witzburg said about this report, “It’s not about technological accuracy, it’s about operational value.” She added:

“If the Department is to continue to invest in technology which sends CPD members into potentially dangerous situations with little information––and about which there are important community concerns–– it should be able to demonstrate the benefit of its use in combatting violent crime. The data we analyzed plainly doesn’t do that. Meanwhile, the very presence of this technology is changing the way CPD members interact with members of Chicago’s communities. We hope that this analysis will equip stakeholders to make well-informed decisions about the ongoing use of ShotSpotter technology.”

The Technology Is Used to Justify Illegal Police Harassment and Erode the Presumption of Innocence 

Cross referencing the officer-written ISRs with ShotSpotter alerts, the OIG found a pattern of police conducting stop-and-frisks of civilians based at least in part on aggregate ShotSpotter data. This means police are deciding who to stop based on their supposed proximity to large numbers of alerts. Even when there are no specific alerts the police are responding to, the concentration of previous alerts in a specific area often works its way into police justification for stopping and searching a person.

The Fourth Amendment limits police stop-and-frisks. In Terry v. Ohio (1968), the Supreme Court held that police need “reasonable suspicion” of crime to initiate a brief investigative detention of a suspect, and need reasonable suspicion of weapon possession to conduct a pat-down frisk of that suspect. One judicially-approved factor that can give rise to reasonable suspicion, in conjunction with other factors, is a suspect’s presence in a so-called “high crime area.”

In light of the OIG and MacArthur reports, which show that the overwhelming majority of ShotSpotter “alerts” do not lead to any evidence of a gun, aggregate ShotSpotter data cannot reasonably be used as evidence that an area is high in crime. Therefore, courts should hold that it violates the Fourth Amendment for police to stop or frisk a civilian based on any consideration of aggregate ShotSpotter alerts in the area.

Specific cases highlighted in the OIG report demonstrate the way that aggregate ShotSpotter data, used as a blank check for stops and searches, erodes civil liberties and the presumption of innocence. In one case, for example, police wrongly used the prevalence of ShotSpotter alerts in the area, plus a bulge in a person’s hoodie pocket, to stop and pat them down, after they exercised their First Amendment right to give police the middle finger.

Cities Should Stop Using ShotSpotter

Far too often, police departments spend tens of millions of dollars on surveillance technologies that endanger civilians, disparately burden BIPOC, and invade everyone’s privacy. Some departments hope to look proactive and innovative when assuaging public fears of crime. Others seek to justify the way they are already policing, by “tech washing” practices and deployments that result in racial discrimination. Like predictive policing, police departments use ShotSpotter and its aura as a “cutting-edge” Silicon Valley company to claim their failed age-old tactics are actually new and innovative. All the while, no one is getting any safer.

The Chicago OIG report demonstrates that ShotSpotter “alerts” are unreliable and contribute to wrongful stop-and-frisks. It may not recommend that cities stop using ShotSpotter—but EFF certainly will, and we think that is the ultimate lesson that can be learned from this report. 

OnlyFans Content Creators Are the Latest Victims of Financial Censorship 

Tue, 08/24/2021 - 4:59pm

OnlyFans recently announced it would ban sexually explicit content, citing pressure from “banking partners and payout providers.” This is the latest example of a troubling pattern of financial intermediaries censoring constitutionally protected legal speech by shutting down accounts—or threatening to do so.  

OnlyFans is a subscription site that allows artists, performers and other content creators to monetize their creative works—and it has become a go-to platform for independent creators of adult content. The ban on sexually explicit content has been met by an outcry from many creators who have used the platform to safely earn an income in the adult industry.

This is just the latest example of censorship by financial intermediaries. Intermediaries have cut off access to financial services for independent booksellers, social networks, adult video websites, and whistleblower websites, regardless of whether those targeted were trading in First Amendment-protected speech. By cutting off these critical services, financial intermediaries force businesses to adhere to their moral and political standards.  

It is not surprising that, faced with the choice of losing access to financial services or banning explicit content, OnlyFans would choose its payment processors over its users. For many businesses, losing access to financial services seriously disrupts operations and may have existential consequences. 

As EFF has explained, access to the financial system is a necessary precondition for the operations of nearly every Internet intermediary, including content hosts and platforms. The structure of the electronic payment economy makes these payment systems a natural chokepoint for controlling online content. Indeed, in one case, a federal appeals court analogized shutting down financial services for a business to “killing a person by cutting off his oxygen supply.” In that case, Backpage.com, LLC v. Dart, the Seventh Circuit found that a sheriff had violated the First Amendment by strongly encouraging payment processors to cut off financial services to a classified advertising website.  

There has been some movement in Washington to fight financial censorship. Earlier this year, the Office of the Comptroller of the Currency finalized its Fair Access to Financial Services rule, which would have prevented banks from refusing to serve entire classes of customers they find politically or morally unsavory. But the rule was put on hold with the change of administrations in January.

Content moderation is a complex topic, and EFF has written about the implications of censorship by companies closer to the bottom of the technical stack. But content creators should not lose their financial lifelines based on the whims and moral standards of a few dominant and unaccountable financial institutions. 

ACLU Advocate Reining in Government Use of Face Surveillance, Champion of Privacy Rights Research, and Data Security Trainer Protecting Black Communities Named Recipients of EFF’s Pioneer Award

Mon, 08/23/2021 - 2:16pm
Virtual Ceremony September 16 Will Honor Kade Crockford, Pam Dixon, and Matt Mitchell

San Francisco—The Electronic Frontier Foundation (EFF) is honored to announce that Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts, Pam Dixon, executive director and founder of World Privacy Forum, and Matt Mitchell, founder of CryptoHarlem, are recipients of the 2021 Pioneer Award for their work, in the U.S. and across the globe, uncovering and challenging government and corporate surveillance of communities.

The awards will be presented at a virtual ceremony on September 16 starting at 5 pm PT. The keynote speakers this year will be science fiction authors Annalee Newitz and Charlie Jane Anders, hosts of the award-winning podcast “Our Opinions Are Correct.” The ceremony will stream live and free on Twitch, YouTube, Facebook, and Twitter. Audience members are encouraged to give a $10 suggested donation. EFF is supported by small donors around the world and you can become an official member at https://eff.org/PAC-join. To register for the ceremony: https://supporters.eff.org/civicrm/event/register?reset=1&id=318

Activist Kade Crockford is a leader in educating the public about and campaigning against mass electronic surveillance. At the ACLU of Massachusetts, they direct the Technology for Liberty Project, which focuses on ensuring that technology strengthens rights to free speech and expression and is not used to impede our civil liberties, especially privacy rights. Crockford focuses on how surveillance systems harm vulnerable populations targeted by law enforcement—people of color, Muslims, immigrants, and dissidents. Under Crockford’s leadership, the Technology for Liberty Project has used public record requests to shine a light on how state and local law enforcement agencies use technology to surveil communities. Crockford oversaw the filing of over 400 public record requests in 2019 and 2020 seeking information about the use of facial recognition across the state, collecting over 1,400 government documents. They led successful efforts in Massachusetts to organize local support for bans on government use of face surveillance, convincing local police chiefs that the technology endangered privacy in their communities. Crockford worked with seven Massachusetts cities to enact preemptive bans against the technology and, in June 2020, working with youth immigrants’ rights organizers, succeeded in getting facial recognition banned in Boston, the second largest city in the world to do so at the time. Massachusetts lawmakers have credited Crockford for shepherding efforts to pass a police reform bill that reins in how police in the state can use facial recognition. They also led a project to file public record requests with every Massachusetts District Attorney and the state Attorney General to reveal how local prosecutors were using administrative subpoenas, secretly and with no judicial review or oversight, to obtain people’s cell phone and internet records. Kade has written for The Nation, The Guardian, The Boston Globe, WBUR, and many other publications, and runs the dedicated privacy website www.PrivacySOS.org.

Author and researcher Pam Dixon has championed privacy for more than two decades and is a pioneer in examining, documenting, and analyzing how data is utilized in ways that impact multiple aspects of our lives, from finances and health information to identity, among other areas. Dixon founded the World Privacy Forum in 2003, a leading public interest group researching consumer privacy and data, with a focus on documenting and analyzing how individuals’ data interacts within complex data ecosystems and the consequences of those interactions. She has worked extensively on privacy and data governance in the U.S., EU, India, Africa, and Asia. Dixon worked in India for a year conducting and publishing peer-reviewed research on India’s Aadhaar identity system, which was cited twice in the Supreme Court of India’s landmark Aadhaar decision. She works with the UN and WHO on data governance, and with the OECD in its One AI Expert Group. She has been named a global leader in digital identity, in part for her work on identity ecosystems in Africa. She is co-chair of the Data for Development Workgroup at the Center for Global Development, where she is working to bring attention to inequities faced by less wealthy countries with fragile data infrastructures when dealing with data privacy standards created by, and reflecting the priorities of, wealthy countries. Dixon co-authored a report in 2021 calling for a more inclusive approach to data governance and privacy standards in low- and middle-income countries. Her ongoing work in the area of health privacy is extensive, including her work bringing medical identity theft to public attention for the first time, which led to the creation of new protections for patients. She has presented her work on privacy and complex data ecosystems to the Royal Society, and most recently to the National Academy of Sciences.

Matt Mitchell is the founder of CryptoHarlem and a tech fellow for the BUILD program at the Ford Foundation. He is recognized as a leading voice in protecting Black communities from surveillance. Under his leadership, CryptoHarlem provides workshops on digital surveillance and a space for Black people in Harlem, who are over-policed and heavily surveilled, to learn about digital security, encryption, privacy, cryptology tools, and more. He is a well-known security researcher, operational security trainer, and data journalist whose work raising awareness about privacy, providing tools for digital security, and mobilizing people to turn information into action has broken new ground. His work, Mitchell says, is informed by the recognition that there’s a digital version of “stop and frisk” which can be more dangerous for people of color than the physical version, and that using social media has unique risks for the Black community, which is subject to many forms of street-level and online surveillance. CryptoHarlem has worked with the Movement for Black Lives to create a guide for protestors, organizers, and activists during the 2020 protests against police brutality following the murder of George Floyd. Last year he was named to the WIRED 25, a list of scientists, technologists, and artists working to make things better. In 2017 he was selected as a Vice Motherboard Human of the Year for his work protecting marginalized groups. As a technology fellow at the Ford Foundation, Mitchell develops digital security training, technical assistance offerings, and safety and security measures for the foundation’s grantee partners. Mitchell has also worked as an independent digital security/countersurveillance trainer for media and humanitarian-focused private security firms. His personal work focuses on marginalized, aggressively monitored, over-policed populations in the United States. Previously, Mitchell worked as a data journalist at The New York Times and a developer at CNN, Time Inc., NewsOne/InteractiveOne/TVOne/RadioOne, AOL/Huffington Post, and Essence Magazine.

“EFF has been fighting mass surveillance since its founding 31 years ago, and we’ve seen the stakes rise as corporations, governments, and law enforcement increasingly use technology to gather personal information, pinpoint our locations, secretly track our online activities, and target marginalized communities,” said EFF Executive Director Cindy Cohn. “Our honorees are working across the globe and on the ground in local communities to defend online privacy and provide information, research, and training to empower people to defend themselves. Technology is a double-edged sword—it helps us build community, and can also be used to violate our rights to free speech and to freely associate with each other without government spying. We are honoring Kade Crockford, Pam Dixon, and Matt Mitchell for their vision and dedication to the idea that we can challenge and disrupt technology-enabled surveillance.”

Awarded every year since 1992, EFF’s Pioneer Awards recognize the leaders who are extending freedom and innovation on the electronic frontier. Previous honorees have included Malkia Cyril, William Gibson, danah boyd, Aaron Swartz, and Chelsea Manning.

For past Pioneer Award Winners:
https://www.eff.org/pioneer/past-winners

To register for this event:
https://supporters.eff.org/civicrm/event/register?reset=1&id=318

Contact: Karen Gullo, Analyst and Senior Media Relations Specialist, karen@eff.org

New Writing and Management Role on EFF's Fundraising Team

Fri, 08/20/2021 - 1:52pm

Calling all writers! If you are passionate about civil liberties and technology, we have an awesome opportunity for you. We are hiring for a newly-created role of Associate Director of Institutional Support. This senior role will manage the messaging and strategy behind EFF’s foundation grants and corporate support. It’s a chance to join the fun, fearless team that introduces funders to the work EFF does. The role supervises one direct report and will ideally work from our San Francisco office. 

EFF has amazing benefits and offers a flexible work environment. We also prioritize diversity of life experience and perspective, and intentionally seek applicants from a wide range of backgrounds. 

If you’re a storyteller and strategist who loves to roll up your sleeves on grant applications and you thrive in collaborative environments, this could be the perfect role for you.

If you’re interested, please apply today! We’re asking for applicants to get their applications in by September 4, 2021. Want to learn more about the role? Send questions to rainey@eff.org.

Click here to apply, and please help spread the word by sharing this role on social media. 

EFF Joins Global Coalition Asking Apple CEO Tim Cook to Stop Phone-Scanning

Thu, 08/19/2021 - 7:51pm

EFF has joined the Center for Democracy and Technology (CDT) and more than 90 other organizations to send a letter urging Apple CEO Tim Cook to stop the company’s plans to weaken privacy and security on Apple’s iPhones and other products.

SIGN THE PETITION

TELL APPLE: DON'T SCAN OUR PHONES

“Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material (CSAM), we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children,” the letter states. 

As we’ve explained in Deeplinks blog posts, Apple’s planned phone-scanning system opens the door to broader abuses. It decreases privacy for all iCloud photo users, and the parental notification system is a shift away from strong end-to-end encryption. It will tempt liberal democratic regimes to increase surveillance, and will likely bring even greater pressure from regimes that already have online censorship ensconced in law.

We’re proud to join with organizations around the world in opposing this change, including CDT, ACLU, PEN America, Access Now, Privacy International, Derechos Digitales, and many others. If you haven’t already, please sign EFF’s petition opposing Apple’s phone surveillance. 

SIGN THE PETITION

TELL APPLE: DON'T SCAN OUR PHONES

Further reading: 

Illinois Bought Invasive Phone Location Data From Banned Broker Safegraph

Thu, 08/19/2021 - 3:08pm

The Illinois Department of Transportation (IDOT) purchased access to precise geolocation data about over 40% of the state’s population from Safegraph, the controversial data broker recently banned from Google’s app store. The details of this transaction are described in publicly available documents obtained by EFF.

In an agreement signed in January 2019, IDOT paid $49,500 for access to two years’ worth of raw location data. The dataset consisted of over 50 million “pings” per day from over 5 million monthly-active users. Each data point contained precise latitude and longitude, a timestamp, a device type, and a so-called “anonymized” device identifier.

Excerpt from agreement describing data provided by Safegraph to IDOT

Taken together, these data points can easily be used to trace the precise movements of millions of identifiable people. Although Safegraph claimed its device identifiers were “anonymized,” in practice, location data traces are trivially easy to link to real-world identities.
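To make that concrete, here is a minimal sketch, assuming a hypothetical record layout (the field names are ours, not Safegraph’s actual schema), of how a handful of overnight pings tied to an “anonymized” device ID can point to a likely home location:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical pings in the general shape described above: an "anonymized"
# device identifier, precise latitude/longitude, and a timestamp.
pings = [
    {"device_id": "a1b2c3", "lat": 41.87811, "lon": -87.62980, "ts": 1546300800},  # midnight
    {"device_id": "a1b2c3", "lat": 41.87815, "lon": -87.62975, "ts": 1546308000},  # 2 a.m.
    {"device_id": "a1b2c3", "lat": 41.88105, "lon": -87.63250, "ts": 1546351200},  # 2 p.m.
]

def likely_home(device_pings):
    """Guess a device's home by clustering its overnight pings on a ~100 m grid."""
    overnight = Counter()
    for p in device_pings:
        hour = datetime.fromtimestamp(p["ts"], tz=timezone.utc).hour
        if hour >= 22 or hour < 6:  # crude "asleep at home" window
            overnight[(round(p["lat"], 3), round(p["lon"], 3))] += 1
    return overnight.most_common(1)[0][0] if overnight else None

print(likely_home(pings))  # -> (41.878, -87.63): a likely home location
```

Once a likely home (and, with the daytime cluster, a likely workplace) is known, public records can attach a name, which is why the “anonymized” identifier is anonymous in name only.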

In a response to a public records request, IDOT said that it did not store or process the data directly; instead, it hired contracting firm Resource Systems Group, Inc (RSG) to analyze the data on its behalf. The contracts with RSG and Safegraph are part of a larger effort by IDOT to create a “statewide travel demand model.” IDOT intends to use this model to analyze trends in travel across the state and project future growth.

An RSG slide summarizes the volume of data acquired from Safegraph

As smartphones have proliferated, governments around the country increasingly rely on granular location data derived from mobile apps. Federal law enforcement, military, and immigration agencies have garnered headlines for purchasing bulk phone app location data from companies like X-Mode and Venntel. But many other kinds of government agencies also patronize location data brokers, including the CDC, the Federal Highway Administration, and dozens of state and local transportation authorities. 

Safegraph discloses that it acquires location data from smartphone apps, other data brokers, and government agencies, but not which ones. Since it’s extremely difficult to determine which mobile applications transmit data to particular data brokers (and often impossible to know which data brokers sell data to each other), it is highly likely that the vast majority of users whom Safegraph tracks are unaware of their inclusion in its dataset.

“It is a lot of data”

IDOT filed an initial Procurement Justification seeking raw location data from smartphone apps in 2018. In the request, IDOT laid out the characteristics of the dataset it intended to buy. The agency specifically requested “disaggregate[d] (device-specific)” data from within Illinois and a “50 mile buffer of the state.” It wanted more than 1.3 million monthly active users, or at least 10% of the state’s population, with an average of 125 location pings per day from each user. IDOT also requested that the GPS pings be accurate to within 10 meters on average.

Safegraph’s dataset generally exceeded IDOT’s requirements. IDOT wanted to monitor at least 10% of the state’s population, and Safegraph offered 42%. Also, while IDOT only requested one month’s worth of data for $50,000, Safegraph offered two years of data for the same price: one year of historical data, plus one year of new data “updated at a regular cadence.” As a result, IDOT received precise location traces for more than 5 million people, for two years, for less than a penny per person. On the other hand, Safegraph was only able to provide an average of 56 pings per day, less than the requested 125. But as the company assured the agency, that still represented over 50 million data points per day—to quote the agreement, “It is a lot of data.”

Excerpt from the January 2019 agreement explaining Safegraph’s dataset
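As a quick back-of-the-envelope check of the “less than a penny per person” figure, using only the price and user count quoted above:

```python
# Figures quoted from the IDOT/Safegraph agreement described above.
price_usd = 49_500
monthly_active_users = 5_000_000

cost_per_person = price_usd / monthly_active_users
print(f"${cost_per_person:.4f} per person")  # prints $0.0099 -- just under one cent
```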

Who is Safegraph?

Safegraph is led by Auren Hoffman, a veteran of the data broker industry. In 2006, he founded Rapleaf, a controversial company that aimed to quantify the reputation of users on platforms like eBay by linking their online and offline activity into a single profile. Over time, Rapleaf evolved into a more traditional data broker. It was later acquired by TowerData, a company that sold behavioral and demographic data tied to email addresses. In 2012, Hoffman left to run Rapleaf spinoff LiveRamp, an “identity resolution” and marketing data company that was bought by data broker titan Acxiom in 2014. In 2016, Hoffman departed Acxiom to found Safegraph.

Early on, Safegraph sold bulk access to raw geolocation data through its “Movement Panel” product. It collected data via third-party code embedded directly in apps, as well as from the “bidstream.” Gathering bidstream data is a controversial practice that involves harvesting personal information from billions of “bid requests” broadcast by ad networks during real-time bidding.

In 2019, Safegraph spun off a sister brand, Veraset. Since then, Safegraph has tried to present a marginally more privacy-conscious image on its own website: the company’s “products” page mainly lists services that aggregate data about places, not individual devices. Safegraph says it acquires much of its location data from Veraset, thus delegating the distasteful task of actually collecting the data to its smaller sibling. (The exact nature of the relationship between Safegraph and Veraset is unclear.) 

Meanwhile, Veraset appears to have inherited the main portion of Safegraph’s raw data-selling business, including the “Movement Data” product that IDOT purchased. Veraset sells bulk, precise location data about individual devices to governments, hedge funds, real-estate investors, advertisers, other data brokers, and more. On the data broker clearinghouse Datarade, Veraset boasts that it has “the largest, deepest, and most broadly available movement dataset” for the United States. It also offers samples of precise GPS traces tied to advertising IDs. Neither Safegraph nor Veraset disclose the sources of their data beyond vague categories like “mobile applications” and “data compilers”.

One of many IDOT data relationships

IDOT’s purchase from Safegraph was part of a larger project by the agency to model individuals’ transportation patterns. IDOT also worked with HERE Data LLC, another location data broker, and Replica, the company spun off of Google’s Sidewalk Labs. According to IDOT, HERE acquires location data primarily from vehicle navigation services. HERE is owned by a consortium of automakers including BMW, Volkswagen, and Mercedes, and gathers data from connected vehicles under those brands. Replica has been cagey about its data sources, but reports using “mobile location data” as well as “private” sources for real estate and credit transactions. 

As noted above, IDOT did not process the data directly. Instead, it shared the raw data with RSG, which was tasked with deriving useful insights for the transportation agency. A memo from RSG to IDOT, dated June 19, 2018, specifically requested that IDOT purchase bulk location data gathered from smartphone apps for RSG to analyze. RSG is a prolific consultant in transportation planning. Its website claims it has worked with “most” major transportation agencies in the U.S. and lists the Federal Highway Administration, the U.S. Department of Transportation, the NY Metropolitan Transportation Authority, the Florida Department of Transportation, and many others as clients.

A Toxic Pipeline

It is no comfort that IDOT did not acquire or process the raw data itself. Its payment to Safegraph normalizes and props up the dangerous market for phone app location data—harvested from millions of Illinois residents who never seriously considered that this sensitive data about them was being collected, aggregated, and shared.

This particular brand of data-sharing is a growing trend around the country. Data brokers vacuum up granular location data from users’ phones with no accountability, and state and local governments help them monetize it. In some cases, agencies mandate that tech companies share traffic data, as in the case of ride-sharing. In the last decade, this toxic pipeline has aligned government interests with data brokers’ and made it less likely that those same governments will pass laws that crack down on the corporate exploitation of personal data. 

Federal laws (like the Fourth Amendment) and state laws (like California’s Electronic Communications Privacy Act) prevent governments from seizing sensitive personal information from personal devices or companies without a warrant. But many government agencies claim that no laws restrict them from purchasing that same data on the open market. We disagree: laws that protect our data privacy from government surveillance have no such “bill me later” exception from the warrant requirement. We expect courts will reject this governmental overreach (unless police evade judicial review by means of evidence laundering). In the meantime, we support legislation to ban such purchases, including the Fourth Amendment Is Not For Sale Act. We also urge app stores to kick out apps that harvest users’ location data—just as Google kicked out Safegraph.

When data flows from a broker to a government transportation agency, this greatly increases the likelihood of further data flow to law enforcement or immigration agencies. This sort of precise, identifiable location data needs far stronger protections at every level—whether in the hands of governments or private entities. But at the moment, third-party aggregators can and do sell their data to government agencies with near-zero accountability. 

IDOT and Safegraph might argue that the agency is just obtaining traffic patterns. But the data used for these traffic patterns sheds light on all sorts of private activity—from attendance at a protest and trips to hospitals or churches to where you eat lunch and with whom. Even if it’s done for supposedly innocuous ends, the acquisition of large quantities of granular location data about people is too dangerous.

Agencies tempted to use big data about real people should acquire the minimum information necessary to accomplish their goals. Governments must demand detailed information on the provenance of any personal data that they handle, and refuse to do business with companies like Safegraph that buy, sell, or aggregate sensitive phone app location data from users who have not provided real consent to its collection. The interlocking industries of ad tech and data brokers are responsible for rampant privacy harms, and civic governments must not “green wash” these harms in the name of energy efficiency or transportation planning. As a society, we need safeguards in place to ensure that partnerships between tech and government do not cost us more than we gain.

How LGBTQ+ Content is Censored Under the Guise of "Sexually Explicit"

Wed, 08/18/2021 - 2:51pm

The latest news from Apple—that the company will open up a backdoor in its efforts to combat child sexual abuse imagery (CSAM)—has us rightly concerned about the privacy impacts of such a decision.

As always, some groups will be subject to potentially more harm than others. One of the features of Apple’s new plan is designed to provide notifications to minor iPhone users who are enrolled in a Family Plan when they either receive or attempt to send a photo via iMessage that Apple’s machine learning classifier defines as “sexually explicit.” If the minor child is under 13 years of age and chooses to send or receive the content, their parent will be notified and the image saved to the parental controls section of their phone for the parent to view later. Children between 13-17 will also receive a warning, but the parent will not be notified.
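Based solely on Apple’s public description summarized above, the notification logic reduces to something like the sketch below (a simplification; the on-device classifier is proprietary, and the function names and return values here are hypothetical):

```python
def handle_flagged_photo(user_age: int, proceeds_anyway: bool) -> str:
    """Simplified model of what happens after the on-device classifier flags an
    iMessage photo as "sexually explicit" for a minor enrolled in a Family Plan."""
    if not proceeds_anyway:
        # The minor heeds the warning; nothing is sent or viewed.
        return "warning shown; image not sent or viewed"
    if user_age < 13:
        # Parent is notified and the image is saved to parental controls.
        return "warning shown; parent notified; image saved for parental review"
    # Ages 13-17: warning only, no parental notification.
    return "warning shown; no parental notification"

print(handle_flagged_photo(user_age=12, proceeds_anyway=True))
```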

While this feature is intended to protect children from abuse, Apple doesn’t seem to have considered the ways in which it could enable abuse. This new feature assumes that parents are benevolent protectors, but for many children, that isn't the case: parents can also be the abuser, or may have more traditional or restrictive ideas of acceptable exploration than their children. While it's understandable to want to protect children from abuse, using machine learning classifiers to decide what is or is not sexual in nature may very well result in children being shamed or discouraged from seeking out information about their sexuality.

As Apple’s product FAQ explains, the feature will use on-device machine learning to determine which content is sexually explicit—machine learning that is proprietary and not open to public or even civil society review.

The trouble with this is that there’s a long history of non-sexual content—and particularly, LGBTQ+ content—being classified by machine learning algorithms (as well as human moderators) as “sexually explicit.” As Kendra Albert and Afsaneh Rigot pointed out in a recent piece for Wired, "Attempts to limit sexually explicit speech tend to (accidentally or on purpose) harm LGBTQ people more."

From filtering software company Netsweeper to Google News, Tumblr, YouTube and PayPal, tech companies don’t have a good track record when it comes to differentiating between pornography and art, educational, or community-oriented content. A recent paper from scholar Ari Ezra Waldman demonstrates this, arguing that "content moderation for 'sexual activity' is an assemblage of social forces that resembles oppressive anti-vice campaigns from the middle of the last century in which 'disorderly conduct', 'vagrancy', 'lewdness', and other vague morality statutes were disproportionately enforced against queer behavior in public."

On top of that, Apple itself has a history of over-defining “obscenity.” Apple TV has limited content for being too “adult,” and its App Store has placed prohibitions on sexual content—as well as on gay hookup and dating apps in certain markets, such as China, Saudi Arabia, the United Arab Emirates, and Turkey.

Thus far, Apple says that their new feature is limited to “sexually explicit” content, but as these examples show, that’s a broad area that—without clear parameters—can easily catch important content in the net.

Right now, Apple’s intention is to roll out this feature only in the U.S.—which is good, at least, because different countries and cultures have highly different beliefs around what is and is not sexually explicit. 

But even in the U.S., no company is going to satisfy everyone when it comes to defining, via an algorithm, what photos are sexually explicit. Are breast cancer awareness images sexually explicit? Facebook has said so in the past. Are shirtless photos of trans men who’ve had top surgery sexually explicit? Instagram isn’t sure. Is a photo documenting sexual or physical violence or abuse sexually explicit? In some cases like these, the answers aren’t clear, and Apple wading into the debate, and tattling on children who may share or receive the images, will likely only produce more frustration, and more confusion. 

Jewel v. NSA: Americans (Still) Deserve Their Day in Court

Tue, 08/17/2021 - 6:39pm

With little explanation, the Ninth Circuit today affirmed the district court’s decision dismissing our landmark challenge to the US government’s mass communications surveillance, Jewel v. NSA. Needless to say, we are extremely disappointed.  Today’s decision renders government mass surveillance programs essentially unreviewable by U.S. courts, since no individual will be able to prove with the certainty the Ninth Circuit required that they were particularly spied upon.  This hurdle is insurmountable, especially when such programs are shrouded in secrecy, and the procedures for confronting that secrecy are disregarded by the courts.

Though we filed our landmark Jewel v. NSA case in 2008, no court has yet ruled on the merits – whether the mass spying on the Internet and phone communications of millions of Americans violates U.S. constitutional and statutory law. Instead, despite the enormous amount of direct and circumstantial evidence showing that our clients’ communications were swept up by the NSA dragnet surveillance, along with those of millions of other Americans, the trial and appeals courts still found that the plaintiffs lacked legal “standing” to challenge the practices.

As we said in our brief to the Ninth Circuit, this dismissal “hands the keys to the courthouse to the Executive, making it impossible to bring any litigation challenging the legality of such surveillance without the Executive’s permission.  It blinds the courts to what the Executive has admitted: the NSA has engaged in mass surveillance of domestic communications carried by the nation’s leading telecommunications companies, and this surveillance touches the communications and records of millions of innocent Americans.”

This fight has been long and hard. But we remain determined to ensure that the network we all increasingly rely on in our daily lives—for communicating with our families, working, participating in community and political activities, shopping, and browsing—is not also an instrument subjecting all of our actions to NSA mass surveillance. We are evaluating the options for moving the case forward so that Americans can indeed have their day in court.

Related Cases: Jewel v. NSA

Speak Out Against Apple’s Mass Surveillance Plans

Tue, 08/17/2021 - 3:51pm

Mass surveillance is not an acceptable crime-fighting strategy, no matter how well-intentioned the spying. If you’re upset about Apple’s recent announcement that the next version of iOS will install surveillance software in every iPhone, we need you to speak out about it.

SIGN THE PETITION

Tell Apple: Don't Scan Our Phones

Last year, EFF supporters spoke out and stopped the EARN IT bill, a government scheme that could have enabled the scanning of every message online. We need to harness that same energy to let Apple know that its plan to enable the scanning of photos on every iPhone is unacceptable. 

Apple plans to install two scanning systems on all of its phones. One system will scan photos uploaded to iCloud and compare them to a database of child abuse images maintained by various entities, including the National Center for Missing and Exploited Children (NCMEC), a quasi-governmental agency created by Congress to help law enforcement investigate crimes against children. The other system, which operates when parents opt into it, will examine iMessages sent by minors and run them through an algorithm that looks for any type of “sexually explicit” material. If an explicit image is detected, the phone will warn the user and, depending on the user’s age, possibly notify the user’s parent.

These combined systems are a danger to our privacy and security. The iPhone scanning harms privacy for all iCloud photo users, continuously scanning user photos to compare them to a secret government-created database of child abuse images. The parental notification scanner uses on-device machine learning to scan messages, then informs a third party, which breaks the promise of end-to-end encryption.  

Apple’s surveillance plans don’t account for abusive parents, much less authoritarian governments that will push to expand it. Don’t let Apple betray its users.

SIGN THE PETITION

Tell Apple: Don't Scan Our Phones

Further Reading: 

Facebook’s Attack on Research is Everyone's Problem

Thu, 08/12/2021 - 7:20pm

Facebook recently banned the accounts of several New York University (NYU) researchers who run Ad Observer, an accountability project that tracks paid disinformation, from its platform. This has major implications: not just for transparency, but for user autonomy and the fight for interoperable software.

Ad Observer is a free/open source browser extension used to collect Facebook ads for independent scrutiny. Facebook has long opposed the project, but its latest decision to attack Laura Edelson and her team is a powerful new blow to transparency. Worse, Facebook has spun this bullying as defending user privacy. This “privacy-washing” is a dangerous practice that muddies the waters about where real privacy threats come from. On top of that, the company has been gilding such excuses with legally indefensible claims about the enforceability of its terms of service. 

Taken as a whole, Facebook’s sordid war on Ad Observer and accountability is a perfect illustration of how the company warps the narrative around user rights. Facebook is framing the conflict as one between transparency and privacy, implying that a user’s choice to share information about their own experience on the platform is an unacceptable security risk. This is disingenuous and wrong. 

This story is a parable about the need for data autonomy, protection, and transparency—and how Competitive Compatibility (AKA “comcom” or “adversarial interoperability”) should play a role in securing them.

What is Ad Observer?

Facebook’s ad-targeting tools are the heart of its business, yet for users on the platform they are shrouded in secrecy. Facebook collects information on users from a vast and growing array of sources, then categorizes each user with hundreds or thousands of tags based on their perceived interests or lifestyle. The company then sells the ability to use these categories to reach users through micro-targeted ads. User categories can be weirdly specific, cover sensitive interests, and be used in discriminatory ways, yet according to a 2019 Pew survey 74% of users weren’t even aware these categories exist.

To unveil how political ads use this system, ProPublica launched its Political Ad Collector project in 2017. Anyone could participate by installing a browser extension called “Ad Observer,” which copies (or “scrapes”) the ads they see along with information provided under each ad’s “Why am I seeing this ad?” link. The tool then submits this information to researchers behind the project, which as of last year was NYU Engineering’s Cybersecurity for Democracy.

The extension never included any personally identifying information—simply data about how advertisers target users. In aggregate, however, the information shared by thousands of Ad Observer users revealed how advertisers use the platform’s surveillance-based ad targeting tools. 
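For a sense of what that non-identifying data might look like, here is a hypothetical sketch (the field names and aggregation are ours, not the project’s actual schema): each submission carries the ad and its “Why am I seeing this ad?” criteria, and only in aggregate do targeting patterns emerge.

```python
from collections import Counter

# Hypothetical submissions of the kind Ad Observer users might contribute:
# the ad's advertiser and its "Why am I seeing this ad?" targeting criteria,
# with nothing identifying the person who saw the ad.
submissions = [
    {"advertiser": "ExampleCorp", "targeting": ["age 25-44", "interest: fitness"]},
    {"advertiser": "ExampleCorp", "targeting": ["age 25-44", "location: Ohio"]},
    {"advertiser": "OtherPAC",    "targeting": ["age 45+", "interest: hunting"]},
]

# Aggregating across many users reveals how one advertiser slices up its audience.
criteria = Counter(
    c for s in submissions if s["advertiser"] == "ExampleCorp" for c in s["targeting"]
)
print(criteria.most_common())  # [('age 25-44', 2), ('interest: fitness', 1), ('location: Ohio', 1)]
```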

This improved transparency is important to better understand how misinformation spreads online, and Facebook’s own practices for addressing it. While Facebook claims it “do[es]n’t allow misinformation in [its] ads”, it has been hesitant to block false political ads, and it continues to provide tools that enable fringe interests to shape public debate and scam users. For example, two groups were found to be funding the majority of antivaccine ads on the platform in 2019. More recently, the U.S. Surgeon General spoke out on the platform’s role in misinformation during the COVID-19 pandemic—and just this week Facebook stopped a Russian advertising agency from using the platform to spread misinformation about COVID-19 vaccines. Everyone from oil and gas companies to political campaigns has used Facebook to push their own twisted narratives and erode public discourse.

Revealing the secrets behind this surveillance-based ecosystem to public scrutiny is the first step in reclaiming our public discourse. Content moderation at scale is notoriously difficult, and it’s unsurprising that Facebook has failed again and again. But given the right tools, researchers, journalists, and members of the public can monitor ads themselves to shed light on misinformation campaigns. Just in the past year Ad Observer has yielded important insights, including how political campaigns and major corporations buy the right to propagate misinformation on the platform.

Facebook does maintain its own “Ad Library” and research portal. The former has been unreliable and difficult to use, and offers no information about targeting based on user categories; the latter comes swathed in secrecy and requires researchers to allow Facebook to suppress their findings. Facebook’s attacks on the NYU research team speak volumes about the company’s real “privacy” priority: defending the secrecy of its paying customers—the shadowy operators pouring millions into paid disinformation campaigns.

This isn’t the first time Facebook has attempted to crush the Ad Observer project. In January 2019, Facebook made critical changes to the way its website works, temporarily preventing Ad Observer and other tools from gathering data about how ads are targeted. Then, on the eve of the hotly contested 2020 U.S. national elections, Facebook sent a dire legal threat to the NYU researchers, demanding the project cease operation and delete all collected data. Facebook took the position that any data collection through “automated means” (like web scraping) is against the site's terms of service. But hidden behind the jargon is the simple truth that “scraping” is no different than a user copying and pasting. Automation here is just a matter of convenience, with no unique or additional information being revealed. Any data collected by a browser plugin is already, rightfully, available to the user of the browser. The only potential issue with plugins “scraping” data is if it happens without a user’s consent, which has never been the case with Ad Observer. 

Another issue EFF emphasized at the time is that Facebook has a history of dubious legal claims that such violations of service terms are violations of the Computer Fraud and Abuse Act (CFAA). That is, if you copy and paste content from any of the company’s services in an automated way (without its blessing), Facebook thinks you are committing a federal crime. If this outrageous interpretation of the law were to hold, it would have a debilitating impact on the efforts of journalists, researchers, archivists, and everyday users. Fortunately, a recent U.S. Supreme Court decision dealt a blow to this interpretation of the CFAA.  

Last time around, Facebook’s attack on Ad Observer generated enough public backlash that it seemed Facebook was going to do the sensible thing and back down from its fight with the researchers. Last week, however, it turned out that this was not the case.

Facebook’s Bogus Justifications 

Facebook’s Product Management Director, Mike Clark, published a blog post defending the company’s decision to ban the NYU researchers from the platform. Clark’s message mirrored the rationale offered back in October by then-Advertising Integrity Chair Rob Leathern (who has since left for Google). These company spokespeople have made misleading claims about the privacy risk that Ad Observer posed, and then used these smears to accuse the NYU team of violating Facebook users’ privacy. The only thing that was being “violated” was Facebook’s secrecy, which allowed it to make claims about fighting paid disinformation without subjecting them to public scrutiny. 

Secrecy is not privacy. A secret is something no one else knows. Privacy is when you get to decide who knows information about you. Since Ad Observer users made an informed choice to share the information about the ads Facebook showed them, the project is perfectly compatible with privacy. In fact, the project exemplifies how to do selective data sharing for public interest reasons in a way that respects user consent.

It’s clear that Ad Observer poses no privacy risks to its users. Information about the extension is available in an FAQ and privacy policy, both of which accurately and comprehensively describe how the tool worked. Mozilla thoroughly reviewed the extension’s open source code independently before recommending it to users. That’s something Facebook itself could have done, if it was genuinely worried about what information the plugin was gathering.

In Clark’s post defending Facebook’s war on accountability, he claimed that the company had no choice but to shut down Ad Observer, thanks to a “consent decree” with the Federal Trade Commission (FTC). This order, imposed after the Cambridge Analytica scandal, requires the company to strictly monitor third-party apps on the platform. This excuse was obviously not true, as a casual reading of the consent decree makes clear. If there was any doubt, it was erased when the FTC's acting director of the Bureau of Consumer Protection, Sam Levine, published an open letter to Mark Zuckerberg calling this invocation of the consent decree “misleading,” adding that nothing in the FTC’s order bars Facebook from permitting good-faith research. Levine added, "[W]e hope that the company is not invoking privacy – much less the FTC consent order – as a pretext to advance other aims." This shamed Facebook into a humiliating climbdown in which it admitted that the consent decree did not force it to disable the researchers' accounts.

Facebook’s anti-Ad Observer spin relies on both overt and implicit tactics of deception. It’s not just false claims about FTC orders—there’s also subtler work, like publishing a blog post about the affair entitled “Research Cannot Be the Justification for Compromising People’s Privacy,” which invoked the infamous Cambridge Analytica scandal of 2018. This seeks to muddy any distinction between the actions of a sleazy for-profit disinformation outfit with those of a scrappy band of academic transparency researchers.

Let’s be clear; Cambridge Analytica is nothing like Ad Observer. Cambridge Analytica did its dirty work by deceiving users, tricking them into using a “personality quiz” app that siphoned away both their personal data and that of their Facebook “friends,” using a feature provided by the Facebook API. This information was packaged and sold to political campaigns as a devastating, AI-powered, Big Data mind-control ray, and saw extensive use in the 2016 US presidential election. Cambridge Analytica gathered this data and attempted to weaponize it by using Facebook's own developer tools (tools that were already known to leak data), without meaningful user consent and with no public scrutiny. The slimy practices of the Cambridge Analytica firm bear absolutely no resemblance to the efforts of the NYU researchers, who have prioritized consent and transparency in all aspects of their project.

An Innovation-Killing Pretext

Facebook has shown that it can’t be trusted to present the facts about Ad Observer in good faith. The company has conflated Cambridge Analytica’s deceptive tactics with NYU’s public interest research; it’s conflated violating its terms of service with violating federal cybersecurity law; and it’s conflated the privacy of its users with secrecy for its paying advertisers. 

Mark Zuckerberg has claimed he supports an “information fiduciary” relationship with users. This is the idea that companies should be obligated to protect the user information they collect. That would be great, but not all fiduciaries are equal. A sound information fiduciary system would safeguard users’ true control over how they share this information in the first place. For Facebook to be a true information fiduciary, it would have to protect users from unnecessary data collection by first parties like Facebook itself. Instead, Facebook says it has a duty to protect user data from the users themselves.

Even some Facebookers are disappointed with their company’s secrecy and anti-accountability measures. According to a New York Times report, a raging internal debate about transparency followed Facebook’s dismantling of the team responsible for its content-tracking tool CrowdTangle. Interviewees describe a sizable internal faction at Facebook that sees the value of sharing how the platform operates (warts and all), and a cadre of senior execs who want to bury this information. (Facebook disputes this.) Combine this with Facebook’s attack on public research, and you get a picture of a company that wants to burnish its reputation by hiding its sins, under the guise of privacy, from the billions of people who rely on it, instead of owning those mistakes and making amends for them.

Facebook’s reputation-laundering spills out into its relationship with app developers. The company routinely uses privacy-washing as a pretext to kill external projects; a platform snuffing out the third-party tools built on top of it is so common a practice that it has its own name: “getting Sherlocked.” Last year EFF weighed in on another case where Facebook abused the CFAA to demand that the “Friendly” browser cease operation. Friendly allows users to control the appearance of Facebook while they use it, and doesn’t collect any user data or make use of Facebook’s API. Nevertheless, the company sent dire legal threats to its developers, which EFF countered in a letter that demolished the company’s legal claims. This pattern played out again recently with the open source Instagram client Barinsta, which received a cease-and-desist notice from the company.

When developers go against Facebook, the company uses all of its leverage as a platform to respond full tilt. Facebook doesn’t just kill your competing project: it deplatforms you, burdens you with legal threats, and bricks any of your hardware that requires a Facebook login.

What to Do

Facebook is facing a wave of public backlash (again!). Several U.S. senators sent Zuckerberg a letter asking him to clarify the company’s actions. Over 200 academics signed a letter in solidarity with Laura Edelson and the other banned researchers. One simple remedy is clearly necessary: Facebook must reinstate all of the accounts of the NYU research team. Management should also listen to the Facebook workers calling for greater transparency, and cease all CFAA legal threats, not just against researchers, but against anyone accessing their own information in an automated way.

This Ad Observer saga provides even more evidence that users cannot trust Facebook to act as an impartial and publicly accountable platform on its own. That’s why we need tools to take that choice out of Facebook’s hands. Ad Observer is a prime example of competitive compatibility: grassroots interoperability without permission. To prevent further misuse of the CFAA to shut down interoperability, courts and legislators must make it clear that anti-hacking laws don’t apply to competitive compatibility. Furthermore, platforms as big as Facebook should be obligated to loosen their grip on user information, and open up automated access to the basic, useful data that users and competitors need. Legislation like the ACCESS Act would do just that, which is why we need to make sure it delivers.

We need the ability to alter Facebook to suit our needs, even when Facebook’s management and shareholders try to stand in the way.
