EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Party Like It’s 1979: The OG Antitrust Is Back, Baby!

Thu, 08/12/2021 - 6:33pm

President Biden’s July 9 Executive Order on Promoting Competition in the American Economy is a highly technical, 72-part, fine-grained memo on how to address the ways market concentration harms our lives as workers, citizens, consumers, and beyond. 

To a casual reader, this may seem like a dry bit of industrial policy, but woven into the new order is a revolutionary idea that has rocked the antitrust world to its very foundations.

The Paradox of Antitrust

US antitrust law has three pillars: the Sherman Act (1890), the Clayton Act (1914), and the FTC Act (1914). Beyond their legal text, these laws have a rich context, including the transcripts of the debates that the bills’ sponsors participated in, explaining why the bills were written. They arose as a response to the industrial conglomerates of the Gilded Age, and their “robber baron” leaders, whose control over huge segments of the economy gave them a frightening amount of power.

Despite this clarity of intent, the True Purpose of Antitrust has been hotly contested in US history. For much of that history, including the seminal breakup of John D. Rockefeller’s Standard Oil in 1911, the ruling antitrust theory was “harmful dominance.” That’s the idea that companies that dominate an industry are potentially dangerous merely because they are dominant. With dominance comes the ability to impose corporate will on workers, suppliers, other industries, people who live near factories, even politicians and regulators.

The election of Ronald Reagan in 1980 saw the rise of a new antitrust theory, based on “consumer welfare.” Consumer welfare advocates argue that monopolies can be efficient, able to deliver better products at lower prices to consumers, and therefore the government does us all a disservice when it indiscriminately takes on monopolies. 

Consumer welfare’s standard-bearer was Judge Robert Bork, who served as Solicitor General in the Nixon administration. Bork was part of the conservative Chicago School of economics, and wrote a seminal work called “The Antitrust Paradox.”

The Antitrust Paradox went beyond arguing that consumer welfare was a better way to do antitrust than harmful dominance. In his book, Bork offers a kind of secret history of American antitrust, arguing that consumer welfare had always been the intention of America’s antitrust laws, and that we’d all been misled by the text of these laws, the debates surrounding their passage, and other obvious ways of interpreting Congress’s intent. 

Bork argued the true goal of antitrust was protecting us as consumers—not as citizens, or workers, or human beings. As consumers, we want better goods and lower prices. So long as a company used its market power to make better products at lower prices, Bork’s theories insisted that the government should butt out.

This is the theory that prevailed for the ensuing 40 years. It spread from economic circles to the government to the judiciary. It got a tailwind thanks to a well-funded campaign that included a hugely successful series of summer seminars attended by 40 percent of federal judges, whose rulings were measurably impacted by the program.

Morning in America

Everyone likes lower prices and better products, but all of us also have interests beyond narrow consumer issues. We live our days as parents, spouses, friends—not just as shoppers. We are workers, or small business owners. We care about our environment and about justice and equity. We want a say in how our world works.

Competition matters, but not just because it can make prices lower or products better. Competition matters because it lets us exercise self-determination. Market concentration means that choices about our culture, our built environment, our workplaces, and our climate are gathered into ever-fewer hands. Businesses with billions of users and dollars get to make unilateral decisions about our lives. The larger a business looms in our life, the more ways it can hurt us.

The idea that our governments need to regulate companies beyond the narrow confines of “consumer welfare” never died, and now, 40 years on, it’s coming roaring back.

The FTC’s new chair, Lina Khan, burst upon the antitrust scene in 2017, when, as a Yale Law student, she published Amazon’s Antitrust Paradox, a devastating rebuke to Bork’s Antitrust Paradox, demonstrating how a focus on consumer welfare fails to deliver, even on its own terms. Khan is now one of the nation’s leading antitrust enforcers, along with fellow “consumer welfare” skeptics like Jonathan Kanter (now helming the Department of Justice Antitrust Division) and Tim Wu (the White House’s special assistant to the president for technology and competition policy).

Bombshells in the Fine Print

The Biden antitrust order is full of fine detail; it’s clear that the president’s advisors dug deep into competition issues with public interest groups across a wide variety of subjects. We love to nerd out on esoteric points of competition law as much as the next person, and we like a lot of what this memo says about tech and competition, but even more exciting is the big picture stuff.

When the memo charges the FTC with policing corporate concentration to prevent abuses to “consumer autonomy and consumer privacy,” that’s not just a reassurance that this administration is paying attention to some of our top priorities. It’s a bombshell, because it links antitrust to concerns beyond ensuring that prices stay low. 

Decades of consumer welfarism turned the electronic frontier into a monoculture dominated by “a group of five websites, each consisting of screenshots of text from the other four.” This isn’t the internet we signed up for. That’s finally changing.

We get it: this is esoteric, technical stuff. But if there’s one thing we’ve learned in 30 years of fighting for a better digital future, it’s that all the important stuff starts out as dull, technical esoterica. From DRM to digital privacy, bossware to broadband, our issues too often rise to the level of broad concern only once they’ve grown so harmful that everyone has to pay attention to them.

We are living through a profound shift in the framework that determines what kinds of companies are allowed to exist and what they’re allowed to do. It’s a shift for the better. We know nothing is assured. The future won’t fix itself. But this is an opportunity, and we’re delighted to seize it.

A New Bill Would Protect Indie Video Game Developers and App Developers

Thu, 08/12/2021 - 3:09pm

Congress’s recent efforts on antitrust and competition in the tech space have been focused on today’s biggest tech companies, not on setting policy for the sector as a whole. Although Google, Apple, Facebook, and Amazon (and perhaps Microsoft) are the largest companies and therefore the ones generating the bulk of the problems, they are not the only tech companies that may be abusing their dominance in a market. Focusing on only those companies threatens to make any gains in competition policy temporary, as happened with the telecom industry. But new legislation introduced by Senators Blumenthal, Blackburn, and Klobuchar takes a broader view, proposing industry-wide changes to app markets that will improve the landscape for independent developers and their customers.

The Open App Markets Act sets out a platform competition policy that embodies a few basic ideas: the owner of an app store should not be allowed to control the prices that app developers can set on other platforms, or to prevent independent developers from communicating with their customers about discounts and other incentives. App store owners should not be able to require developers to use the store owner’s own in-app payment systems. And app store owners who also control the operating system they run on won't be allowed to restrict customers from using alternative app stores.

Importantly, the bill would cover app stores with 50 million or more US users, which includes not just the Apple and Google app stores but also the largest online game stores.

The high-profile case of Epic Games v. Apple has drawn attention to practices such as Apple’s 30% commission on app sales and in-app purchases and its gag rule on advertising lower prices outside of the App Store, but Apple is not alone here. Valve, the owner of the Steam platform for PC gaming, has been accused of similar practices in an ongoing antitrust lawsuit brought by Wolfire Games and a group of Steam users.

Valve Leverages Its Dominance in PC Gaming Against Users and Independent Developers

The video game market has a vibrant independent developer space, but challenges abound for these smaller developers. To succeed as a game developer, you have to make something new and interesting to gamers: an innovative approach to a classic genre, or a hybrid of game types, can yield great new games. As a result, there is much less pressure toward mergers and acquisitions in this market than in other areas of the technology sector, because gamers move on to the next new game rather than remaining tethered to older ones. Start selling bad games, and customers move on to another company.

But innovation in games depends on some core factors. Developers need a way to access as wide a base of customers as possible, and they need profits from products they produce to keep producing more. When a platform has effectively captured the audience, it can control the profits of the developer in ways that hinder future development while keeping costs above market rates. That is basically the issue with Valve’s Steam today, where Valve enjoys 30% of all revenues generated from sales on its platform while also being used by a supermajority of PC gaming customers.
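To put rough numbers on that dynamic, here is a back-of-the-envelope sketch. The prices below are hypothetical; only the 30% rate comes from the figures above.

```python
# Purely illustrative arithmetic: what a developer keeps under a 30%
# storefront commission versus a cheaper direct sale. Prices are made up.

def developer_share(list_price: float, commission_rate: float) -> float:
    """Return what the developer keeps after the platform's cut."""
    return round(list_price * (1 - commission_rate), 2)

print(developer_share(20.00, 0.30))  # 14.0 -> $20 sale on a 30% store
print(developer_share(16.00, 0.00))  # 16.0 -> $16 direct sale, no commission

# Selling directly at a lower price can leave both the customer and the
# developer better off—which is exactly why a dominant store might forbid
# developers from offering one.
```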

When Wolfire attempted to sell its own games at a lower cost off of Steam’s platform, Valve told them that they would lose access to the Steam market, effectively telling them that independent developers on Steam are not allowed to offer lower prices elsewhere. But losing access to the core audience on Steam would effectively mean losing the business, and thus Valve is able to use its market power to dictate how games are sold, and at what price. This is a classic monopoly problem that the Open App Markets Act addresses.

The Open App Markets Act Prohibits Platforms from Leveraging Their Dominance Against Independent Developers

A rarity in DC, the Open App Markets Act is only 5 pages long and sets forth easy-to-understand rules designed to promote independent developers. The legislation prohibits covered app stores from controlling independent developers’ ability to communicate with their audiences on the platform about business offers such as a discount or a new means of purchasing a game. It does not prevent a platform from charging a commission for sales or dictate what rates they can charge, leaving that to the competitive process. So Valve’s Steam can enjoy the revenues it collects from developers who use its platform, but it can’t control their ability to sell games through other channels on whatever terms they want.

This has significant relevance should an independent developer make it big. Think of games that skyrocket to the top, like Valheim’s meteoric rise to 5 million customers while still in early access, or PlayerUnknown’s Battlegrounds’ (aka PUBG) 13 million copies sold, also during early access. The early access phase is critical for developers who need a revenue infusion to refine and improve their games before full release. Steam gets its payday from those early sales, and the developers benefit from the platform’s audience size. But once developers can stand on their own, having built a customer base, Valve should have no power to control their pricing. Under the Act, should a developer decide to offer their products at a much lower price than found on Steam, Valve would be prohibited from stopping them, and prices for games would come down without depending on a Steam sale.

The Act enforces this new competition policy by empowering the Federal Trade Commission and state attorneys general to bring enforcement lawsuits, and most importantly, also giving independent developers a right to sue a platform for injuries caused by a violation of the new law. The combination of these enforcement mechanisms means a platform would be on notice to avoid conduct that interferes with the business decisions of the developers. It could go a long way toward solving the problems we’re seeing today in both the Apple and Valve stores and many others.

This broadly applicable bill could create real benefits for independent developers because it would change the behavior of every platform that carries an audience large enough to attract them. More importantly, with this competition policy in place, platforms would have to compete on the audiences they can offer developers, rather than controlling those developers’ business decisions to preserve commission revenues.

Why Data-Sharing Mandates Are the Wrong Way To Regulate Tech

Thu, 08/12/2021 - 2:50pm

The tech companies behind the so-called “sharing economy” have drawn the ire of brick-and-mortar businesses and local governments across the country.

For example, take-out apps such as GrubHub and UberEats have grown into a hundred-billion-dollar industry over the past decade, and received a further boost as many sit-down restaurants converted to only take-out during the pandemic. Small businesses are upset, in part, that these companies are collecting and monetizing data about their customers.

Likewise, ride-sharing services have decimated the highly-regulated taxi industry, replacing it with a larger, more nebulous fleet of personal vehicles carrying passengers around major cities. This makes them harder to regulate and plan around than traditional taxis. Alarmed municipal transportation agencies feel that they do not have the tools they need to monitor and manage ride-sharing.

A common thread runs through these emerging industries: massive volumes of sensitive personal data. Yelp, Grubhub, Uber, Lyft, and many more new companies have inserted themselves in between customers and older, smaller businesses, or have replaced those businesses entirely. The new generation of tech companies collect more data about their users than traditional businesses ever did. A restaurant might know its regular customers, or keep track of its best-selling dishes, but Grubhub can track each user’s searches, devices, and meals at restaurants across the city. Likewise, while traditional taxi services may have logged trip times, origins, and destinations, Uber and Lyft can link each trip to a user’s real-world identity and track supply and demand in real time.

This data is attractive for several reasons. It can be monetized through targeted ads or sold directly to data brokers, and it gives larger companies a competitive advantage over their smaller, less information-hungry peers. It allows tech companies to observe market trends, informing decisions about pricing, worker pay, and whom to buy out next. Sharing-economy corporations have every incentive to collect as much data as possible, and few legal restrictions on doing so. As a result, our interactions with everyday services like restaurants are tracked more closely than ever before.

Legislators want to force tech companies to share data

Several bills in states and cities around the country, including California and New York City, propose a “solution”: force the tech companies to share some of the data they collect. But these bills are misguided. While they might give small businesses a short-term boon, they won’t address the larger systems that have led to corporate concentration in the tech sector. They will further encourage the commoditization of our data as a tool for businesses to battle each other, with user privacy caught in the crossfire.

Normalizing new, indiscriminate data sharing is a problem. Instead, regulators should be thinking of ways to protect consumers by limiting data collection, retention, use, and sharing. Creating new mandates to share data simply puts it in the hands of more businesses. This opens up more ways for government seizure of that data and more targets for hackers. 

We’ve sung the praises of interoperability policy in the past, so how is this different? After all, if Facebook should have to share data with its competitors under something like the ACCESS Act, why shouldn’t UberEats have to share data with restaurants? The difference is who’s in control. Good interoperability policy should put the user front and center: data sharing must only happen with a user’s opt-in consent, and only for purposes that directly benefit the user.

Forcing DoorDash to share information with restaurants, or Uber to share data with cities, doesn’t serve users in any way. And these bills don’t require a user’s opt-in consent for the processing of their data. Instead, these policies would make it so that sharing data with one company means that data will automatically end up in the hands of several downstream parties. Since the United States lacks basic consumer privacy laws, recipients of this data will be free to sell it, otherwise monetize it, or share it with law enforcement or immigration officials. This further erodes what little agency users currently have. 

Regulation should aim to protect user rights

The collection and use of personal data by tech companies is a real problem. And big companies wield their data troves as weapons to beat back competitors. But we should address those problems directly: first, with strong privacy laws governing how businesses process our data; and second, with better antitrust enforcement that puts a stop to harmful conglomeration and anticompetitive behavior.

It’s also okay for regulators to monitor and manage ride-sharing and other services that impact the public by requiring reasonable amounts of aggregated and deidentified data. Uber and Lyft have a well-documented history of deliberately misleading local authorities in order to skirt laws. However, any data-sharing requirements must be limited in scope and minimize the risks to individual users and their data. For example, rules should carefully consider how much information is actually necessary to achieve specific governmental goals; often, such information need not be highly granular. And whether the government or a private company holds the information, re-identification—by city transportation agencies, law enforcement, ICE, or any other third party that purchases or steals the data—is always a real concern.
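To sketch what “aggregated and deidentified” can look like in practice, here is a minimal illustration. The record fields and the small-cell threshold are assumptions for the example, not any agency’s actual requirement.

```python
# A minimal sketch of aggregation with small-cell suppression. The data
# model and the k=10 threshold are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Trip:
    rider_id: str      # identifies an individual rider's account
    pickup_zone: str   # e.g. a neighborhood or census tract
    hour: int          # 0-23, local time

def aggregate(trips, k=10):
    """Count trips per (zone, hour), dropping cells with fewer than k trips.

    Suppressing sparse cells is a simple guard against re-identifying
    individual riders; it is not a complete privacy guarantee on its own.
    """
    counts = Counter((t.pickup_zone, t.hour) for t in trips)
    return {cell: n for cell, n in counts.items() if n >= k}

# A regulator receiving only this summary learns where and when demand
# concentrates, without ever seeing rider_id or trip-level records.
```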

Despite what aspiring government contractors may say, agencies should not collect huge amounts of individualized data up front, then figure out what to do with it later. The way to fix bad actors in tech is not to increase non-consensual data sharing—nor to have governments mimic bad actors in tech.

It’s Time for Google to Resist Geofence Warrants and to Stand Up for Its Affected Users

Thu, 08/12/2021 - 1:31pm

EFF would like to thank former intern Haley Amster for drafting this post, and former legal fellow Nathan Sobel for his assistance in editing it.

The Fourth Amendment requires authorities to target search warrants at particular places or things—like a home, a bank deposit box, or a cell phone—and only when there is reason to believe that evidence of a crime will be found there. The Constitution’s drafters put in place these essential limits on government power after suffering under British searches called “general warrants” that gave authorities unlimited discretion to search nearly everyone and everything for evidence of a crime.

Yet today, Google is facilitating the digital equivalent of those colonial-era general warrants. Through the use of geofence warrants (also known as reverse location warrants), federal and state law enforcement officers are routinely requesting that Google search users’ accounts to determine who was in a certain geographic area at a particular time—and then to track individuals outside of that initially specified area and time period.
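Conceptually, a geofence warrant asks the provider to run something like the query below over everyone’s stored location history. The data model here is a simplified assumption for illustration, not Google’s actual schema or process.

```python
# Conceptual sketch of a "reverse location" search: start from everyone's
# stored location points and work backward to whoever was near a place at
# a time. Field names are illustrative, not Google's actual schema.
from dataclasses import dataclass

@dataclass
class LocationPoint:
    account_id: str   # ties the point to a specific user account
    lat: float
    lon: float
    timestamp: int    # Unix seconds

def accounts_in_geofence(points, lat_min, lat_max, lon_min, lon_max,
                         t_start, t_end):
    """Return every account with a stored point inside the box and window."""
    return {
        p.account_id
        for p in points
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    }

# Note the inversion of a normal warrant: there is no named suspect.
# Everyone whose phone reported a point inside the box gets swept in.
```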

These warrants are anathema to the Fourth Amendment’s core guarantee largely because, by design, they sweep up people wholly unconnected to the crime under investigation.

For example, in 2020 Florida police obtained a geofence warrant in a burglary investigation that led them to suspect a man who frequently rode his bicycle in the area. Google collected the man’s location history when he used an app on his smartphone to track his rides, a scenario that ultimately led police to suspect him of the crime even though he was innocent.

Google is the linchpin in this unconstitutional scheme. Authorities send Google geofence warrants precisely because Google’s devices, operating system, apps, and other products allow it to collect data from millions of users and to catalog these users’ locations, movements, associations, and other private details of their lives.

Although Google has sometimes pushed back in court on the breadth of some of these warrants, it has largely acquiesced to law enforcement demands—and the number of geofence warrants law enforcement sends to the company has dramatically increased in recent years. This stands in contrast to documented instances of other companies resisting law enforcement requests for user data on Fourth Amendment grounds.

It’s past time for Google to stand up for its users’ privacy and to resist these unlawful warrants. A growing coalition of civil rights and other organizations, led by the Surveillance Technology and Oversight Project, has previously called on Google to do so. We join that coalition’s call for change and further demand that Google:

  • Resist complying with geofence warrants
  • Be much more transparent about the geofence warrants it receives
  • Provide all affected users with notice, and
  • Give users meaningful choice and control over their private data

As explained below, these are the minimum steps Google must take to show that it is committed to its users’ privacy and the Fourth Amendment’s protections against general warrants.

First: Refuse to Comply with Geofence Warrants

EFF calls on Google to stop complying with the geofence warrants it receives. As it stands now, Google appears to have set up an internal system that streamlines, systematizes, and encourages law enforcement’s use of geofence warrants. Google’s practice of complying with geofence warrants despite their unconstitutionality is inconsistent with its stated promise to protect the privacy of its users by “keeping your information safe, treating it responsibly, and putting you in control.” As recently as October, Google’s parent company’s CEO, Sundar Pichai, said that “[p]rivacy is one of the most important areas we invest in as a company,” and in the past, Google has even gone to court to protect its users’ sensitive data from overreaching government legal process. However, Google’s compliance with geofence warrants is incongruent with these platitudes and the company’s past actions.

To live up to its promises, Google should commit to either refusing to comply with these unlawful warrants or to challenging them in court. By refusing to comply, Google would put the burden on law enforcement to demonstrate the legality of its warrant in court. Other companies, and even Google itself, have done this in the past. Google should not defer to law enforcement’s contention that geofence warrants are constitutional, especially given law enforcement’s well-documented history of trying novel surveillance and legal theories that courts later rule to be unconstitutional. And to the extent Google has refused to comply with geofence warrants, it should say so publicly.

Google’s ongoing cooperation is all the more unacceptable given that other companies that collect similar location data from their users, including Microsoft and Garmin, have publicly stated that they would not comply with geofence warrants.

Second: Be Meaningfully Transparent

Even if Google were to stop complying with geofence warrants today, it still must be much more transparent about geofence warrants it has received in the past. Google must break out information and provide further details about geofence warrants in its biannual Transparency Reports.

Google’s Transparency Reports currently document, among other things, the types and volume of law enforcement requests for user data the company receives, but they do not, as of now, break out information about geofence warrants or provide further details about them. With no detailed reporting from Google about the geofence warrants it has received, the public is left to learn about them via leaks to reporters or by combing through court filings.

Here are a few specific ways Google can be more transparent: 

Immediate Transparency Reforms


Google should disclose the following information about all geofence warrants it has received over the last five years and commit to continue doing so moving forward:

  • The number of geofence warrants Google has received to date, broken out in 6-month increments.
  • The percentage of requests with which it has complied.
  • How many device IDs Google has disclosed per warrant.
  • The duration and geographic area that each geofence warrant covered.

Google should also resist nondisclosure orders and, when they are imposed, litigate to ensure that the government has made the showing required by law. If Google is subject to such an order, or the related docket is sealed (prohibiting the company from disclosing the fact that it has received certain geofence warrants or from providing other details), Google should move to end those orders and unseal those dockets so it can make details about them public as early as the law allows.

Long-term Transparency Reforms


Google should also seek to provide basic details about the court cases and docket numbers for the orders authorizing each geofence warrant, as well as the docket numbers for any related criminal prosecutions it is aware of as a result of those warrants. At a minimum, Google should disclose details on the agencies seeking geofence warrants, broken down by federal agency, state-level agencies, and local law enforcement.

Third: Give All Affected Users Notice

Google must start telling its users when their information is caught up in a geofence warrant—even if that information is de-identified. This notice to affected users should state explicitly what information Google produced, in what format, which agency requested it, which court authorized the warrant, and whether Google provided identifying information. Notice to users here is critical: if people aren’t aware of how they are being affected by these warrants, there can’t be meaningful public debate about them.

To the extent the law requires Google to delay notice or not disclose the existence of the warrant, Google should challenge such restrictions so as to only comply with valid ones, and it should provide users with notice as soon as possible.

It does not appear that Google gives notice to every user whose data is requested by law enforcement. Some affected users have said that Google notified them that law enforcement accessed their account via a geofence warrant. But in some of the cases EFF has followed, it appears that Google has not notified the affected users it identified in response to these warrants, with no public explanation. Google’s policies state that it gives notice to users before disclosing information, but more clarity is warranted here. Google should publicly state whether this policy applies to all users whose information is subject to geofence warrants, or only to those it identifies to law enforcement.

Fourth: Minimize Data Collection and Give Users Meaningful Choice

Many people do not know, much less understand, how and when Google collects and stores location data. Google must do a better job of explaining its policies and practices to users, not processing user data absent opt-in consent, minimizing the amount of data it collects, deleting retained data users no longer need, and giving users the ability to easily delete their data. 

Well before law enforcement ever comes calling, Google must first ensure it does not collect its users’ location data before obtaining meaningful consent from them. This consent should establish a fair way for users to opt into data collection, as click-through agreements which apply to dozens of services, data types, or uses at once are insufficient. As one judge in a case involving Facebook put it, the logic that merely clicking “I agree” indicates true consent requires everyone “to pretend” that users read every word of these policies “before clicking their acceptance, even though we all know that virtually none of them did.”

Google should also explain exactly what location data it collects from users, when that collection occurs, what purpose it is used for, and how long Google retains that data. This should be clear and understandable, not buried in dense privacy policies or terms of service.

Google should also only be collecting, retaining, and using its customers’ location data for a specific purpose, such as to provide directions on Google Maps or to measure road traffic congestion. Data must not be collected or used for a different purpose, such as for targeted advertising, unless users separately opt in to such use. Beyond notice and consent, Google must minimize its processing of user data, that is, only process user data as reasonably necessary to give users what they asked for. For example, user data should be deleted when it is no longer needed for the specific purpose for which it was initially collected, unless the user specifically requests that the data be saved.
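Purpose-bound retention is not hard to express. Here is a minimal sketch, with hypothetical purposes and lifetimes rather than any company’s actual policy.

```python
# A minimal sketch of purpose-bound retention. The purposes, lifetimes,
# and record format are hypothetical, for illustration only.
import time
from dataclasses import dataclass

@dataclass
class LocationRecord:
    purpose: str            # e.g. "navigation", "traffic"
    collected_at: float     # Unix seconds
    user_opted_to_keep: bool = False

# Hypothetical maximum lifetime per purpose, in seconds.
RETENTION = {
    "navigation": 60 * 60,       # directions go stale within the hour
    "traffic": 24 * 60 * 60,     # congestion measurement needs a day at most
}

def prune(records, now=None):
    """Keep a record only while its stated purpose still needs it, or when
    the user explicitly asked for it to be saved."""
    now = time.time() if now is None else now
    return [
        r for r in records
        if r.user_opted_to_keep
        or now - r.collected_at <= RETENTION.get(r.purpose, 0)
    ]
```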

Although Google allows users to manually delete their location data and to set automated deletion schedules, Google should confirm that these tools are not illusory. Recent enforcement actions by state attorneys general allege that users cannot fully delete their data, much less fully opt out of having their location data collected at all.

*  *  *

 Google holds a tremendous amount of power over law enforcement’s ability to use geofence warrants. Instead of keeping quiet about them and waiting for defendants in criminal cases to challenge them in court, Google needs to stand up for its users when it comes to revealing their sensitive data to law enforcement.

If You Build It, They Will Come: Apple Has Opened the Backdoor to Increased Surveillance and Censorship Around the World

Wed, 08/11/2021 - 6:24pm

Apple’s new program for scanning images sent on iMessage steps back from the company’s prior support for the privacy and security of encrypted messages. The program, initially limited to the United States, narrows the understanding of end-to-end encryption to allow for client-side scanning. While Apple aims at the scourge of child exploitation and abuse, the company has created an infrastructure that is all too easy to redirect to greater surveillance and censorship. The program will undermine Apple’s defense that it can’t comply with the broader demands.

For years, countries around the world have demanded access to and control over encrypted messages, asking technology companies to “nerd harder” when told that access to messages in the clear is incompatible with strong encryption. The Apple child safety message scanning program is currently being rolled out only in the United States.

The United States has not been shy about seeking access to encrypted communications, pressuring the companies to make it easier to obtain data with warrants and to voluntarily turn over data. However, the U.S. faces serious constitutional issues if it wanted to pass a law that required warrantless screening and reporting of content. Even if conducted by a private party, a search ordered by the government is subject to the Fourth Amendment’s protections. Any “warrant” issued for suspicionless mass surveillance would be an unconstitutional general warrant. As the Ninth Circuit Court of Appeals has explained, "Search warrants . . . are fundamentally offensive to the underlying principles of the Fourth Amendment when they are so bountiful and expansive in their language that they constitute a virtual, all-encompassing dragnet[.]" With this new program, Apple has failed to hold a strong policy line against U.S. laws undermining encryption, but there remains a constitutional backstop to some of the worst excesses. But U.S. constitutional protections may not necessarily be replicated in every country.

Apple is a global company, with phones and computers in use all over the world, and it faces the pressure from many governments that comes along with that. Apple has promised it will refuse government “demands to build and deploy government-mandated changes that degrade the privacy of users.” It is good that Apple says it will not, but this is not nearly as strong a protection as saying it cannot—which could not honestly be said about any system of this type. Moreover, if it implements this change, Apple will need to not just fight for privacy, but win in legislatures and courts around the world. To keep its promise, Apple will have to resist the pressure to expand the iMessage scanning program to new countries, to scan for new types of content, and to send reports outside of parent-child relationships.

It is no surprise that authoritarian countries demand companies provide access and control to encrypted messages, often the last best hope for dissidents to organize and communicate. For example, Citizen Lab’s research shows that—right now—China’s unencrypted WeChat service already surveils images and files shared by users, and uses them to train censorship algorithms. “When a message is sent from one WeChat user to another, it passes through a server managed by Tencent (WeChat’s parent company) that detects if the message includes blacklisted keywords before a message is sent to the recipient.” As the Stanford Internet Observatory’s Riana Pfefferkorn explains, this type of technology is a roadmap showing “how a client-side scanning system originally built only for CSAM [Child Sexual Abuse Material] could and would be suborned for censorship and political persecution.” As Apple has found, China, with the world’s biggest market, can be hard to refuse. Other countries are not shy about applying extreme pressure on companies, including arresting local employees of the tech companies. 

But potent pressure to access encrypted data often also comes from democratic countries that strive to uphold the rule of law, at least at first. If companies fail to hold the line in such countries, the changes made to undermine encryption can easily be replicated by countries with weaker democratic institutions and poor human rights records—often using similar legal language, but with different ideas about public order and state security, as well as what constitutes impermissible content, from obscenity to indecency to political speech. This is very dangerous. These countries, with poor human rights records, will nevertheless contend that they are no different. They are sovereign nations, and they will see their public-order needs as equally urgent. They will contend that if Apple is providing access to any nation-state under that state’s local laws, Apple must also provide access to other countries, at least, under the same terms.

'Five Eyes' Countries Will Seek to Scan Messages 

For example, the Five Eyes—an alliance of the intelligence services of Canada, New Zealand, Australia, the United Kingdom, and the United States—warned in 2018 that they would “pursue technological, enforcement, legislative or other measures to achieve lawful access solutions” if the companies didn’t voluntarily provide access to encrypted messages. More recently, the Five Eyes have pivoted from terrorism to the prevention of CSAM as the justification, but the demand for unencrypted access remains the same, and the Five Eyes are unlikely to be satisfied without changes to assist terrorism and criminal investigations too.

The United Kingdom’s Investigatory Powers Act, following through on the Five Eyes’ threat, allows the Secretary of State to issue “technical capability notices,” which oblige telecommunications operators to maintain the technical ability of “providing assistance in giving effect to an interception warrant, equipment interference warrant, or a warrant or authorisation for obtaining communications data.” As the UK Parliament considered the IPA, we warned that a “company could be compelled to distribute an update in order to facilitate the execution of an equipment interference warrant, and ordered to refrain from notifying their customers.”

Under the IPA, the Secretary of State must consider “the technical feasibility of complying with the notice.” But the infrastructure needed to roll out Apple’s proposed changes makes it harder to say that additional surveillance is not technically feasible. With Apple’s new program, we worry that the UK might try to compel an update that would expand the current functionality of the iMessage scanning program, with different algorithmic targets and wider reporting. As the iMessage “communication safety” feature is entirely Apple’s own invention, Apple can all too easily change its own criteria for what will be flagged for reporting. Apple may receive an order to adopt its hash matching program for iPhoto into the message pre-screening. Likewise, the criteria for which accounts will apply this scanning, and where positive hits get reported, are wholly within Apple’s control. 

Australia followed suit with its Assistance and Access Act, which likewise allows for requirements to provide technical assistance and capabilities, with the disturbing potential to undermine encryption. While the Act contains some safeguards, a coalition of civil society organizations, tech companies, and trade associations, including EFF and—wait for it—Apple, explained that they were insufficient. 

Indeed, in Apple’s own submission to the Australian government, Apple warned that “the government may seek to compel providers to install or test software or equipment, facilitate access to customer equipment, turn over source code, remove forms of electronic protection, modify characteristics of a service, or substitute a service, among other things.” Apple would do well to remember that these very techniques could also be used in an attempt to mandate or change the scope of Apple’s scanning program.

While Canada has yet to adopt an explicit requirement for plain text access, the Canadian government is actively pursuing filtering obligations for various online platforms, which raise the spectre of a more aggressive set of obligations targeting private messaging applications. 

Censorship Regimes Are In Place And Ready to Go

For the Five Eyes, the ask is mostly for surveillance capabilities, but India and Indonesia are already down the slippery slope to content censorship. The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”), which took effect earlier this year, directly imposes dangerous requirements for platforms to pre-screen content. Rule 4(4) compels content filtering, requiring that providers “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules.

India’s defense of the 2021 Rules, written in response to criticism from three UN Special Rapporteurs, highlighted the very real dangers to children while skipping over the much broader mandate of the scanning and censorship rules. The 2021 Rules impose proactive and automatic enforcement of their content takedown provisions, requiring the proactive blocking of material previously held to be forbidden under Indian law. These laws broadly include those protecting “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality.” This is no hypothetical slippery slope—it’s not hard to see how this language could be dangerous to freedom of expression and political dissent. Indeed, India’s track record with its Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers, and poets for leading rallies and posting political messages on social media, highlights this danger.

It would be no surprise if India claimed that Apple’s scanning program was a great start towards compliance, with a few more tweaks needed to address the 2021 Rules’ wider mandate. Apple has promised to protest any expansion, and could argue in court, as WhatsApp and others have, that the 2021 Rules should be struck down, or that Apple does not fit the definition of a social media intermediary regulated under them. But the Indian rules illustrate both the governmental desire and the legal backing for pre-screening encrypted content, and Apple’s changes make it all the easier to slip into this dystopia.

This is, unfortunately, an ever-growing trend. Indonesia, too, has adopted Ministerial Regulation MR5 to require service providers (including “instant messaging” providers) to “ensure” that their system “does not contain any prohibited [information]; and [...] does not facilitate the dissemination of prohibited [information]”. MR5 defines prohibited information as anything that violates any provision of Indonesia’s laws and regulations, or creates “community anxiety” or “disturbance in public order.” MR5 also imposes disproportionate sanctions, including a general blocking of systems for those who fail to ensure there is no prohibited content and information in their systems. Indonesia may also see the iMessage scanning functionality as a tool for compliance with Regulation MR5, and pressure Apple to adopt a broader and more invasive version in their country.

Pressure Will Grow

The pressure to expand Apple’s program to more countries and more types of content will only continue. In the fall of 2020, a series of leaked documents from the European Commission foreshadowed an anti-encryption law being proposed to the European Parliament, perhaps this year. Fortunately, there is a backstop in the EU: under Article 15 of the e-Commerce Directive (2000/31/EC), Member States are not allowed to impose a general obligation on providers to monitor the information that users transmit or store. Indeed, the Court of Justice of the European Union (CJEU) has stated explicitly that intermediaries may not be obliged to monitor their services in a general manner in order to detect and prevent illegal activity of their users; such an obligation would be incompatible with fairness and proportionality. Despite this, in a leaked internal document published by Politico, the European Commission committed itself to an action plan for mandatory detection of CSAM by relevant online service providers (expected in December 2021) that pointed to client-side scanning as the solution—one that could potentially apply to secure private messaging apps—seizing upon the notion that it preserves the protection of end-to-end encryption.

For governmental policymakers who have been urging companies to nerd harder, wordsmithing harder is just as good. The end goal—access to unencrypted communications—remains the same, and if it can be achieved in a way that arguably leaves a more narrowly defined end-to-end encryption in place, all the better for them.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, the adoption of the iPhoto hash matching to iMessage, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. Apple has a fully built system just waiting for external pressure to make the necessary changes. China and doubtless other countries already have hashes and content classifiers to identify messages impermissible under their laws, even if they are protected by international human rights law. The abuse cases are easy to imagine: governments that outlaw homosexuality might require a classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand a classifier able to spot popular satirical images or protest flyers.
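To see how little would have to change, consider this deliberately simplified sketch. The hashing and flagging below are toy stand-ins, not Apple’s NeuralHash or its actual protocol; the point is that the matching code stays the same while the list it is handed does all the policy work.

```python
# Toy illustration of client-side scanning: content is checked against a
# provider-supplied list before it is ever encrypted and sent. This is a
# stand-in, not Apple's actual design.
import hashlib

def fingerprint(content: bytes) -> str:
    # Real systems use perceptual hashes; a cryptographic hash keeps the
    # illustration simple.
    return hashlib.sha256(content).hexdigest()

def scan_before_send(attachments, blocklist):
    """Return the indexes of attachments whose fingerprints appear in the
    supplied blocklist."""
    return [i for i, content in enumerate(attachments)
            if fingerprint(content) in blocklist]

# The scanning routine is identical whether `blocklist` holds CSAM
# fingerprints or, under government pressure, fingerprints of protest
# flyers and satire. The policy lives entirely in the list—and in who
# gets told about a match.
```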

Now that Apple has built it, they will come. With good intentions, Apple has paved the road to mandated security weakness around the world, enabling and reinforcing the arguments that, should the intentions be good enough, scanning through your personal life and private communications is acceptable. We urge Apple to reconsider and return to the mantra Apple so memorably emblazoned on a billboard at 2019’s CES conference in Las Vegas: What happens on your iPhone, stays on your iPhone.

O (no!) Canada: Fast-moving proposal creates filtering, blocking and reporting rules—and speech police to enforce them

Tue, 08/10/2021 - 4:44pm

Policymakers around the world are contemplating a wide variety of proposals to address “harmful” online expression. Many of these proposals are dangerously misguided and will inevitably result in the censorship of all kinds of lawful and valuable expression. And one of the most dangerous proposals may be adopted in Canada. How bad is it? As Stanford’s Daphne Keller observes, “It's like a list of the worst ideas around the world.” She’s right.

These ideas include:

  • broad “harmful content” categories that explicitly include speech that is legal but potentially upsetting or hurtful
  • a hair-trigger 24-hour takedown requirement (far too short for reasonable consideration of context and nuance)
  • an effective filtering requirement (the proposal says service providers must take reasonable measures which “may include” filters, but, in practice, compliance will require them)
  • penalties of up to 3 percent of the providers' gross revenues or up to 10 million dollars, whichever is higher
  • mandatory reporting of potentially harmful content (and the users who post it) to law enforcement and national security agencies
  • website blocking (platforms deemed to have violated some of the proposal’s requirements too often might be blocked completely by Canadian ISPs)
  • onerous data-retention obligations

All of this is terrible, but perhaps the most terrifying aspect of the proposal is that it would create a new internet speech czar with broad powers to ensure compliance, and continuously redefine what compliance means.

These powers include the right to enter and inspect any place (other than a home):

“in which they believe on reasonable grounds there is any document, information or any other thing, including computer algorithms and software, relevant to the purpose of verifying compliance and preventing non-compliance . . . and examine the document, information or thing or remove it for examination or reproduction”; to hold hearings in response to public complaints; and to “do any act or thing . . . necessary to ensure compliance.”

But don’t worry—ISPs can avoid having their doors kicked in by coordinating with the speech police, who will give them "advice" on their content moderation practices. Follow that advice and you may be safe. Ignore it and be prepared to forfeit your computers and millions of dollars.

The potential harms here are vast, and they'll only grow because so much of the regulation is left open. For example, platforms will likely be forced to rely on automated filters to assess and discover "harmful" content on their platforms, and users caught up in these sweeps could end up on file with the local cops—or with Canada’s national security agencies, thanks to the proposed reporting obligations.

Private communications are nominally excluded, but that is cold comfort—the Canadian government may decide, as contemplated by other countries, that chat groups of various sizes are not ‘private.’ If so, end-to-end encryption will be under further threat, with platforms pressured to undermine the security and integrity of their services in order to fulfill their filtering obligations. And regulators will likely demand that Apple expand its controversial new image assessment tool to address the broad "harmful content" categories covered by the proposal.

In the United States and elsewhere, we have seen how rules like this hurt marginalized groups, both online and offline. Faced with expansive and vague moderation obligations, little time for analysis, and major legal consequences if they guess wrong, companies inevitably overcensor—and users pay the price.

For example, a U.S. law intended to penalize sites that hosted speech related to child sexual abuse and trafficking led large and small internet platforms to censor broad swaths of speech with adult content. The consequences of this censorship have been devastating for marginalized communities and groups that serve them, especially organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom. For example, the law prevented sex workers from organizing and utilizing tools that have kept them safe. Taking away online forums, client-screening capabilities, "bad date" lists, and other intra-community safety tips means putting more workers on the street, at higher risk, which leads to increased violence and trafficking. The impact was particularly harmful for trans women of color, who are disproportionately affected by this violence.

Indeed, even “voluntary” content moderation rules are dangerous. For example, policies against hate speech have shut down online conversations about racism and harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against depictions of violence have removed reports about the Syrian war and accounts of human rights abuses of Myanmar's Rohingya. These voices, and the voices of aboriginal women in Australia, Dakota pipeline protestors and many others, are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are being flagged for takedown.

The powerless struggle to be heard in the first place; platform censorship ensures they won’t be able to take full advantage of online spaces either.

Professor Michael Geist, who has been doing crucial work covering this and other bad internet proposals coming out of Canada, notes that the government has shown little interest in hearing what Canadians think of the plans. Nonetheless, the government says it is taking comments. We hope Canadians will flood the government with responses.

But it's not just Canadians who need to worry about this. Dangerous proposals in one country have a way of inspiring other nations' policymakers to follow suit—especially if those bad ideas come from widely respected democratic countries like Canada.

Indeed, it seems like the people who drafted this policy looked to other countries for inspiration—but ignored the criticism those other policies have received from human rights defenders, the UN, and a wide range of civil society groups. For example, the content monitoring obligations echo proposals in India and the UK that have been widely criticized by civil society, not to mention three UN Rapporteurs. The Canadian proposal seeks to import the worst aspects of Germany’s Network Enforcement Act (“NetzDG”), which deputizes private companies to police the internet on a rushed timeline that precludes any hope of a balanced legal analysis, leading to takedowns of innocuous posts and satirical content. The law has been heavily criticized in Germany and abroad, and experts say it conflicts with the EU’s central internet regulation, the E-Commerce Directive. Canada's proposal also bears a striking similarity to France's "hate speech" law, which was struck down as unconstitutional.

These regulations, like Canada’s, depart significantly from the more sensible, if still imperfect, approach being contemplated in the European Union’s Digital Services Act (DSA). The DSA sets limits on content removal and allows users to challenge censorship decisions. Although it contains some worrying elements that could result in content over-blocking, the DSA doesn’t follow in the footsteps of other disastrous European internet legislation that has endangered freedom of expression by forcing platforms to monitor and censor what users say or upload online.

Canada also appears to have lost sight of its trade obligations. In 2018, Canada, the United States, and Mexico finalized the USMCA agreement, an updated version of NAFTA. Article 19.17 of the USMCA prohibits treating platforms as the originators of content when determining liability for information harms. But this proposal does precisely that—in multiple ways, a platform’s legal risk depends on whether it properly identifies and removes harmful content it had no part in creating.

Ironically, perhaps, the proposal would also further entrench the power of U.S. tech giants over social media, because they are the only ones who can afford to comply with these complex and draconian obligations.

Finally, the regulatory scheme would depart from settled human rights norms. Article 19 of the International Covenant on Civil and Political Rights allows states to limit freedom of expression only under select circumstances, provided they comply with a three-step test: the limitation must be prescribed by law, have a legitimate aim, and be necessary and proportionate. Limitations must also be interpreted and applied narrowly.

Canada’s proposal falls far short of meeting these criteria. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law as the authoritative global standard for freedom of expression on their platforms. It’s profoundly disappointing to see Canada force companies to violate human rights law instead.

This law is dangerous to internet speech, privacy, security, and competition. We hope our friends in the Great White North agree, and raise their voices to send it to the scrap heap of bad internet ideas from around the globe.

What to Do When Schools Use Canvas or Blackboard Logs to Allege Cheating

Mon, 08/09/2021 - 4:12pm

Over the past few months, students from all over the country have reached out to EFF and other advocacy organizations because their schools—including teachers and administrators—have made flimsy claims about cheating based on digital logs from online learning platforms that don’t hold up to scrutiny. Such claims were made against over a dozen students at the Dartmouth Geisel School of Medicine, which EFF and the Foundation for Individual Rights in Education (FIRE) criticized for being a misuse, and misunderstanding, of the online learning platform technology. Dartmouth ended that investigation and dismissed all allegations after a media firestorm. If your school is making similar accusations against students, here’s what we recommend.

Students Deserve the Evidence Against Them

Online learning platforms provide a variety of digital logs to teachers and administrators, but those same logs are not always made available to the accused students. This is unfair. True due process for cheating allegations requires that students see the evidence against them, whether that’s videos from proctoring tools, or logs from test-taking or learning management platforms like Canvas or Blackboard.


It can be difficult to know what logs to ask for, because different online learning platforms call this data by different names. In the case of Canvas, there may be multiple types of logs, depending on whether a student used the platform to take a test or access course materials while studying for it. 

Bottom line: students should be given copies of any logs that are being cited as evidence of cheating, and any logs that may be exculpatory. It’s all too easy for schools to cherry-pick logs that only indicate possible misconduct. With course material access logs, for example, schools often only share (if they share at all) logs that indicate a student’s device accessed material that is relevant to the subject of the test, while dismissing logs that show access of materials that are less relevant, thus hiding evidence that the access was the result of an automated link between the device and platform. Any allegation should start with the student being shown everything that the administration has access to—and we’re calling on learning platforms like Canvas and Blackboard to give students direct access, too.

A sample log from Blackboard

Digital Logs Are Unreliable Evidence of Cheating

It’s important for both students and school officials to understand why digital logs are unreliable evidence of cheating. Course material access logs, for example, can only show that a page, document, or file was accessed by a device—not necessarily why or by whom (if anyone). Much like a cell phone pinging a tower, logs may show files being pinged by a device in short time periods, suggesting a non-deliberate process, as was the case with the access logs we saw from Dartmouth medical students. It can be impossible to know for sure from the logs alone if a student intentionally accessed any of the files, or if the pings happened due to delayed loading, or automatic refresh processes that are commonplace in most websites and online services. 
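To make that failure mode concrete, here is a deliberately simplified Python sketch, not Canvas or Blackboard code, with made-up file names and refresh intervals, showing how a course page left open in a background tab can generate access-log entries with no student interaction at all:

```python
# Hypothetical illustration only -- not Canvas or Blackboard code.
# A course page left open in a background tab that auto-refreshes will
# "access" every embedded file on each refresh, producing log rows that
# look identical to deliberate clicks.
from datetime import datetime, timedelta

def simulate_auto_refresh(linked_files, start, refresh_minutes=5, refreshes=3):
    """Return access-log rows produced with zero student interaction."""
    rows = []
    for i in range(refreshes):
        timestamp = start + timedelta(minutes=i * refresh_minutes)
        for filename in linked_files:
            # The log records that a device fetched the file; it cannot
            # record whether a human was looking at the screen.
            rows.append({"time": timestamp.isoformat(), "action": "viewed", "file": filename})
    return rows

if __name__ == "__main__":
    for row in simulate_auto_refresh(["Lecture12_slides.pdf", "Practice_exam_key.pdf"],
                                     datetime(2021, 3, 27, 2, 14)):
        print(row)
```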

Canvas, for its part, has stated multiple times that both test-taking logs and course material access logs are not reliable. According to the company, test-taking logs, which purport to show student activity during a Canvas-administered test, “are not intended to validate academic integrity or identify cheating for a quiz.” Similarly, logs that purport to show student access to class documents uploaded to Canvas are also not accurate. As the company explains: “This data is meant to be used for rollups and analysis in the aggregate, not in isolation for auditing or other high-stakes analysis involving examining single users or small samples.”

Blackboard has so far not made any public statements on the accuracy of its logs, but when contacted, the company said they are working on a public disclaimer to avoid any misconceptions on the accuracy and use of this type of data. The company was clear that logs should not be used to allege cheating: “Blackboard does not recommend using this data alone to detect student misconduct, and further, when an inquiry is made by a client related to this type of investigation, Blackboard consistently advises on the possible inaccurate conclusions that can be drawn from the use of such data.” Both Canvas and Blackboard should be more transparent with their users about the accuracy of their logs. For now, it's imperative that educators and administrators understand the unreliability of these logs, which both companies have admitted, albeit not as openly as we would like. 

Collaboration Between Students Can Be Key

If one student is being charged with cheating based on digital logs, it’s likely others are as well, so don’t be afraid of rallying fellow students. At Dartmouth medical school, collective activism helped individual students push back against the cheating allegations, ultimately forcing the administration to withdraw them. Dartmouth students accused of cheating worked together to uncover flaws in the investigation, then contacted advocacy organizations and the press, and held on-campus protests. 

Sympathetic teachers and administrators may also be valuable resources when it comes to pointing out unreliable evidence and due process problems. It may also be helpful to reach out to a technologist where possible, given the technical expertise required to examine digital data. Even a school computer club may be able to offer assistance. 

Surveillance Is Not the Solution

If a school is unable to use digital logs to prove cheating, the administration may consider adding even more invasive measures, like proctoring tools. But mandating more surveillance of students is not the answer. Schools should use technology to serve students, rather than using it as a tool to discipline them.

Disciplinary technologies that start by assuming guilt, rather than promoting trust, create a dangerous environment for students. Many schools now monitor online activity, like social media posts. They track what websites students visit. They require students to use technology on their laptops that collects and shares private data with third-party companies, while other schools have implemented flawed facial recognition technology. And many, many schools have on-campus cameras, more and more of which feed directly to police. 

But these technologies are often dangerously biased, and profoundly ineffective. They rob students of the space to experiment and learn without being monitored at every turn. And they teach young people to expect and allow surveillance, particularly when a power imbalance makes it difficult to fight back, whether that monitoring is by a school, an employer, a romantic partner, or the government. This problem is not just a slippery slope—it’s a cliff, and we must not push an entire generation off of it. Privacy is a human right, and schools should be foundational in a young person’s understanding of what it means to live in a society that respects and protects human rights.

EFF’s Statement on the Use of E-Learning Platform Logs in Misconduct Allegations

If necessary, you may wish to forward to your teachers or administrators this blog post on the problems with using digital logs as evidence of academic misconduct. If course material access logs, specifically, are being cited against you, you may forward EFF’s statement below. While we cannot assist every student individually, we hope this will help guide schools away from improperly using digital logs as evidence of cheating:

As a nonprofit dedicated to defending digital privacy, free speech, and innovation, including in the classroom, our independent research and investigation has determined that there are several scenarios in which course material access logs on e-learning platforms can be generated without any student interaction, for example due to delayed loading on a device or automatic refreshing of webpages. Instructure, the company behind the e-learning platform Canvas, has publicly stated that its logs (both course material access logs and test-taking logs) are not accurate and should not be used for academic misconduct investigations. The New York Times, in its own investigation into Canvas access logs, found this to be true as well. Blackboard, too, has stated that inaccurate conclusions can be drawn from the use of its logs. Any administrator or teacher who interprets digital logs as evidence that a student was cheating may very well be turning false positives into accusations of academic misconduct.

Educators who seek out technical evidence of students cheating, whether those are through logs, proctoring apps, or other computer-generated techniques, must also seek out technical expertise, follow due process, and offer concrete routes of appeal to students. We urge universities to protect the due process rights of all students facing misconduct charges by ensuring basic procedural safeguards are in place to guarantee fairness. These include, among other things, access to the full suite of evidence—including evidence that might tend to exculpate the student—and sufficient technical guidance for factfinders to interpret the evidence marshaled against the student. Students should also have time to meaningfully prepare for any hearing. These safeguards are necessary to ensure a just and trustworthy outcome is reached.

The Company Behind Online Learning Platform Canvas Should Commit to Transparency, Due Process for Students

Mon, 08/09/2021 - 4:05pm

Canvas is an online learning platform created by the Utah-based education technology company Instructure. In the past year, the platform has also been turned into a disciplinary technology, as more and more schools have come to rely on Canvas to drive allegations of cheating—despite student protests and technical advice. So far the company has shied away from the controversy. But it’s time for Instructure to publicly and unequivocally tell schools: Canvas does not provide reliable evidence of academic misconduct.

Schools use Canvas in two ways, both of which result in digital logs being generated by the software. First, schools can use Canvas to administer tests, and the platform provides logs of the test-taking activity. Second, schools can use Canvas to host learning materials such as course lectures and notes, and the platform provides logs of when specific course material was accessed by a student’s device. Neither of these logs are accurate for disciplinary use, and Canvas knows this. 

Since January, the Canvas instructor guide has explicitly stated: “Quiz logs should not be used to validate academic integrity or identify occurrences of cheating.” In February, an employee of Instructure commented in a community forum that “weirdness in Canvas Quiz Logs may appear because of various end-user [student] activities or because Canvas prioritizes saving student's quiz data ahead of logging events. Also, there is a known issue with logging of ‘multiple answer’ questions” (emphasis original). The employee concluded that “unfortunately, I can’t definitively predict what happened on the users’ end in that particular case.” 

And as we have previously written, along with the New York Times, course material access logs also do not accurately reflect student activity—they could either indicate that a student was actively engaging with the course material, or that a student’s device was passively logged in to the website, but the student was not actively accessing the course material. Canvas’ API documentation states that access logs should not be used for “high-stakes analysis” of student behavior.

Despite the admitted and inherently unreliable nature of Canvas logs, and an outcry by accused students and digital rights organizations, schools continue to rely on Canvas logs to determine cheating—and Instructure continues to act as if nothing is wrong. Meanwhile, students’ educational careers are being harmed by these flimsy accusations.

Instructure Must Right This Wrong

Last year, the administration of James Madison University lowered the grades of students who had been flagged as “inactive” during an exam according to Canvas test-taking logs. Students there spoke out to criticize the validity of the logs. Earlier this year, over a dozen medical students at Dartmouth’s Geisel medical school were accused of cheating after a dragnet investigation of their Canvas course material access logs. Dartmouth’s administration eventually retracted the allegations, but not before students spent months fighting the allegations, while fearing what they could mean for their futures. And in the past few months, EFF has heard from other students around the country who have been accused of cheating by their schools, based solely or primarily on Canvas logs.

Cheating accusations can result in lowered grades, a black mark on student transcripts, suspension, and even expulsion. Despite the serious consequences students face, they often have very limited recourse. Disturbingly, Canvas provides logs to administrators and teachers, but accused students have been unable to see those same logs, either via the platform itself or from school officials. 

Students deserve better. Schools should accept that Canvas logs cannot replace concrete, dispositive evidence of cheating. If you are a student who has been affected by the misuse of Canvas logs, we’ve written a guide for educating your administrators and teachers on their inaccuracy.

Instructure, for its part, must do better. Admitting to the unreliability of Canvas logs in obscure webpages is not enough. We reached out privately to Instructure, with no response. Now we are publicly calling on the company to issue a clear, public announcement that Canvas logs are unreliable and should not be used to fuel cheating accusations. The company should also allow students to access the same logs their schools are increasingly using to accuse them of academic misconduct—which is important because, when viewed in their entirety, Canvas logs often don’t reveal activity consistent with cheating.

Instructure has a responsibility to prevent schools from misusing their products. Taking action now would show the company’s commitment to the integrity of the academic process, and would give students a chance to face their accusers on the same footing, rather than resigning themselves to an unjust and opaque process.

Tech Rights Are Workers' Rights: Doordash Edition

Fri, 08/06/2021 - 6:44pm

Doordash workers are embroiled in a bitter labor dispute with the company: at issue, the tips that “Dashers” depend on to make the difference between a living wage and the poorhouse. Doordash has a long history of abusing its workers’ tips, including a particularly ugly case brought by the Washington, D.C. Attorney General that was only settled when Doordash paid back millions in stolen Dashers’ tips.

Doordash maintains that its workers are “independent contractors” who can pick and choose among the delivery jobs available from moment to moment, based on the expected compensation. Given the outsized role that tips play in Dashers’ compensation, you’d think that the company would tell the workers the size of the tip that its customers had offered on each job.

But that’s not the case. Though customers input their tips when they place their orders, the amount is hidden from drivers until they complete the job - turning each dispatch into a casino game where the dealer knows the payout in advance but the worker only finds out whether they’ve made or lost money on a delivery after it’s done.

Dashers aren’t stupid - nor are they technologically unsophisticated. Dashers made heavy use of Para, an app that inspected Doordash’s dispatch orders and let drivers preview the tips on offer before they took the job. Para allowed Dashers to act as truly independent agents who were entitled to the same information as the giant corporation that relied on their labor.

But what’s good for Dashers wasn’t good for Doordash: the company wants to fulfill orders, even if doing so means that a driver spends more on gas than they make in commissions. Hiding tip amounts from drivers allowed the company to keep drivers in the dark about which runs they should make and which ones they should decline.

That’s why Doordash changed its data model to prevent Para from showing drivers tips. And rather than come clean about its goal of keeping drivers from knowing how much they would be paid, it made deceptive “privacy and data security” claims. Among its claims: that Para violated its terms of service by “scraping.”

Scraping is an old and honorable tool in the technologist’s toolkit, a cornerstone of Competitive Compatibility (AKA comcom, or adversarial interoperability). It allows developers to make new or improved technologies that connect to existing ones, with or without permission from the company that made the old system.  Comcom lets users and toolsmiths collaborate to seize the means of computation, resisting disciplinary technologies like the bossware that is gradually imposing Doordash-style technological controls on all kinds of workers. It’s possible to do bad things with scraping - to commit privacy violations and worse - but there’s nothing intrinsically sinister about scraping.
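As a purely generic illustration of what scraping-style interoperability can look like (this is not Para’s code, and the field names below are invented), a helper tool might do nothing more exotic than read a value out of data the official app already receives:

```python
# Generic sketch of scraping/adversarial interoperability -- not Para's code.
# Field names like "tip_amount" are invented for illustration.
import json
from typing import Optional

def extract_tip(dispatch_payload: str) -> Optional[float]:
    """Surface a tip field that the app receives but hides from the worker."""
    offer = json.loads(dispatch_payload)
    return offer.get("tip_amount")

if __name__ == "__main__":
    sample = '{"order_id": "A123", "base_pay": 2.50, "tip_amount": 6.00}'
    tip = extract_tip(sample)
    print(f"Tip on this offer: ${tip:.2f}" if tip is not None else "Tip hidden")
```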

Doordash loves comcom, when they’re the ones deploying it. The company routinely creates listings for restaurants that have never agreed to use it for delivery services, using “search engine optimization” and anticompetitive, loss-making pricing to interpose itself between struggling restaurateurs and their diners. 

Dashers also have a long history of subverting the technological controls that make their working lives so hard. But despite Doordash’s celebration of “disruption,” it has zero tolerance for apps that turn the tables on technological control. So Doordash stopped providing the tip information in the stream of information, effectively eliminating Para’s ability to show crucial tip information to Dashers.

Dashers are not giving up. When their technology stopped working, they switched to coordinated labor action. At the top of their demands: the  right to know what they’re going to be paid before they do a job - a perfectly reasonable thing to demand. The fact that Doordash intentionally designed an app to hide that information, and then cut off an app that tried to provide it, is ugly. Doordash should just tell Dashers the truth.   

And if they won’t, Dashers should be allowed to continue to develop and run programs that extract that information from the Doordash app, even if that involves decrypting a message or doing something else that the company doesn’t like. Reverse-engineering a program and modifying it can be fully compatible with data security and privacy.

Don’t get us wrong, the digital world needs strong legal privacy protections, which is why we support a strong federal privacy law with a private right of action. That way, your privacy would be protected whether or not a company decided to take it seriously.  But it’s hard to see how giving Dashers good information about what they will be paid is a privacy problem. And we all need to be on alert for companies that use “privacy-washing” to defend business decisions that hurt workers.  

Putting Doordash in charge of the information Dashers need would be a bad idea even if the company had a great privacy track-record (the company does not have a great privacy track-record!). It’s just too easy to use privacy as an all-purpose excuse for whatever restrictions the company wants to put on its technology.

Doordash didn’t invent this kind of spin. It is following the example set by a parade of large companies that break interoperability to improve their own bottom line at others’ expense, whether that’s HP claiming that it blocks third-party ink to protect you from blurry printouts, or car makers saying that they only want to shut down independent mechanics to defend you from murdering stalkers, or Facebook saying it only threatened accountability journalists as part of its mission to defend our privacy.

In a world where we use devices and networks to do everything from working to learning to being in community, the right to decide how those devices and networks work is fundamental. As the Dashers have shown us, when an app is your boss, you need a better app.

Why Companies Keep Folding to Copyright Pressure, Even If They Shouldn’t

Fri, 08/06/2021 - 2:59pm

The giant record labels, their association, and their lobbyists have succeeded in getting a number of members of the U.S. House of Representatives to pressure Twitter to pay money it does not owe, to labels who have no claim to it, against the interests of its users. This is a playbook we’ve seen before, and it seems to work almost every time. For once, let us hope a company sees this extortion attempt for what it is and stands up to it.

Here is the deal. Online platforms that host user content are not liable for copyright infringement done by those users so long as they fulfill the obligations laid out in the Digital Millennium Copyright Act (DMCA). One of those obligations is to give rightsholders an unprecedented ability to have speech removed from the internet, on demand, with a simple notice sent to a platform identifying the offending content. Another is that companies must have some policy to terminate the accounts of “repeat infringers.”

Not content with being able to remove content without a court order, the giant companies that hold the most profitable rights want platforms to do more than the law requires. They do not care that their demands result in other people’s speech being suppressed. Mostly, they want two things: automated filters, and to be paid. In fact, the letter sent to Twitter by those members of Congress asks Twitter to add “content protection technology”—for free—and heavily implies that the just course is for Twitter to enter into expensive licensing agreements with the labels.

Make no mistake, artists deserve to be paid for their work. However, the complaints that the RIAA and record labels make about platforms are less about what individual artists make, and more about labels’ control. In 2020, according to the RIAA, revenues rose almost 10% to $12.2 billion in the United States. And Twitter, whatever else it is, is not where people go for music.

But the reason the RIAA, the labels, and their lobbyists have gone with this tactic is that, up until now, it has worked. Google set the worst precedent possible in this regard. Trying to avoid a fight with major rightsholders, Google voluntarily created Content ID. Content ID is an automated filter that scans uploads to see if any part—even just a few seconds—of the upload matches the copyrighted material in its database. A match can result in either a user’s video being blocked, or monetized for the claiming rightsholder. Ninety percent of Content ID partners choose to automatically monetize a match—that is, claim the advertising revenue on a creator’s video for themselves—and 95 percent of Content ID matches made to music are monetized in some form. That gives small, independent YouTube creators only a few options for how to make a living. Creators can dispute matches and hope to win, sacrificing revenue while they do and risking the loss of their channel. Fewer than one percent of Content ID matches are disputed. Or, they can painstakingly edit and re-edit videos, or avoid including almost any music whatsoever and hope that Content ID doesn’t register a match on static or a cat’s purr.
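From a creator’s point of view, the logic is roughly this (a conceptual sketch, not YouTube’s actual Content ID implementation; the fingerprints and policies below are invented): any matched segment triggers the rightsholder’s chosen policy, and fair use never enters the decision.

```python
# Conceptual sketch only -- not YouTube's Content ID implementation.
# Reference fingerprints and policies are invented for illustration.
REFERENCE_DB = {
    "fp_hit_song_chorus": {"owner": "BigLabel", "policy": "monetize"},
    "fp_studio_trailer":  {"owner": "BigStudio", "policy": "block"},
}

def scan_upload(segment_fingerprints):
    """Return the action taken for each matched segment of an upload."""
    actions = []
    for fp in segment_fingerprints:
        match = REFERENCE_DB.get(fp)
        if match:
            # Fair use is never evaluated here; the match alone decides, and
            # "monetize" redirects the creator's ad revenue to the claimant.
            actions.append((fp, match["owner"], match["policy"]))
    return actions

if __name__ == "__main__":
    # A ten-minute review video that quotes a few seconds of a song:
    print(scan_upload(["fp_reviewer_voiceover", "fp_hit_song_chorus"]))
```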

While any creator has the right to use copyrighted material without paying rightsholders in circumstances where fair use applies, Content ID routinely diverts money away from creators like these to rightsholders in the name of policing infringement. Fair use is an exercise of your First Amendment rights, but Content ID forces you to pay for that right. WatchMojo, one of the largest YouTube channels, estimated that over six years, roughly two billion dollars in ads have gone to rightsholders instead of creators. YouTube does not shy away from this effect. In its 2018 report “How Google Fights Piracy,” the company declares that “the size and efficiency of Content ID are unparalleled in the industry, offering an efficient way to earn revenue from the unanticipated, creative ways that fans reuse songs and videos.” In other words, Content ID allows rightsholders to take money away from creators who are under no obligation to obtain a license for their lawful fair uses.

That doesn’t even include the times these filters just get things completely wrong. Just the other week, a programmer live-streamed his typing and a claim was made for the sound of “typing on a modern keyboard.” A recording of static got five separate notices placed on it by the automated filter. These things don’t work.

YouTube also encourages people to simply use only things they have a license for or that come from a library of free resources. That ignores that there is a fair use right to use copyrighted material in certain cases, and lets companies argue that no one has to use their work without paying since these free options exist.

So, when the labels make a lot of disingenuous noise about how inadequate the DMCA is and how platforms need to do more, they have YouTube to point to as a “voluntary” system that should be replicated. And companies will fold, especially if they end up being inundated with DMCA takedowns—some bogus—and if they think the other option is being required to do it by law, the implicit threat of a letter like the one Twitter received.

This tactic works. Twitch found itself buried under DMCA takedowns last year, handled that poorly, and then, like Twitter, found itself blamed by the RIAA for taking money out of the hands of musicians. Twitch now makes removing music and claimed bits of videos easier, has adopted a repeat infringer policy similar to YouTube’s, and makes deleting clips easier for users. Snap, owner of Snapchat, went the route of getting a license, paying labels to make music available to its users.

Creating a norm of licensed or free music, monetization, or automated filters functionally eviscerates fair use. Even if people have the right to use something, they won’t be able to. On YouTube, reviewers don’t use the clips of the music or movies that are the best example of what they’re talking about—they pick whatever will satisfy the filter. That is not the model we want as a baseline. The baseline should be more protective of legal speech, not less.

Unfortunately, when the tech companies are facing off against the largest rightsholders, it's users who most often lose. Twitter is only the latest target; we hope it becomes the one to stand up for its users.

This Captcha Patent Is An All-American Nightmare

Fri, 08/06/2021 - 2:40pm

A newly formed patent troll is looking for big money from small business websites, just for using free, off-the-shelf login verification tools. 

Defenders of the American Dream, LLC (DAD) is sending out demand letters to websites that use Google’s reCAPTCHA system, accusing them of infringing U.S. Patent No. 8,621,578. Google’s reCAPTCHA is just one form of Captcha, a term that describes a wide array of test systems that websites use to verify human users and keep out bots. 

DAD’s letter tells targeted companies that DAD will take an $8,500 payment, but only if “licensing terms are accepted immediately.” The threat escalates from there. If anyone dares to respond that DAD’s patent might not be infringed, or might be invalid, fees will rise to at least $17,000. If DAD’s patent gets subject to a legal challenge, DAD says they’ll increase their demand to at least $70,000. In the footnotes, DAD advises its targets that “not-for-profit entities are eligible for a discount.” 

The DAD demand letters we have reviewed are nearly identical, with the same fee structure. They mirror the one filed by the company itself (with the fee structure redacted) as part of their trademark application. This demand letter campaign is a perfect example of how the U.S. patent system fails to advance software innovation. Instead, our system enables extortionate behavior like DAD’s exploding fee structure. 

DAD Didn't Invent Image Captcha

DAD claims it invented a novel and patentable image-based Captcha system. But there’s ample evidence of image-based Captcha tests that predate DAD’s 2008 patent application. 

The term “Captcha” was coined by a group of researchers at Carnegie Mellon University in 2000. It’s an acronym, indicating a “Completely Automated Public Turing test to tell Computers and Humans Apart.” Essentially, it blocks automated tools like bots from getting into websites. Such tests have been important since the earliest days of the Internet. 

Early Captcha tests used squiggly lines or wavy text. The same group of CMU researchers who coined “Captcha” went on to work on an image-selection version they called ESP-PIX, which they had published and made public by 2005. 

By 2007, Microsoft had developed its own image-categorization Captcha, which used photos from Petfinder.com and asked users to identify cats and dogs. At the same time, PayPal was working on new Captchas that “might resemble simple image puzzles.” This was no secret—researchers from both companies spoke to the New York Times about their research, and Microsoft filed its own patent application, more than a year before DAD’s. 

There’s also evidence of earlier image-based Captcha tests in the patent record, like this early 2008 application from a company called Binary Monkeys. Here's an image from the Binary Monkeys Patent: 

And here's an image from DAD's patent: 

So how did DAD end up with this patent? During patent prosecution, DAD’s predecessor argued that they had a novel invention because the Binary Monkeys application asks users to select “all images” associated with the task, as opposed to selecting “one image,” as in DAD’s test. The patent examiner suggested adding yet another limitation: that the user still be granted access to the website if they got one “known” image and one “suspected” image. 
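To see how slight the claimed difference is, here is a toy sketch of that limitation under one possible reading of the claim language described above; this is not DAD’s code, and the image identifiers are invented:

```python
# Toy sketch of the claimed limitation under one reading -- not DAD's code.
# Image identifiers are invented for illustration.
def grants_access(selected, known_matches, suspected_matches):
    """Pass if the picks include one 'known' match and one 'suspected' match."""
    picked_known = any(img in known_matches for img in selected)
    picked_suspected = any(img in suspected_matches for img in selected)
    return picked_known and picked_suspected

if __name__ == "__main__":
    # One confirmed cat photo plus one the system merely suspects is a cat:
    print(grants_access({"img_cat_07", "img_maybe_cat_12"},
                        known_matches={"img_cat_07"},
                        suspected_matches={"img_maybe_cat_12"}))
```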

Unfortunately, adding trivial tweaks to existing technology, such as small details about the criteria for passing a Captcha test, can and often does result in a patent being granted. This was especially true back in 2008, before patent examiners had the guidance of the Supreme Court’s 2014 Alice v. CLS Bank decision to apply. That’s why we have told the patent office to vigorously uphold Supreme Court guidelines, and have defended the Alice precedent in Congress.

Where did DAD come from? 

DAD’s patent was originally filed by a Portland startup called Vidoop. In 2010, Vidoop and its patent applications were purchased by a San Diego investor who re-branded it as Confident Technologies. Confident Tech offered a “clickable, image-based CAPTCHA,” but ultimately didn’t make it as a business. In 2017 and 2018, Confident Tech sued Best Buy, Fandango Media, Live Nation, and AXS Group, claiming that the companies infringed its patent by using reCAPTCHA. Those cases all settled.

In 2020, Trevor Coddington, an attorney who worked on Confident Tech’s patent applications, created Defenders of the American Dream LLC. He transferred the patents to this new entity and started sending out demand letters. 

They haven’t all gone to large companies, either. At least one of DAD’s targets has been a one-person online publishing company. Coddington’s letter complains about how Confident Tech failed in the marketplace and suggests that because of this, reCAPTCHA users should pay—well, him. The letter states: 

[O]nce Google introduced its image-based reCAPTCHA for free, no less, [Confident Technologies] was unable to maintain a financially viable business… Google’s efficient infringement forced CTI to abandon operations and any return on the millions of dollars of capital investment used to develop its patented solutions. Meanwhile, your company obtained and utilized the patented technology for free.

Creating new and better Captcha software is an area of ongoing research and innovation. While the lawyers and investors behind DAD have turned to patent threats to make money, other developers are actively innovating and competing with reCAPTCHA. There are competing image-based Captchas like hCaptcha and visualCaptcha, as well as long lists of Captcha alternatives and companies that are trying to make Captchas obsolete.

These individuals and companies are all inventive, but they’re not relying on patent threats to make a buck. They’ve actually written code and shared it online. Unfortunately, because of their real contributions, they’re more likely to end up the victims of aggressive patent-holders like DAD. 

We’ll never patent our way to a better Captcha. Looking at the history of the DAD patent—which shares no code at all—makes it clear why the patent system is such a bad fit for software. 


Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life

Thu, 08/05/2021 - 3:40pm

Apple has announced impending changes to its operating systems that include new “protections for children” features in iCloud and iMessage. If you’ve spent any time following the Crypto Wars, you know what this means: Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn't the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.

To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security.

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these “client-side scanning” functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and may be unable to safely use what, until this development, has been one of the preeminent encrypted messengers.

Apple Is Opening the Door to Broader Abuses

We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.


All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of “misinformation” in 24 hours may apply to messaging services. And many other countries—often those with authoritarian governments—have passed similar laws. Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.

Image Scanning on iCloud Photos: A Decrease in Privacy

Apple’s plan for scanning photos that get uploaded into iCloud Photos is similar in some ways to Microsoft’s PhotoDNA. The main product difference is that Apple’s scanning will happen on-device. The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images transformed so that users cannot see what the image is, and matching done on those transformed images using private set intersection where the device will not know whether a match has been found. This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone. The result of the matching will be sent up to Apple, but Apple can only tell that matches were found once a sufficient number of photos have matched a preset threshold.

Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who determine whether the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user’s account disabled. Again, the bottom line here is that whatever privacy and security aspects are in the technical details, all photos uploaded to iCloud will be scanned.
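Here is a deliberately simplified sketch of the threshold logic described above. Apple’s actual design uses perceptual hashing (“NeuralHash”), private set intersection, and threshold secret sharing, none of which is reproduced here; the hashes and threshold value are placeholders:

```python
# Deliberately simplified sketch -- not Apple's implementation. Apple's design
# uses perceptual hashing ("NeuralHash"), private set intersection, and
# threshold secret sharing; this toy only shows the basic threshold logic.
CSAM_HASH_DB = {"hash_a", "hash_b", "hash_c"}  # placeholder for the NCMEC-derived database
REPORT_THRESHOLD = 10                           # placeholder value

def scan_icloud_uploads(photo_hashes):
    """Count database matches across a user's uploads; flag once over threshold."""
    matches = sum(1 for h in photo_hashes if h in CSAM_HASH_DB)
    # Past the threshold: human review at Apple, then a report to NCMEC and
    # the account is disabled, per the flow described above.
    return matches >= REPORT_THRESHOLD
```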

Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.

Currently, although Apple holds the keys to view Photos stored in iCloud Photos, it does not scan these images. Civil liberties organizations have asked the company to remove its ability to do so. But Apple is choosing the opposite approach and giving itself more knowledge of users’ content.

Machine Learning and Parental Notifications in iMessage: A Shift Away From Strong Encryption

Apple’s second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage. To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect “sexually explicit images.” According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account. In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

Similarly, if the under-13 child receives an image that iMessage deems to be “sexually explicit”, before being allowed to view the photo, a notification will pop up that tells the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent’s device.

This means that if—for instance—a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, the sender does not receive a notification that iMessage considers their image to be “explicit” or that the recipient’s parent will be notified. The recipient’s parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.

Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user’s shoulder—and in the case of under-13s, that’s essentially what Apple has given parents the ability to do.
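Reconstructed from Apple’s public description (this is not Apple’s code; the function and parameter names are ours), the notification rules amount to a small decision table keyed on the classifier’s verdict and the account’s age bracket:

```python
# Conceptual reconstruction of the announced rules -- not Apple's code.
def handle_flagged_image(classifier_flags_explicit: bool, user_age: int,
                         feature_enabled: bool, user_proceeds: bool) -> dict:
    """Decide who is warned and notified for a sent or received image."""
    outcome = {"warn_user": False, "notify_parent": False, "save_for_parent": False}
    if not feature_enabled or not classifier_flags_explicit or user_age >= 18:
        return outcome
    outcome["warn_user"] = True                     # ages 13-17: warning only
    if user_age < 13 and user_proceeds:             # under 13: parent is looped in
        outcome["notify_parent"] = True
        outcome["save_for_parent"] = True           # image saved for parental review
    return outcome
```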


It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly “sexually explicit” content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook’s attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen’s Little Mermaid. These filters have a history of chilling expression, and there’s plenty of reason to believe that Apple’s will do the same.

Since the detection of a “sexually explicit image” will be using on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage “end-to-end encrypted.” Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the “end-to-end” promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company’s stance toward strong encryption.

Whatever Apple Calls It, It’s No Longer Secure Messaging

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that’s not end-to-end encryption. In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans the images that would provide the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users’ devices or send notifications to a wider audience, easily censoring and chilling speech.
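To illustrate the property at stake, here is a minimal sketch using the third-party `cryptography` package and a shared symmetric key. It is not iMessage’s actual protocol, which relies on asymmetric key exchange; the point is only that when the endpoints alone hold the key, a relaying server sees ciphertext it cannot interpret, and a client-side classifier that reports on content out of band reintroduces exactly the channel that end-to-end encryption is meant to eliminate.

```python
# Minimal sketch of the end-to-end property -- not iMessage's protocol.
# Requires the third-party "cryptography" package; uses a shared symmetric key
# for simplicity, where real messengers use asymmetric key exchange.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()        # known only to sender and recipient
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

ciphertext = sender.encrypt(b"meet at 6, bring the protest flyers")

def server_relay(blob: bytes) -> bytes:
    # The relay holds no key -- it learns nothing about the contents.
    return blob

print(recipient.decrypt(server_relay(ciphertext)).decode())
```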

But even without such expansions, this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet’s potential for expanding the world of those whose lives would otherwise be restricted. And because family sharing plans may be organized by abusive partners, it's not a stretch to imagine using this feature as a form of stalkerware.

People have the right to communicate privately without backdoors or censorship, including when those people are minors. Apple should make the right decision: keep these backdoors off of users’ devices.

We Have Questions for DEF CON's Puzzling Keynote Speaker, DHS Secretary Mayorkas

Thu, 08/05/2021 - 2:18pm

The Secretary of Homeland Security Alejandro Mayorkas will be giving a DEF CON keynote address this year. Those attending this weekend’s hybrid event will have a unique opportunity to “engage” with the man who heads the department responsible for surveillance of immigrants, Muslims, Black activists, and other marginalized communities. We at EFF, as longtime supporters of the information security community who have stood toe-to-toe with government agencies including DHS, have thoughts on the areas where Secretary Mayorkas must address digital civil liberties and human rights. So we thought it prudent to suggest some questions you might ask.

If you’re less than optimistic about getting satisfying answers to these from the Secretary, here are some organizations who are actively working to protect the rights of people targeted by the Department of Homeland Security:


Learn more about EFF's virtual participation in DEF CON 29 including EFF Tech Trivia, our annual member shirt puzzle, and a special presentation with author and Special Advisor Cory Doctorow.

16 Civil Society Organizations Call on Congress to Fix the Cryptocurrency Provision of the Infrastructure Bill

Thu, 08/05/2021 - 2:10pm

The Electronic Frontier Foundation, Fight for the Future, Defending Rights and Dissent and 13 other organizations sent a letter to Senators Charles Schumer (D-NY), Mitch McConnell (R-KY), and other members of Congress asking them to act swiftly to amend the vague and dangerous digital currency provision of Biden’s infrastructure bill.

The fast-moving, must-pass legislation is over 2,000 pages and primarily focused on issues such as updating America’s highways and digital infrastructure. However, included in the “pay-for” section of the bill is a provision relevant to cryptocurrencies that includes a new, vague, and expanded definition of what constitutes a “broker” under U.S. tax law. As EFF described earlier this week, this vaguely worded section of the bill could be interpreted to mean that many actors in the cryptocurrency space—including software developers who merely write and publish code, as well as miners who verify cryptocurrency transactions—would suddenly be considered brokers, and thus need to collect and report identifying information on their users.

In the wake of heated opposition from the technical and civil liberties community, some senators are taking action. Senators Wyden, Lummis, and Toomey have introduced an amendment that seeks to ensure that some of the worst interpretations of this provision are excluded. Namely, the amendment would clarify that miners, software developers who do not hold assets for customers, and those who create hardware and software to support consumers in holding their own cryptocurrency would not be implicated under the new definition of broker.

We have already seen how digital currency supports independent community projects, routes around financial censorship, and supports independent journalists around the world. Indeed, the decentralized nature of digital currency is allowing cryptographers and programmers to experiment with more privacy-protective exchanges, and to offer alternatives for those who wish to protect their financial privacy or who have been subject to financial censorship.

The privacy rights of cryptocurrency users are a complex topic. Properly addressing such an issue requires ample opportunity for civil liberties experts to offer feedback on proposals. But there has been no opportunity to do that in the rush to fund this unrelated bill. That’s why the coalition that sent the letter—which includes national groups and local groups representing privacy advocates, journalists, technologists, and cryptocurrency users—shares a common concern about this provision's push to run roughshod over this nuanced issue.

The Wyden-Lummis-Toomey Amendment removes reporting obligations from network participants who don’t have, and shouldn’t have, access to customer information. It does so without affecting the reporting obligations placed on brokers and traders of digital assets.

Read full letter here: https://www.eff.org/document/civil-society-letter-wyden-lummis-and-toomey-amendment-cryptocurrency-provisions

Utilities Governed Like Empires

Tue, 08/03/2021 - 5:06pm

Believe the hype

After decades of hype, it’s only natural for your eyes to skate over corporate mission statements without stopping to take note of them, but when it comes to ending your relationship with these companies, the tech giants’ stated goals take on a sinister cast.

Whether it’s “bringing the world closer together” (Facebook), “organizing the world’s information” (Google), being a market “where customers can find and discover anything they might want to buy online” (Amazon), or making “personal computing accessible to each and every individual” (Apple), the founding missions of the tech giants reveal a desire to become indispensable to our digital lives.

They’ve succeeded. We’ve entrusted these companies with our sensitive data, from family photos to finances to correspondence. We’ve let them take over our communities, from medical and bereavement support groups to little league and service organization forums. We’ve bought trillions of dollars’ worth of media from them, locked in proprietary formats that can’t be played back without their ongoing cooperation.

These services often work great...but when they fail, they fail very, very badly. Tech giants can run servers that support hundreds of millions or billions of users - but they either can’t or won’t bring the same care to the procedures for suspending or terminating those users.

But as bad as tech giants’ content removal and account termination policies are, they’re paragons of sense and transparency when compared to their appeals processes. Many who try to appeal a tech company’s judgment quickly find themselves mired in a Kafkaesque maze of automated emails (to which you often can’t reply), requests for documents that either don’t exist or have already been furnished on multiple occasions, and high-handed, terse “final judgments” with no explanations or appeal.

The tech giants argue that they are entitled to run their businesses largely as they see fit: if you don’t like the house rules, just take your business elsewhere. These house rules are pretty arbitrary: platforms’ public-facing moderation policies are vaguely worded and subject to arbitrary interpretation, and their account termination policies are even more opaque. 

Kafka Was An Optimist

All of that would be bad enough, but when it is combined with the tech companies’ desire to dominate your digital life and become indispensable to your daily existence, it gets much worse.

Losing your cloud account can cost you decades of your family photos. Losing access to your media account can cost you access to thousands of dollars’ worth of music, movies, audiobooks and ebooks. Losing your IoT account can render your whole home uninhabitable, freezing the door locks while bricking your thermostat, burglar alarm and security cameras. 

But really, it’s worse than that: you will incur multiple losses if you get kicked off just one service. Losing your account with Amazon, Google or Apple can cost you access to your home automation and security, your mobile devices, your purchased ebooks/audiobooks/movies/music, and your photos. Losing your Apple or Google account can cost you decades’ worth of personal correspondence - from the last email sent by a long-dead friend to that file-attachment from your bookkeeper that you need for your tax audit. These services are designed to act as your backup - your offsite cloud, your central repository - and few people understand or know how to make a local copy of all the data that is so seamlessly whisked from their devices onto big companies’ servers.

In other words, the tech companies set out to make us dependent on them for every aspect of our online lives, and they succeeded - but when it comes to kicking you off their platforms, they still act like you’re just a bar patron at last call, not someone whose life would be shattered if they cut you off.

YouTubers Warned Us

This has been brewing for a long time. YouTubers and other creative laborers have long suffered under a system where the accounts on which they rely to make their livings could be demonetized, suspended or deleted without warning or appeal. But today, we’re all one bad moderation call away from having our lives turned upside-down.

The tech giants’ conquest of our digital lives is just getting started. Tech companies want to manage our health, dispense our medication, take us to the polls on election day, televise our political debates and teach our kids. Each of these product offerings comes with grandiose pretensions to total dominance - it’s not enough for Amazon Pharmacy to be popular; it must be the most popular, leveraging Amazon’s existing business to cut off your corner druggist’s market oxygen. (Uber’s IPO included a plan to replace all the world’s public transit and taxi vehicles with rideshares.) 

If the tech companies deliver on their promises to their shareholders, then being locked out of your account might mean being locked out of whole swathes of essential services, from buying medicine to getting to work.

Well, How Did We Get Here?

How did the vibrant electronic frontier become a monoculture of “five websites, each consisting of screenshots of text from the other four?” 

It wasn’t an accident. Tech, copyright, contract and competition policy helped engineer this outcome, as did VCs and entrepreneurs who decided that online businesses were only worth backing if they could grow to world-dominating scale.

Take laws like Section 1201 of the Digital Millennium Copyright Act, a broadly worded prohibition on tampering with or removing DRM, even for lawful purposes. When Congress passed the DMCA in 1998, they were warned that protecting DRM - even when no copyright infringement took place - would leave technology users at the mercy of corporations. You may have bought your textbooks or the music you practice piano to, but if it’s got DRM and the company that sold it to you cuts you off, the DMCA does not let you remove that DRM (say goodbye to your media). 

Companies immediately capitalized upon this dangerously broad law: they sold you media that would only play back on the devices they authorized. That locked you into their platform and kept you from defecting to a rival, because you couldn’t take your media with you. 

But even as DRM formats proliferated, the companies that relied on them continued to act like kicking you off their platforms was like the corner store telling you to buy your magazines somewhere else - not like a vast corporate empire of corner stores sending goons  to your house to take back every newspaper, magazine and paperback you ever bought there, with no appeal.

It’s easy to see how the DMCA and DRM give big companies far-reaching control over your purchases, but other laws have had a similar effect. The Computer Fraud and Abuse Act (CFAA), another broadly worded mess of a law, is so badly drafted that tech companies were able to claim for decades that simply violating their terms of service could be  a crime - a chilling claim that was only put to rest by the Supreme Court this summer.

From the start, tech lawyers and the companies they worked for set things up so that most of the time, our digital activities are bound by contractual arrangements, not ownership. These are usually mass contracts, with one-sided terms of service. They’re end user license agreements that ensure that the company has a simple process for termination without any actual due process, much less strong remedies if you lose your data or the use of your devices.  

CFAA, DMCA, and other rules allowing easy termination and limiting how users and competitors could reconfigure existing technology created a world where doing things that displeased a company’s shareholders could literally be turned into a crime - a kind of “felony contempt of business-model.” 

These kinds of shady business practices wouldn’t have been quite so bad if there were a wide variety of small firms that allowed us to shop around for a better deal. 

Unfortunately, the modern tech industry was born at the same moment as American antitrust law was being dismantled - literally. The Apple ][+ appeared on shelves the same year Ronald Reagan hit the campaign trail. After winning office, Reagan inaugurated a 40-year, bipartisan project to neuter antitrust law, allowing incumbents to buy and crush small companies before they could grow to be threats; letting giant companies merge with their direct competitors, and looking the other way while companies established “vertical monopolies” that controlled their whole supply chains.

Without any brakes, the runaway merger train went barrelling along, picking up speed. Today’s tech giants buy companies more often than you buy groceries, and it has turned the tech industry into a “kill-zone” where innovative ideas go to die.

How is it that you can wake up one day and discover you’ve lost your Amazon account, and get no explanation? How is it that this can cost you the server you run your small business on, a decade of family photos, the use of your ebook reader and mobile phone, and access to your entire library of ebooks, movies and audiobooks? 

Simple. 

Amazon is in so many parts of your life because it was allowed to merge with small competitors, create vertical monopolies, wrap its media with DRM - and never take on any obligations to be fair or decent to customers it suspected of some unspecified wrongdoing. 

Not just Amazon, either - every tech giant has an arc that looks like Amazon’s, from the concerted effort to make you dependent on its products, to the indifferent, opaque system of corporate “justice” governing account termination and content removal.

Fix the Tech Companies

Companies should be better. Moderation decisions should be transparent, rules-based, and follow basic due process principles. All of this - and more - has been articulated in detail by an international group of experts from industry, the academy, and human rights activism, in an extraordinary document called The Santa Clara Principles. Tech companies should follow these rules when moderating content, because even if they are free to set their own house rules, the public has the right to tell them when those rules suck and to suggest better ones.

If a company does kick you off its platform - or if you decide to leave - it shouldn’t be allowed to hang onto your data (or just delete it). It’s your data, not theirs. The concept of a “fiduciary” - someone with a duty to “act in good faith” towards you - is well-established. If you fire your lawyer (or if they fire you as a client), they have to give you your files. Ditto your doctor or your mental health professional.

Many legal scholars have proposed “information fiduciary” rules that would impose similar duties on firms that hold your data: a “duty of loyalty” (to act in the best interests of their customers, without regard to the interests of the business) and a “duty of care” (to act in the manner expected by a reasonable customer under the circumstances).

Not only would this go a long way to resolving the privacy abuses that plague our online interactions - it would also guarantee you the right to take your data with you when you left a service, whether that departure was your idea or not. 

Information fiduciary rules aren’t the only way to get companies to be responsible. Direct consumer protection laws - such as requiring companies to make your content readily available to you in the event of termination - could work too, and there are other approaches as well. How these rules would apply would depend on the content a service hosts and on the size of the business you’re dealing with - small companies would struggle to meet the standards we’d expect of giant ones. But every online service should have some duties to you - if the company that just kicked you off its servers and took your wedding photos hostage is a two-person operation, you still want your pictures back!
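To make that concrete, here’s a rough sketch - purely illustrative, not drawn from any existing law or company API - of the kind of machine-readable bundle such a consumer protection rule might require a service to hand over when it terminates your account. Every name in it (UserExport, build_export, the example fields) is hypothetical.

```python
# Hypothetical illustration only: what a "give users their stuff back" rule
# might require a service to produce on termination. None of these names come
# from an actual statute, regulation, or product.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class UserExport:
    user_id: str
    exported_at: str
    reason: str                                     # e.g. "account_terminated" or "user_requested"
    files: list = field(default_factory=list)       # the user's own content (photos, documents)
    purchases: list = field(default_factory=list)   # records of media the user bought
    contacts: list = field(default_factory=list)    # the social graph the user built on the service

def build_export(user_id, reason, files, purchases, contacts) -> str:
    """Bundle everything a departing user should be able to take with them."""
    export = UserExport(
        user_id=user_id,
        exported_at=datetime.now(timezone.utc).isoformat(),
        reason=reason,
        files=list(files),
        purchases=list(purchases),
        contacts=list(contacts),
    )
    return json.dumps(asdict(export), indent=2)

if __name__ == "__main__":
    # The two-person wedding-photo shop and the trillion-dollar giant would hand
    # over the same kind of bundle; only the volume differs.
    print(build_export(
        user_id="alice",
        reason="account_terminated",
        files=["photos/wedding-001.jpg", "photos/wedding-002.jpg"],
        purchases=[{"title": "An Ebook", "format": "epub"}],
        contacts=["bob@example.com"],
    ))
```

The point isn’t this particular format - it’s that handing back a bundle like this is cheap at any scale, which is why the heavier obligations can be reserved for the giants while the basic duty applies to everyone.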

Fix the Internet

Improving corporate behavior is always a laudable goal, but the real problem with giant companies that are entwined in your life in ways you can’t avoid isn’t that those companies wield their incredible power unwisely. It’s that they have that power in the first place.

To give power to internet users, we have to take it away from giant internet companies. The FTC - under new leadership - has pledged to end decades of waving through anticompetitive mergers. That’s just for openers, though. Competition scholars and activists have made the case for the harder task of breaking up the giants, literally cutting them down to size.

But there’s more. Congress is considering the ACCESS Act, landmark legislation that would force the largest companies to interoperate with privacy-respecting new rivals, who’d be banned from exploiting user data. If the ACCESS Act passes, it will dramatically lower the high switching costs that keep us locked into big platforms even though we don’t like the way they operate. It also protects folks who want to develop tools to make it easier for you to take your data when you leave, whether voluntarily or because your account is terminated.

That’s how we’ll turn the internet back into an ecosystem of companies, co-ops and nonprofits of every size that can take receipt of your data, and offer you an online base of operations from which you can communicate with friends, communities and customers regardless of whether they’re on the indieweb or inside a Big Tech silo.

That still won’t be enough, though. The fact that terms of service, DRM, and other technologies and laws can prevent third parties from supplying software for your phone, playing back the media you’ve bought, and running the games you own still gives big companies too much leverage over your digital life.

That’s why we need to restore the right to interoperate, in all its guises: competitive compatibility (the right to plug new products and services into existing ones, with or without permission from their manufacturers), bypassing DRM (we’re suing to make this happen!), the right to repair (a fight we’re winning!) and an end to abusive terms of service (the Supreme Court got this one right).

Digital Rights are Human Rights

When we joined this fight 30 long years ago, very few people got it. Our critics jeered at the very idea of “digital rights” - as if the nerdfights over Star Trek forums could somehow be compared to history’s great struggles for self-determination and justice! Even a decade ago, the idea of digital rights was still met with eye-rolling and skepticism.

But we didn’t get into this to fight for “digital rights” - we’re here to defend human rights. The merger of the “real world” and the “virtual world” could be argued over in the 1990s, but not today, not after a lockdown where the internet became the nervous system for the planet, a single wire we depended on for free speech, a free press, freedom of assembly, romance, family, parenting, faith, education, employment, civics and politics.

Today, everything we do involves the internet. Tomorrow, everything will require it. We can’t afford to let our digital citizenship be reduced to a heavy-handed mess of unreadable terms of service and broken appeals processes.

We have the right to a better digital future - a future where the ambitions of would-be monopolists and their shareholders take a back seat to fairness, equity, and your right to self-determination.

Flex Your Power. Own Your Tech.

Tue, 08/03/2021 - 12:26pm

Before advanced computer graphics, a collection of clumsy pixels would represent ideas far more complex than technology could capture on its own. With a little imagination, crude blocks on a screen could transform into steel titans and unknown worlds. It’s that spirit of creativity and vision that we celebrate each year at the Las Vegas hacker conferences—BSidesLV, Black Hat, and DEF CON—and beyond.

The Electronic Frontier Foundation has advised tinkerers and security researchers at conferences like these for decades because human ingenuity is faster and hungrier than the contours of the law. Copyright, patent, and hacking statutes often conflict with legitimate activities for ordinary folks, driving EFF to help fill the all-too-common gap in people's understanding of technology and the law. Thankfully, support from the public has allowed EFF to continue leading efforts to even the playing field for everyone. It brings us all closer to EFF's ambitious, and increasingly urgent, view of the future: one where creators keep civil liberties and human rights at the center of technology. You can help us build that future as an EFF member.

Stand with EFF

Join EFF and Protect Online Freedom

In honor of this week's hacker conferences, EFF’s annual mystery-filled DEF CON t-shirt is available to everyone, but it won’t last long! Our DC29 Pixel Mech design is a reminder that simple ideas can have colossal potential. Like previous years' designs, there's more than meets the eye.

EFF members’ commitment to tech users and creators is more necessary each day. Together we've been able to develop privacy-enhancing tools like Certbot to encrypt more of the web; work with policymakers to support fiber broadband infrastructure; beat back dangerous and invasive public-private surveillance partnerships; propose user-focused solutions to big tech strangleholds on your free expression and consumer choice; and rein in oppressive tech laws like the CFAA, which we just fought, and won, in the U.S. Supreme Court.

Just as a good hacker sees worlds of possibility in plastic, metal, and pixels, we must all envision and work for a future that’s better than what we’re given. It doesn't matter whether you're an OG cyberpunk phreaker or you just enjoy checking out the latest viral dance moves: we all benefit from a web that empowers users and supports curiosity and creativity online. Support EFF's vital work this year!


Viva Las Vegas, wherever you are.

___________________

EFF is a U.S. 501(c)(3) nonprofit with a top rating from Charity Navigator. Your gift is tax-deductible as allowed by law. You can even support EFF all year with a convenient monthly donation!

The Cryptocurrency Surveillance Provision Buried in the Infrastructure Bill is a Disaster for Digital Privacy

Mon, 08/02/2021 - 5:35pm

The forthcoming Senate draft of Biden's infrastructure bill—a 2,000+ page bill designed to update the United States’ roads, highways, and digital infrastructure—contains a poorly crafted provision that could create new surveillance requirements for many within the blockchain ecosystem. This could include developers and others who do not control digital assets on behalf of users.

While the language is still evolving, the proposal would expand the definition of “broker” under section 6045(c)(1) of the Internal Revenue Code of 1986 to include anyone who is “responsible for and regularly providing any service effectuating transfers of digital assets” on behalf of another person. These newly defined brokers would be required to comply with the IRS’s reporting requirements for brokers, including filing 1099 forms. That means they would have to collect user data, including users’ names and addresses.

The broad, confusing language leaves open a door for almost any entity within the cryptocurrency ecosystem to be considered a “broker”—including software developers and cryptocurrency startups that aren’t custodying or controlling assets on behalf of their users. It could even potentially implicate miners, those who confirm and verify blockchain transactions. The mandate to collect names, addresses, and transactions of customers means almost every company even tangentially related to cryptocurrency may suddenly be forced to surveil their users. 
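To see how much surveillance that entails, here is a rough sketch of the per-customer dossier every newly defined “broker” would effectively have to compile before it could report anything to the IRS. The field and function names are our own illustration, not language from the bill or any IRS form.

```python
# Illustrative only: the kind of customer record the provision would force
# "brokers" to keep. These structures are not defined in the bill or by the IRS.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalAssetTransfer:
    date: str             # when the transfer was "effectuated"
    asset: str            # e.g. "BTC", "ETH"
    amount: float
    counterparty: str     # the other side of the transfer, if it is even knowable

@dataclass
class CustomerRecord:
    legal_name: str       # personal data a miner or wallet developer never sees
    street_address: str
    taxpayer_id: str
    transfers: List[DigitalAssetTransfer] = field(default_factory=list)

def annual_report(customers: List[CustomerRecord]) -> List[dict]:
    """Flatten each customer's activity into rows resembling an information return."""
    rows = []
    for c in customers:
        for t in c.transfers:
            rows.append({
                "name": c.legal_name,
                "address": c.street_address,
                "tin": c.taxpayer_id,
                "asset": t.asset,
                "amount": t.amount,
                "date": t.date,
            })
    return rows
```

For miners and many developers, most of these fields simply don’t exist - they never learn a user’s legal name, address, or taxpayer ID - which is why, as discussed below, compliance would be impossible for them.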

How this would work in practice is still very much an open question. Indeed, perhaps this extremely broad interpretation was not even the intent of the drafters of this language. But given the rapid timeline for the bill’s likely passage, those answers may not be resolved before it hits the Senate floor for a vote.

Some may wonder why an infrastructure bill primarily focused on topics like highways is even attempting to address as complex and evolving a topic as digital privacy and cryptocurrency. The provision is buried in the section of the bill that explains how to pay for everything else. In general, bills that seek to offer new government services must explain how the government will pay for those services - either by raising taxes or by somehow improving tax compliance. The cryptocurrency provision in this bill attempts the latter. The argument is that by engaging in more rigorous surveillance of the cryptocurrency community, the Biden administration will see more tax revenue flow in without actually increasing taxes, and thus be able to cover $28 billion of its $2 trillion infrastructure plan. Basically, it presumes that huge swaths of cryptocurrency users are engaged in mass tax avoidance, without providing any evidence of that.

Make no mistake: there is a clear and substantial harm in ratcheting up financial surveillance and forcing more actors within the blockchain ecosystem to gather data on users. Including this provision in the infrastructure bill will: 

  • Require new surveillance of everyday users of cryptocurrency;
  • Force software creators and others who do not custody cryptocurrency for their users to implement cumbersome surveillance systems or stop offering services in the United States;
  • Create more honeypots of private information about cryptocurrency users that could attract malicious actors; and
  • Create more legal complexity to developing blockchain projects or verifying transactions in the United States—likely leading to more innovation moving overseas.

Furthermore, it is impossible for miners and developers to comply with these reporting requirements; these parties have no way to gather that type of information. 

The bill could also create uncertainty about the ability to conduct cryptocurrency transactions directly with others, via open source code (e.g. smart contracts and decentralized exchanges), while remaining anonymous. The ability to transact directly with others anonymously is fundamental to civil liberties, as financial records provide an intimate window into a person's life.

This poor drafting appears to be yet another example of lawmakers failing to understand the underlying technology used by cryptocurrencies. EFF has long advocated for Congress to protect consumers by focusing on malicious actors engaged in fraudulent practices within the cryptocurrency space. However, overbroad and technologically disconnected cryptocurrency regulation could do more harm than good. Blockchain projects should serve the interests and needs of users, and we hope to see a diverse and competitive ecosystem where values such as individual privacy, censorship-resistance, and interoperability are designed into blockchain projects from the ground up. Smart cryptocurrency regulation will foster this innovation and uphold consumer privacy, not surveil users while failing to do anything meaningful to combat fraud.

EFF has a few key concepts we’ve urged Congress to adopt when developing cryptocurrency regulation, specifically that any regulation:

  • Should be technologically neutral;
  • Should not apply to those who merely write and publish code;
  • Should provide protections for individual miners, merchants who accept cryptocurrencies, and individuals who trade in cryptocurrency as consumers;
  • Should focus on custodial services that hold and trade assets on behalf of users;
  • Should provide an adequate on-ramp for new services to comply;
  • Should recognize the human right to privacy;
  • Should recognize the important role of decentralized technologies in empowering consumers; and
  • Should not chill future innovation that will benefit consumers.

The poorly drafted provision in Biden’s infrastructure bill fails our criteria across the board.

The Senate should act swiftly to modify or remove this dangerous provision. Getting cryptocurrency regulation right means ensuring an opportunity for public engagement and nuance—and the breakneck timeline of the infrastructure bill leaves no chance for either.

DHS’s Flawed Plan for Mobile Driver’s Licenses

Thu, 07/29/2021 - 6:39pm

Digital identification can invade our privacy and aggravate existing social inequities. Designed wrong, it might be a big step towards national identification, in which every time we walk through a door or buy coffee, a record of the event is collected and aggregated. Also, any system that privileges digital identification over traditional forms will disadvantage people already at society’s margins.

So, we’re troubled by proposed rules on “mobile driver’s licenses” (or “mDLs”) from the U.S. Department of Homeland Security. And we’ve joined with the ACLU and EPIC to file comments that raise privacy and equity concerns about these rules. The stakes are high, as the comments explain:

By making it more convenient to show ID and thus easier to ask for it, digital IDs would inevitably make demands for ID more frequent in American life. They may also lead to the routine use of automated or “robot” ID checks carried out not by humans but by machines, causing such demands to proliferate even more. Depending on how a digital ID is designed, it could also allow centralized tracking of all ID checks, and raise other privacy issues. And we would be likely to see demands for driver’s license checks become widespread online, which would enormously expand the tracking information such ID checks could create. In the worst case, this would make it nearly impossible to engage in online activities that aren’t tied to our verified, real-world identities, thus hampering the ability to engage in constitutionally protected anonymous speech and facilitating privacy-destroying persistent tracking of our activities and associations.

Longer-term, if digital IDs replace physical documents entirely, or if physical-only document holders are placed at a disadvantage, that could have significant implications for equity and fairness in American life. Many people do not have smartphones, including many from our most vulnerable communities. Studies have found that 15 percent of the population does not own a smartphone, including almost 40 percent of people over 65 and 24 percent of people who make less than $30,000 a year.

Finally, we are concerned that the DHS proposal layers REAL ID requirements onto mDLs. REAL ID has many privacy problems, which should not be carried over into mDLs. Moreover, an mDL issued by a state DMV would address forgery and cloning concerns on its own, without the need for REAL ID and its privacy problems.

Texas AG Paxton's Retaliatory Investigation of Twitter Goes to Ninth Circuit

Thu, 07/29/2021 - 5:21pm

Governments around the world are pressuring websites and social media platforms to publish or censor particular speech with investigations, subpoenas, raids, and bans. Burdensome investigations are enough to coerce targets of this retaliation into following a government’s editorial line. In the US, longstanding First Amendment law recognizes and protects people from this chilling effect. In an amicus brief filed July 23, EFF, along with the Center for Democracy and Technology and other partner organizations, urged the Ninth Circuit Court of Appeals to apply this law and protect Twitter from a retaliatory investigation by Texas Attorney General Ken Paxton.

After Twitter banned then-President Trump following the January 6 riots at the U.S. Capitol, Paxton issued a Civil Investigative Demand (CID) to Twitter (and other major online platforms) for, among other things, any documents relating to its terms of use and content moderation practices. The CID alleged "possible violations" of Texas's deceptive practices law.

The district court allowed Paxton's investigation to proceed because, even if it was retaliatory, Paxton would have to go to court to enforce the CID: in other words, Twitter had to let the investigation play out before it could sue. But as our brief explains, courts have recognized that “even pre-enforcement, threatened punishment of speech has a chilling effect.” You don't have to wait before going to court when your free expression is being chilled.

Access to online platforms with different rules and environments generally benefits users, though the brief also points out that EFF and partner organizations have criticized platforms for removing benign posts, censoring human rights activists and journalists, and other bad content moderation practices. We have to address those mistakes, but not through chilling government investigations.

The Bipartisan Broadband Bill: Good, But It Won’t End the Digital Divide

Thu, 07/29/2021 - 3:53pm

The U.S. Senate is on the cusp of approving an infrastructure package, which passed a critical first vote last night by 67-32. Negotiations on the final bill are ongoing, but late yesterday NBC News obtained the draft broadband provisions. There is a lot to like in them - though much will depend on decisions by state governments and the Federal Communications Commission (FCC) - and some drawbacks. Assuming that what was released makes it into the final bill, here is what to expect.

Not Enough Money to Close the Digital Divide Across the U.S.

We have long advocated, backed up by evidence, for a plan that would connect every American to fiber. It is a vital part of any nationwide communications policy that intends to actually function in the 21st century. The future is clearly heading towards more symmetrical uses that will require more bandwidth at very low latency. Falling short of that will inevitably create a new digital divide, this one between those with 21st-century access and those without. Fiber-connected people will head towards the cheaper, symmetrical, multi-gigabit era while others are stuck on capacity-constrained, expensive legacy wires. This “speed chasm” will create a divide between those who can participate in an increasingly remote, telecommuting world and those who cannot.

Most estimates put the price tag of universal fiber at $80 to $100 billion, but this bipartisan package proposes only $40 billion in total for construction. That shortfall means many areas will not get the funding they need to deliver fiber - or really any broadband access - to the millions of Americans who lack it.

Congress can rectify this shortfall in the future with additional infusions of funding, as well as with a stronger emphasis on treating fiber as infrastructure rather than purely as a broadband service. But it should be clear what it means not to do so now. Some states will do very well under this proposal, because the federal effort will complement already-existing state efforts. For example, California already has a state universal fiber effort underway that recruits all local actors to work with the state to deliver fiber infrastructure. More federal dollars will just augment an already very good thing there. But other states may, unfortunately, get duped into building out or subsidizing slow networks that will inevitably need to be replaced. That will cost the state and federal governments more money in the end. This isn’t fated to happen, but it’s a risk invited by the legislation’s adoption of 100/20 Mbps as the build-out metric instead of 100/100 Mbps.

Protecting the Cable Monopolies Instead of Giving Us What We Need

Lobbyists for the slow legacy internet access companies descended on Capitol Hill with a range of arguments trying to dissuade Congress from creating competition in neglected markets, which in turn would force existing carriers to provide better service. Everyone will eventually need access to fiber-optic infrastructure. Our technical analysis has made clear that fiber is the superior medium for 21st-century broadband, which is why government infrastructure policy needs to be oriented around pushing fiber into every community.

Even major wireless industry players now agree that fiber is “inextricably linked” with future high-speed wireless connectivity. But all of this was very inconvenient for existing legacy monopolies. Most notably, cable stood to lose if too many people got faster, cheaper internet from someone else. The legislation includes provisions that effectively insulate the underinvested cable monopoly markets from federal dollars. That, arguably, is the worst outcome here.

By defining internet access as the ability to get 100/20 Mbps service, the draft language allows cable monopolies to argue that anyone who can already get ancient, insufficient service doesn’t need federal money to build new infrastructure. That means federal dollars can’t be used to build fiber in communities stuck with nearly decade-old DOCSIS 3.0 broadband. Copper-DSL-only areas, and areas entirely without broadband, will likely take the lion’s share of the $40 billion made available. In addition to rural areas, pockets of urban markets where people still lack broadband will qualify. This will lead to an absurd result: people on inferior, too-expensive cable service will be treated as just as well served as their neighbors who get federally funded fiber.
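Here’s a simplified sketch of how that threshold choice plays out. The speed figures and the “fundable”/“shielded” labels are illustrative - the bill’s actual eligibility rules are more detailed - but the basic mechanism is the same: set the bar at 100/20 and asymmetric cable areas count as served; set it at 100/100 and they don’t.

```python
# Illustrative sketch, not the bill's text: how an eligibility threshold decides
# which areas can receive construction dollars. Speeds are Mbps (download, upload)
# for the best service already available in an area.
def eligible_for_funding(download, upload, threshold=(100, 20)):
    """An area is treated as unserved - and thus fundable - only if its best
    existing service falls below the threshold in either direction."""
    return download < threshold[0] or upload < threshold[1]

areas = {
    "copper DSL only":       (25, 3),
    "decade-old DOCSIS 3.0": (300, 20),   # a typical asymmetric cable tier (assumed figures)
    "no broadband at all":   (0, 0),
}

for name, (down, up) in areas.items():
    at_100_20 = "fundable" if eligible_for_funding(down, up, (100, 20)) else "shielded"
    at_100_100 = "fundable" if eligible_for_funding(down, up, (100, 100)) else "shielded"
    print(f"{name:22s}  at 100/20: {at_100_20}   at 100/100: {at_100_100}")
```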

The Future-Proofing Criteria Are Essential to Help Avoid Wasting These Investments

The proposal establishes a priority (not a mandate) for future-proof infrastructure, which is essential to keep the 100/20 Mbps speed, or something close to it, from becoming the standard. Legacy industry was fond of telling Congress to be “technology neutral” in its policy, when what it was really asking was for Congress to create a program that subsidized obsolete connections by lowering the bar. The future-proofing provision helps avoid that outcome by establishing federal priorities for the broadband projects being funded (see below).

This is where things will be challenging in the years to come. The Biden Administration has been crystal clear about the link between fiber infrastructure and future-proofing in the Treasury guidelines that implemented the broadband provisions of the American Rescue Plan. But the bipartisan bill gives a lot of discretion to the states to distribute the funds. Without a doubt, the same lobby that descended on Congress to argue against 100/100 Mbps will try to con state governments into believing that any infrastructure can deliver these goals. That is just not true as a matter of physics. States that understand this will push fiber, and they are given the flexibility to do so here.

Digital Discrimination Rules

Under the section titled “digital discrimination,” the bill requires the FCC to establish what it means to have equal access to broadband and, more importantly, what a carrier would have to do to violate such a requirement. This provision carries major possibilities, but much depends on whom the president nominates to run the FCC, since setting those rules will be the agency’s responsibility. If done right, it can set the stage for addressing digital redlining in certain urban communities and push fiber on equitable terms.

If the FCC gets the regulation right, the most direct beneficiaries are likely to be city broadband users who have been left behind - even in big cities with profitable markets. For example, by the city’s own internal analysis, San Francisco has approximately 100,000 people who lack broadband (most of whom are low-income and predominantly people of color), even though they are surrounded by Comcast and AT&T fiber deployments in that same city. The same is true in various other major cities, per numerous studies, which is why EFF has called for a ban on digital redlining at both the state and federal levels.
