EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

SHOP SAFE Is Another Attempt to Fix Big Tech That Will Mostly Harm Small Players and Consumers

Fri, 09/24/2021 - 10:53pm

Congress is once again trying to fix a very specific problem with a broad solution. We support the SHOP SAFE Act’s underlying goal of protecting consumers from unsafe and defective counterfeit products. The problem is that SHOP SAFE tackles the issue in a way that would make it incredibly difficult for small businesses and individuals to sell anything online. It would do little to stop sophisticated counterfeiters and would ultimately do consumers more harm than good, obstructing competition and hindering consumers’ ability to resell their own used goods.

Think about trying to sell something used online. Say you have a wool sweater that’s still in great condition but just doesn’t make sense for you anymore. Maybe you moved from Denver to Miami. So, as many of us do these days, you list your sweater online. You put it on eBay or Facebook Marketplace. Or a friend says they know someone who wants it and puts you in touch via email. You exchange the sweater for some cash, and everyone’s happy.

Now imagine that before you can make that sale, you have to send eBay (or Facebook, or your email provider) a copy of your government ID. And verify that you took “reasonable steps,” whatever that means, to make sure the sweater isn’t a counterfeit. And state in your listing where the sweater was made, or if you don’t know, tell the platform all the steps you took to try to figure that out. And carefully word your listing to avoid anything that might get it caught in an automated trademark filter. At this point, you might reasonably decide to just chuck the sweater in the trash rather than jump through all these hoops.

That’s the regime SHOP SAFE threatens to create.

SHOP SAFE Is Bad for the Little Guy

It’s easy, conceptually, to collapse the world of online selling to just Amazon. But that isn’t the reality. Laws written with only Amazon in mind will solidify Amazon’s dominance by imposing burdens that are onerous for small players to meet. And while the requirements of the bill are clearly geared towards large marketplaces like Amazon, the universe of platforms it would apply to is much broader. The current bill language could be interpreted to cover anything from Craigslist to Gmail—basically any online service that can play a role in advertising, selling, or delivering goods. This isn’t just an aggressive reading that we came up with; at least two anti-counterfeiting organizations supporting SHOP SAFE have urged Congress to make sure it applies even to Facebook Messenger and WhatsApp.

SHOP SAFE would make all of these platforms liable for counterfeiting by their users unless they take certain measures. Technically the bill only creates liability for counterfeiting of products that “implicate health and safety,” but the definition of that term is so broad it could be read to cover just about anything. For example, it could arguably cover your wool sweater because some people have wool allergies. Sure, you could make a case that the definition should be read more narrowly. But platforms don’t want to end up in the position of needing to make that case, so you can bet their legal departments will err on the safe side.

One measure platforms would have to take under SHOP SAFE is verifying the identity, address, and contact information of any third-party seller who uses their services. Imagine if you had to provide a copy of your driver’s license to Craigslist just to advertise your garage sale or sell a used bike. As over the top as that seems, it’s even worse when you think about how this would apply to services like Gmail or Facebook. Should you really have to provide ID to open an email account, just in case you sell something using it? Requirements like this threaten not only competition but user privacy, too.

Other provisions of SHOP SAFE put the burden of rooting out counterfeits on platforms, rather than on the trademark holders who are in the best position to know a real from a fake. Most concerning to us is the requirement that platforms implement “proactive technological measures” for pre-screening listings. This provision echoes calls for mandatory automated content filtering in the copyright context. We’ve written extensively about the problems with filtering mandates, including filters’ inability to tell infringing from noninfringing uses and their prohibitive cost to all but the largest platforms. The same concerns apply here. For example, listings for genuine used goods could easily be caught by a filtering system, as could any listing that compares one product to another or identifies compatible products. Plus, many trademarks consist entirely of one or two dictionary words—meaning any filtering technology could easily block listings as suspicious just because the product description included words that happen to be someone’s trademark.
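The false-positive problem is easy to demonstrate. Here is a deliberately naive Python sketch of the kind of keyword-based pre-screening filter such a mandate would encourage; the trademarks and listings are hypothetical, but the failure modes are exactly the ones described above:

    # Toy pre-screening filter; real filters are more elaborate but share
    # the same core problem. The mark list is hypothetical and, like many
    # real trademarks, consists of ordinary dictionary words.
    TRADEMARKS = {"delta", "shell", "apple", "patagonia"}

    def flagged(listing: str) -> bool:
        """Flag any listing that mentions a trademarked term."""
        words = {w.strip(".,!?'\"").lower() for w in listing.split()}
        return not TRADEMARKS.isdisjoint(words)

    # A genuine used-goods listing and a compatibility note both trip the
    # filter, while a counterfeit listing can simply avoid the word.
    print(flagged("Genuine used Patagonia wool sweater, barely worn"))    # True
    print(flagged("Replacement gasket, fits Delta kitchen faucets"))      # True
    print(flagged("Luxury designer sweaters, brand new, wholesale lots")) # False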

SHOP SAFE requires platforms to implement all of these measures “at no cost” to trademark holders. So either those costs will be passed on to the third-party sellers or absorbed by a platform that has money to burn. For smaller platforms that serve small businesses or individual sellers, either option would be untenable. If these platforms can’t survive, that means fewer choices for consumers.

Is SHOP SAFE a DMCA for Trademarks? No, It’s Worse.

In discussions of SHOP SAFE, some have compared it to the DMCA’s notice-and-takedown regime for addressing online copyright infringement.  SHOP SAFE does share some features with the DMCA. Like the DMCA, SHOP SAFE would give rightsholders leverage to get content taken off the internet based only on their say-so. It also requires platforms to suspend—and then ban—sellers who have been “reasonably determined” to have repeatedly used a counterfeit mark. That doesn’t necessarily mean a court finding, just a determination by a platform. In the DMCA context, the fear of losing an account has been a powerful deterrent to asserting rights based on fair use or other defenses.

But SHOP SAFE’s requirements go far beyond the DMCA’s, while lacking safeguards like a counternotice procedure and penalties for bad-faith takedowns. SHOP SAFE also takes the DMCA’s safe harbor structure and flips it upside down. The DMCA incentivizes platforms to adopt certain policies and practices by providing a true safe harbor—that is, platforms that choose to satisfy the safe harbor requirements can be confident that they cannot be held liable for infringement by their users. SHOP SAFE doesn’t work this way.  Instead, it creates a new, independent basis for secondary infringement liability, and it directs that all covered platforms must implement a range of practices or else be held liable for any trademark infringement by their users.  The DMCA’s safe harbor framework is preferable because it incentivizes desired behavior while maintaining flexibility for different approaches by different platforms according to their unique characteristics.

We do want to protect consumers, but this isn’t the way to do it. Laws for holding marketplaces like Amazon accountable when consumers get hurt already exist. SHOP SAFE is an imprecise, destructive approach to preventing sales of dangerous products, and there’s little reason to think the benefits would outweigh the costs to competition and consumer choice.  Let’s not hurt consumers with a law that’s supposed to help them.

Digital Rights Updates with EFFector 33.6

Thu, 09/23/2021 - 1:31pm

Want the latest news on your digital rights? Then you’ve come to the right place! Version 33, issue 6 of EFFector, our monthly-ish newsletter, is out now! Catch up on the latest EFF news, from our protests at Apple stores to celebrating that HTTPS is actually everywhere, by reading our newsletter or listening to the new audio version below.

Listen on the Internet Archive:

EFFECTOR 33.06 - Why EFF Flew A Plane Over Apple's Headquarters

Make sure you never miss an issue by signing up to receive EFFector by email as soon as it’s posted! Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Colorado Supreme Court Rules Three Months of Warrantless Video Surveillance Violates the Constitution

Wed, 09/22/2021 - 4:52pm

EFF Legal Intern Hannah Donahue co-wrote this post.

Last week, the Colorado Supreme Court ruled, in a case called People v. Tafoya, that three months of warrantless continuous video surveillance outside a home by the police violated the Fourth Amendment. We, along with the ACLU and the ACLU of Colorado, filed an amicus brief in the case.

The police, after receiving a tip about possible drug activity, attached a camera to a utility pole across from Rafael Tafoya’s home that captured views of his front yard, driveway, and back yard. The back yard and part of the driveway were enclosed within a six-foot-high privacy fence, which shielded them from the view of passersby. However, the fence did not block the view from the high vantage of the utility pole. The police could observe a live video feed of the area and could remotely pan, tilt, and zoom the camera. They also stored the footage indefinitely, making it available for later review at any time.

At trial, Tafoya moved to suppress all evidence resulting from the warrantless pole camera surveillance, arguing that it violated the Fourth Amendment. The trial court denied the motion, and Tafoya was convicted on drug trafficking charges. A division of the court of appeals reversed, agreeing with Tafoya that the surveillance was unconstitutional.

Last week, Colorado’s Supreme Court upheld the court of appeals opinion, finding the continuous, long-term video surveillance violated Tafoya’s reasonable expectation of privacy. Citing United States v. Jones and Carpenter v. United States, the court stated: “Put simply, the duration, continuity, and nature of surveillance matter when considering all the facts and circumstances in a particular case.” The court held that 24/7 surveillance for more than three months represented a level of intrusiveness that “a reasonable person would not have anticipated.”

This ruling is in line with a recent opinion from the Massachusetts Supreme Judicial Court in another case involving long-term pole camera surveillance: Commonwealth v. Mora. In that case, the state’s highest court held that the surveillance violated Massachusetts’ state constitutional equivalent to the Fourth Amendment. The Mora court recognized that advances in law enforcement officers’ ability to monitor spaces exposed to public view should not necessarily diminish peoples’ subjective expectations of privacy. As in Tafoya, the court held that the extended duration and continuous nature of the surveillance mattered. Even where people “subjectively may lack an expectation of privacy in some discrete actions they undertake in unshielded areas around their homes, they do not expect that every such action will be observed and perfectly preserved for the future.” We filed amicus briefs in Mora as well as in an earlier federal district court case from Washington state, United States v. Vargas, which preceded Carpenter but held similarly.

However, several other courts have held that pole camera surveillance—even for periods of time much longer than three months—is constitutionally acceptable. For example, the Seventh Circuit held recently in United States v. Tuggle that police use of a pole camera to surveil a defendant’s home for 18 months did not violate the Fourth Amendment because the surveilled area was fully exposed to public view. The Tuggle court expressed serious reservations, though, about what its decision could mean for the trajectory of government surveillance technologies. Similarly, a panel of the First Circuit in United States v. Moore-Bush overturned a district court’s decision holding that eight months of warrantless pole camera surveillance violated the Fourth Amendment. The First Circuit granted en banc review of the panel decision, and we are currently waiting for the court to issue its opinion.

One concern with the Colorado Supreme Court’s ruling in Tafoya is its extensive focus on the fact that Tafoya maintained a six-foot privacy fence around his backyard and driveway as evidence of his subjective expectation of privacy. We argued in our amicus brief that the presence of such physical barriers should not be a determining factor because this standard would disproportionately harm people of lesser means. Basing a person’s expectation of privacy on their ability to obscure their property from view would mean that only those who live in wealthy communities—where they can build a fence, where their properties are set back far enough from poles, or where utility lines are buried underground—would be protected from pole camera surveillance. People who cannot afford to build privacy fences or who are not allowed to do so (those who rent or who live in multi-unit residential buildings, for example) would be disproportionately and negatively impacted by such a rule.

We also voiced these concerns in the amicus briefs we filed in Mora and Moore-Bush. The Mora court acknowledged these concerns, explaining that “a resource-dependent approach” undermines protections against warrantless searches by requiring people to “erect physical barriers around their residences before invoking the protections of the Fourth Amendment.” The court stated it would “not undermine [] long-held egalitarian principles” that constitutional rights should apply equally to even the poorest people.

We are following this issue closely and will continue to argue that warrantless residential pole-camera surveillance violates the Fourth Amendment and disproportionately harms disadvantaged communities.

Related Cases: US v. Jones; Carpenter v. United States

Stop Military Surveillance Drones from Coming Home

Tue, 09/21/2021 - 5:49pm

A federal statute authorizes the Pentagon to transfer surveillance technology, among other military equipment, to state and local police. This threatens privacy, free speech, and racial justice.

So Congress should do the right thing and enact Representative Ayanna Pressley’s amendment, Moratorium on Transfer of Controlled Property to Enforcement Agencies, to H.R. 4350, the National Defense Authorization Act for Fiscal Year 2022 (NDAA22). It would greatly curtail the amount of dangerous military equipment, including surveillance drones, that could be transferred to local and state law enforcement agencies through the Department of Defense’s “1033 program.” That program has already placed $7.4 billion in military equipment with police departments since 1990.

The program includes both “controlled” property, such as weapons and vehicles, and “uncontrolled” property, such as first aid kits and tents. Pressley’s amendment would prevent the transfer of all “controlled” property, which includes “unmanned aerial vehicles,” or drones. Controlled property also includes:

  • Manned aircraft
  • Wheeled armored vehicles
  • Command and control vehicles
  • Specialized firearms and ammunition under .50 caliber
  • Breaching apparatus
  • Riot batons and shields

Even without the Department of Defense sending drones into our communities, police use of these autonomous flying robots is rapidly expanding. Some police departments are so eager to get their hands on drones that they’ve claimed they need them to help fight COVID-19. The Chicago Police Department even launched a massive drone program using only off-the-books money taken through civil asset forfeiture.

 We know what will happen if police get their hands on more and more military surveillance drones. Technology given out on the condition that it can only be used in “extreme” circumstances often ends up being used in everyday acts of over-policing. And police have already used drones to monitor how people exercise their First Amendment-protected rights. 

After the New York City Police Department accused one activist, Derrick Ingram, of injuring an officer’s ears by speaking too loudly through his megaphone at a protest, police flew drones by his apartment window—a clear act of intimidation against activists and protestors. The government also flew surveillance drones over multiple protests against police racism and violence during the summer of 2020. When police fly drones over a crowd of protestors, they chill free speech and political expression through fear of reprisal and retribution from police. Police could easily apply face surveillance technology to footage collected by a surveillance drone that passed over a crowd, creating a preliminary list of everyone who attended that day’s protest.

With the United States ending its multi-decade occupation of Afghanistan, military equipment once used in warfare is now inching closer to re-deployment onto U.S. streets. The scaling back of military involvement in Iraq coincided with a massive influx of weapons, armed vehicles, and other Department of Defense surplus being fed directly into police departments. We must prevent a repeat of history. 

In 2015, after public reaction against militarized police in Ferguson, Missouri, President Obama made a few reforms to the 1033 program. Specifically, he banned the transfer to domestic police of armored vehicles, weaponized aircraft and vehicles, weapons over a specific caliber, grenade launchers, and bayonets. But this did not go far enough to ensure that the 1033 program will not contribute to the mass surveillance of people on U.S. soil.

We’re calling on the public and members of Congress to support Ayanna Pressley’s amendment, the Moratorium on Transfer of Controlled Property to Enforcement Agencies, to H.R. 4350.

HTTPS Is Actually Everywhere

Tue, 09/21/2021 - 2:37pm

For more than 10 years, EFF’s HTTPS Everywhere browser extension has provided a much-needed service to users: encrypting their browser communications with websites and making sure they benefit from the protection of HTTPS wherever possible. Since we started offering HTTPS Everywhere, the battle to encrypt the web has made leaps and bounds: what was once a challenging technical undertaking is now a mainstream standard offered on most web pages. Now HTTPS is truly just about everywhere, thanks to the work of organizations like Let’s Encrypt. We’re proud of EFF’s own Certbot tool, which complements Let’s Encrypt by helping web administrators automate HTTPS for free.

The goal of HTTPS Everywhere was always to become redundant. That would mean we’d achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode.
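Conceptually, an HTTPS-only mode is simple: upgrade plain http:// requests to https:// and refuse to silently fall back to plaintext. A minimal Python sketch of that logic (an illustration, not any browser's actual implementation):

    from urllib.parse import urlsplit, urlunsplit
    import urllib.request

    def https_only_fetch(url: str, timeout: float = 10.0) -> bytes:
        """Upgrade an http:// URL to https:// and refuse to fall back."""
        parts = urlsplit(url)
        if parts.scheme == "http":
            url = urlunsplit(("https",) + parts[1:])  # rewrite the scheme
        elif parts.scheme != "https":
            raise ValueError(f"unsupported scheme: {parts.scheme}")
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as exc:  # urllib's URLError is an OSError subclass
            # An HTTPS-only mode surfaces an error page here rather than
            # silently retrying the request over plaintext HTTP.
            raise ConnectionError(f"no secure connection to {url}") from exc

    page = https_only_fetch("http://www.eff.org/")  # fetched over HTTPS

Real browsers layer user interstitials, per-site exceptions, and caching on top of this basic upgrade-or-fail behavior.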

With these simple settings available, EFF is preparing to deprecate the HTTPS Everywhere web extension as we look to new frontiers of secure protocols like SSL/TLS. After the end of this year, the extension will be in “maintenance mode” for 2022. We know many different kinds of users have this tool installed, and we want to give our partners and users the time they need to transition. We will continue to inform users that there are native HTTPS-only browser options before the extension is fully sunset.

Some browsers like Brave have for years used HTTPS redirects provided by HTTPS Everywhere’s Ruleset list. But even with innovative browsers raising the bar for user privacy and security, other browsers like Chrome still hold a considerable share of the browser market. The addition of a native setting to turn on HTTPS in these browsers impacts millions of people.

Follow the steps below to turn on these native HTTPS-only features in Firefox, Chrome, Edge, and Safari and celebrate with us that HTTPS is truly everywhere for users.

Firefox

The steps below apply to Firefox desktop. HTTPS-only for mobile is currently only available in Firefox Developer mode, which advanced users can enable in about:config. 

Preferences > Privacy & Security > Scroll to Bottom > Enable HTTPS-Only Mode

Chrome

HTTPS-only in Chrome is available for both desktop and mobile in Chrome 94 (released today!).

Settings > Privacy and security > Security > Scroll to bottom > Toggle “Always use secure connections”

Edge

This is still considered an “experimental feature” in Edge, but is available in Edge 92.

  1. Visit edge://flags/#edge-automatic-https and enable Automatic HTTPS
  2. Hit the “Restart” button that appears to restart Microsoft Edge.

  3. Visit edge://settings/privacy, scroll down, and turn on “Automatically switch to more secure connections with Automatic HTTPS”.

Safari

HTTPS is upgraded by default when possible in Safari 15, recently released September 20th, for macOS Big Sur and macOS Catalina devices. No setting changes are needed from the user.

Updates for Safari 15

Why EFF Flew a Plane Over Apple's Headquarters

Tue, 09/21/2021 - 12:23pm

For the last month, civil liberties and human rights organizations, researchers, and customers have demanded that Apple cancel its plan to install photo-scanning software onto devices. This software poses an enormous danger to privacy and security. Apple has heard the message, and announced that it would delay the system while consulting with various groups about its impact. But in order to trust Apple again, we need the company to commit to canceling this mass surveillance system.

The delay may well be a diversionary tactic. Every September, Apple holds one of its big product announcement events, where Apple executives detail the new devices and features coming out. Apple likely didn’t want concerns about the phone-scanning features to steal the spotlight. 

But we can’t let Apple’s disastrous phone-scanning idea fade into the background, only to be announced with minimal changes down the road. To make sure Apple is listening to our concerns, EFF turned to an old-school messaging system: aerial advertising.  

EFF banner flies over Apple Park, the corporate headquarters of Apple, located in Cupertino, California

During Apple’s event, a plane circled the company’s headquarters carrying an impossible-to-miss message: Apple, don’t scan our phones! The evening before Apple’s event, protestors also rallied nationwide in front of Apple stores. The company needs to hear us, and not just dismiss the serious problems with its scanning plan. A delay is not a cancellation, and the company has also been dismissive of some concerns, referring to them as “confusion” about the new features.

Privacy Is Not For Sale

Apple’s iMessage is one of the preeminent end-to-end encrypted chat clients. End-to-end encryption is what allows users to exchange messages without having them intercepted and read by repressive governments, corporations, and other bad actors. We don’t support encryption for its own sake: we fight for it because encryption is one of the most powerful tools individuals have for maintaining their digital privacy and security in an increasingly insecure world.

Now that Apple’s September event is over, Apple must reach out to groups that have criticized it and seek a wider range of suggestions on how to deal with difficult problems, like protecting children online. EFF, for its part, will be holding an event with various groups that work in this space to share research and concerns that Apple and other tech companies should find useful. While Apple tends to announce big features without warning, that practice is a dangerous one when it comes to making sweeping changes to technology as essential as secure messaging. 

The world, thankfully, has moved towards encrypted communications over the last two decades, not away from them, and that’s a good thing. If Apple wants to maintain its reputation as a pro-privacy company, it must continue to choose real end-to-end encryption over government demands to read users’ communications. Privacy matters now more than ever. It will continue to be a selling point and a distinguishing feature of some products and companies. For now, it’s an open question whether Apple will continue to be one of them.


How California’s Broadband Infrastructure Law Promotes Local Choice

Fri, 09/17/2021 - 1:46pm

The legislative session has ended and Governor Newsom is expected to sign S.B. 4 and A.B. 14 into law. These bills stand as the final pieces of the state’s new broadband infrastructure program. With an estimated $7.5 billion now assembled from federal and state funds, California has the resources it needs to largely close the digital divide in the coming years. The program allows local cities and counties to access infrastructure dollars to solve problems in their own communities, and it empowers local private entities, rather than depending on large private multinationals that aren’t willing to make the needed generational investment in infrastructure in most areas of the state.

EFF will explain below why local communities need to take charge, and how the new law will facilitate local choice in broadband. No other state has taken this approach and departed from the old model of handing all the subsidies to giant corporations. That’s why it’s important for Californians to understand the opportunity before them now.

Why it Has to be a Local Public, Private, or Public/Private Entity

If the bankruptcy of Frontier Communications has taught us anything, it is the following two lessons. First, large national private ISPs will forgo 21st-century fiber infrastructure in as many places as they can to pad their short-term profits. Government subsidies to build in different areas do not change this behavior. Second, the future of broadband access depends on the placement of fiber optic wires. Fiber is an investment in long-term value over short-term profits. EFF’s technical analysis has also laid out why fiber optics is future-proof infrastructure by showing that no other transmission medium for broadband even comes close, which makes its deployment essential for a long-term solution.

AT&T and cable companies, such as Comcast and Charter, are going to try to take advantage of this program by making offers that sound nice. But they will leverage existing legacy infrastructure that is rapidly approaching obsolescence. While they may be able to offer connectivity that’s “good enough for today” at a cheaper price than delivering fiber, there is no future in those older connections. It’s clear that higher upload speeds are becoming the norm, and at ever-increasing rates. As California’s tech sector begins to embrace distributed work, only communities with 21st-century fiber broadband access will be viable places for those workers to live. Fiber optics’ benefits are clear. The challenge of fiber optics is that its high upfront construction costs require very long-term financing models to deliver on its promise. Here is how the state’s new program makes that financing possible.

A Breakdown of the New Broadband Infrastructure Program

The infrastructure law has four mechanisms in place to help finance and plan new, local options: a grant program for the unserved; long-term financing designed around public, non-profit, and tribal entities; a state-run middle-mile program; and a state technical assistance program. Let’s get into the weeds on each of them.

Broadband Infrastructure Grant Account – The state is making more than $2 billion (and possibly up to $3.5 billion) available in grants, over the coming years, to finance (at 100% of the state’s cost) the construction of broadband networks in areas that need them. To qualify, based on federal and state mapping data, an area must lack all of the following:

  • Broadband service at speeds of at least 25 Mbps downstream and 3 Mbps upstream (this mostly means people reliant on DSL copper access or less)
  • Latency that is sufficiently low to allow real-time interactive applications
  • A provider that is currently receiving money from, and carrying out the objectives of, the Rural Digital Opportunity Fund

To focus the grant funds, priority is placed on areas that do not even have 10 Mbps downstream and 1 Mbps upstream—these are mostly areas that only have satellite internet. This program is focused on having the state pay the construction costs for people who have no internet access at all, as opposed to those with slow, useless, or inadequate access.
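Putting these criteria together, eligibility works roughly like the following sketch (a hypothetical illustration of the rules described above, not the state’s actual methodology or data):

    # Hypothetical eligibility check mirroring the grant criteria above;
    # the real program relies on federal and state mapping data.
    def grant_status(down_mbps: float, up_mbps: float,
                     low_latency: bool, rdof_funded: bool) -> str:
        """Classify an area under the Broadband Infrastructure Grant Account."""
        if rdof_funded:
            return "ineligible: covered by the Rural Digital Opportunity Fund"
        if down_mbps >= 25 and up_mbps >= 3 and low_latency:
            return "ineligible: already served at 25/3 Mbps with low latency"
        if down_mbps < 10 or up_mbps < 1:
            return "eligible, priority: below 10/1 Mbps (often satellite-only)"
        return "eligible"

    print(grant_status(6, 0.8, low_latency=False, rdof_funded=False))
    # -> eligible, priority: below 10/1 Mbps (often satellite-only)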

Loan Loss Reserve Fund – The State Treasury will establish this fund to enable long-term financing by cities, counties, community service districts, public utilities, municipal utility districts, joint powers authorities, local educational agencies, tribal governments, electrical cooperatives, and non-profits. It will be designed to help these entities obtain very low interest rates with low debt obligations. Think of this program like our mortgage-lending system: 30-year fixed mortgages enable many people to purchase homes, even if they could never gather the cash necessary to make the purchase all at once. Fiber is well-suited for this type of financing vehicle; it will be able to deliver speeds useful for multiple decades and carries lower maintenance costs than other broadband options.

State Open-Access Middle-Mile – The state of California, overseen by the Department of Technology, will deploy fiber infrastructure on an open-access basis—meaning on non-discriminatory terms and accessible by ISPs—with an emphasis on developing rural exchange points. The goal behind this infrastructure is to deliver multi-gigabit capacity to areas building broadband access, and also to bring the cost of obtaining backhaul capacity to the global internet down to affordable rates. To use an analogy, the state is building the highways to connect communities to the airport—and the world. The option to connect to these internet highways will be made available to all comers. So, for example, small local businesses or local townships can connect a fiber line to these facilities to build a local broadband network.

Technical Assistance by the State – Fiber infrastructure is a game-changer on the ground. Echoing the way the federal government advised local governments and communities on the deployment of a similarly revolutionary technology—electricity—the new broadband infrastructure law deputizes the California Public Utilities Commission to provide technical assistance for these plans. The CPUC will provide local governments and providers with assistance for grant applications to other federal programs and participate in the development of infrastructure plans with county governments.

How all These Programs Work Together and End the Reliance on AT&T and Comcast

Any small business, local government, or even a school district will soon have these tools to solve its own problems. As they look to use the programs listed above, it’s important for any local player seeking to build their own broadband solution to understand that it will take a multi-year effort to do it right. The loan-loss reserve program will focus on multi-decade repayment plans. This gives eligible entities access to billions of loan dollars for future-proof fiber infrastructure. The grants are meant to eliminate the construction burden of delivering access to the most difficult-to-serve populations in pockets throughout the state. But any real effort to build a network will have to include their underserved neighbors. For those communities, the state will attempt to deliver the best-priced access to bandwidth capacity through its middle-mile program. Doing so will help keep prices as low as is feasible to enable the delivery of cheap, fast internet in areas that otherwise would never have seen access.

And for any of this to happen, every community needs someone at the local level who is well-versed in how to use the state’s program. That’s where the technical assistance by the state comes in, to help locals navigate the hardest parts of developing a local broadband solution.

Still, no state program can make folks on the ground do the work. That’s why we need people engaged in their communities. If you are tired of relying on big providers that prioritize Wall Street investors over your local community’s needs and are motivated to figure out a solution at home, this is your moment.  This new law not only had you in mind, it’s counting on you to step up to the plate.

No, Tech Monopolies Don’t Serve National Security

Thu, 09/16/2021 - 5:47pm

In what appears to be a “throw spaghetti on the wall approach” to stopping antitrust reform targeting Big Tech, a few Members of Congress and a range of former military and intelligence officials wrote a letter asserting that these companies need to be protected for national security. It’s a spurious argument that seeks to leverage fear of China to prevent changes desperately needed for consumer choice and innovation.

The argument they make is that gigantic tech companies are the only ones who can innovate and compete with China. But this completely misses the point on innovation. When companies have monopolies, they have no reason to innovate since they have captured the market. There is no need to compete to have the best product when you are the only product. Innovation depends on the best ideas from everyone being put forth to the public.   

Now, we don’t know whether these folks actually believe the argument or simply expect the rest of us to believe it because they say it. Either way, this letter is really only about delaying legislative antitrust action by raising not just fictional concerns, but completely bogus takes on how innovation happens on the internet.

This Has Been Tried Before, and It Didn’t Work Then

The irony about the national security argument is that it takes a page straight out of the AT&T monopoly playbook and history. Forty years ago, AT&T was the largest corporation in the world and was facing antitrust action both in Congress and the courts. In a Hail Mary effort to get the Department of Justice to abandon its lawsuit, AT&T lobbyists went to the Department of Defense and convinced them that a monopoly communications network was essential for national security.

Source: New York Times archive: https://www.nytimes.com/1981/04/09/business/weinberger-defends-at-t.html

The plan was to convince then-President Ronald Reagan that he should directly order the Department of Justice to end the case, despite nearly six years of court hearings detailing how AT&T leveraged its monopoly power. In fact, a year before the Department of Defense weighed in against further antitrust action, a federal jury had already awarded MCI $1.8 billion in antitrust damages against AT&T.

The situation with Big Tech is similar: like the AT&T monopoly of the past, it is facing antitrust actions on various fronts, and like AT&T, it is attempting to change the narrative and come up with any excuse to avoid the right outcome, which is opening up the tech industry to competition.

Innovation Does Not Come From Big Tech; It Gets Bought by Them

The signers of the letter adopt the view that massive consolidation of the industry is necessary for innovation. But the exact opposite is true. Due to the size of these companies and their targeted acquisitions, innovation is either unnecessary or simply bought up. Startups with new ideas aren’t being launched to compete with Google, Facebook, Apple, and Amazon’s services or products, because the lion’s share of investor money has gone towards creating products that Big Tech will pay lots of money to acquire.

Congressional investigations identified this “kill zone” as the area of tech products and services that orbit the dominant platforms' products, such as search in the case of Google or social media in the case of Facebook. In fact, one would be hard-pressed to find a new organic product from Big Tech that didn’t find its origins in buying another company.

After a lengthy investigation by the House Judiciary Committee and Senate hearings into the merger practices of these companies with a wide array of experts and industry players, the congressional record is full of evidence demonstrating that the size of Big Tech is, in fact, suppressing the competition that sparks innovation. Think about how the tech industry used to be a place where previous giants were replaced regularly with the next best thing that initially started as a garage startup. EFF calls this the life cycle of competition, and it has been fading from the tech industry as consolidation has set in. This is why EFF strongly supports bills such as the ACCESS Act and the Open App Markets Act: they would open up dominant platforms to new entrants and help empower smaller players to innovate without interference again.

It comes as no surprise that 79% of Americans view Big Tech mergers as anti-competitive because the public isn’t fooled. These companies aren’t huge because it gives them some sort of cutting edge; they are huge because it conveys dominance, control, and monopoly profits. The public understands this, but, clearly, some Members of Congress are not getting it.

What’s Up with WhatsApp Encrypted Backups

Thu, 09/16/2021 - 2:11pm

WhatsApp is rolling out an option for users to encrypt their message backups, and that is a big win for user privacy and security. The new feature is expected to be available for both iOS and Android “in the coming weeks.” EFF has long pointed to unencrypted backups as a huge weakness for WhatsApp and for any messenger that claims to offer end-to-end encryption, and we applaud this improvement. Next, encryption for backups should become the default for all users, not just an option.

Currently, users can choose to periodically back up their WhatsApp message history on iCloud (for iOS phones) or Google Drive (for Android phones), or to never back them up at all. Backing up your messages means that you can still access them if, for example, your phone is lost or destroyed. 

WhatsApp does not have access to these backups, but backup service providers Apple and Google sure do. Unencrypted backups are vulnerable to government requests, third-party hacking, and disclosure by Apple or Google employees. That’s why EFF has consistently recommended that users not back up their messages to the cloud, and further that you encourage your friends and contacts to skip it too. Backing up secure messenger conversations to the cloud unencrypted (or encrypted in a way that allows the company running the backup to access message contents) means exposing the plaintext to third parties, and introduces a significant hole in the protection the messenger can offer.

When encrypted WhatsApp backups arrive, that will change. With fully encrypted backups, Apple and Google will no longer be able to access backed-up WhatsApp content. Instead, WhatsApp backups will be encrypted with a very long (64-digit) encryption key generated on the user’s device. Users in need of a high level of security can save this key directly in their preferred password manager. All others can rely on WhatsApp’s recovery system, which will store the encryption key in a way that WhatsApp cannot access, protected by a password of the user’s choosing.
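The overall shape of that design can be sketched in a few lines of Python. This is an illustration of the general technique (client-side encryption under a device-generated key), not WhatsApp’s actual implementation, and it assumes the third-party cryptography package:

    import base64
    import secrets

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def generate_backup_key() -> str:
        """Generate a 64-digit decimal key on the user's device."""
        return "".join(secrets.choice("0123456789") for _ in range(64))

    def _fernet_for(backup_key: str, salt: bytes) -> Fernet:
        # Stretch the decimal key into a 32-byte symmetric key; a real
        # design stores a random per-backup salt alongside the ciphertext.
        kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
        return Fernet(base64.urlsafe_b64encode(kdf.derive(backup_key.encode())))

    def encrypt_backup(history: bytes, key: str, salt: bytes) -> bytes:
        """Encrypt message history before it is uploaded to iCloud or Drive."""
        return _fernet_for(key, salt).encrypt(history)

    def decrypt_backup(blob: bytes, key: str, salt: bytes) -> bytes:
        return _fernet_for(key, salt).decrypt(blob)

    salt = secrets.token_bytes(16)
    key = generate_backup_key()  # saved by the user, or held in a
                                 # password-protected recovery system
    blob = encrypt_backup(b"chat history", key, salt)
    assert decrypt_backup(blob, key, salt) == b"chat history"

Because the key never leaves the user’s control in plaintext, neither the backup host nor the messaging provider can read the backup contents.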

This privacy win from Facebook-owned WhatsApp is striking in its contrast to Apple, which has been under fire recently for its plans for on-device scanning of photos that minors send on Messages, as well as of every photo that any Apple user uploads to iCloud. While Apple has paused to consider more feedback on its plans, there’s still no sign that those plans will include fixing one of its longstanding privacy pitfalls: the lack of effective encryption across iCloud backups. WhatsApp is raising the bar, and Apple and others should follow suit.

The Catalog of Carceral Surveillance: Patents Aren't Products (Yet)

Wed, 09/15/2021 - 8:22pm

In EFF’s Catalog of Carceral Surveillance, we explore patents filed by or awarded to prison communication technology companies Securus and Global Tel*Link in the past five years. The dystopian technologies the patents describe are exploitative and dehumanizing. And if the companies transformed their patents into real products, the technology would pose extreme threats to incarcerated people and their loved ones.

But importantly, patents often precede the actual development or deployment of a technology. Though applications may demonstrate an interest in advancing a particular technology, these intentions don’t always progress beyond the proposal, and many inventions that are described in patent applications don't wind up being built. What we can glean from a patent application is that the company is thinking about the technology and that it might be coming down the pipeline.

In 2019, Platinum Equity, the firm that has owned Securus Technologies since 2017, restructured the company, placing it under the parent company Aventiv. Aventiv claimed it would lead Securus through a transformation process that includes greater respect for human rights. According to Aventiv, many of the patents filed prior to 2019 will remain just ideas, never to be built. Following the publication of our initial Catalog of Carceral Surveillance posts, Aventiv responded with the following statement: “We at Aventiv are committed to protecting the civil liberties of all those who use our products. As a technology provider, we continuously seek to improve and to create new solutions to keep our communities safe.”

Aventiv’s statement goes on to respond to EFF’s post describing a patent filed by Securus that envisions a system for monitoring online purchases made by incarcerated people and their families. The company wrote: “The patent is not currently in development as it was an idea versus a product we will pursue,” and added that to “ensure there is no additional misunderstanding, we will be abandoning this patent and reviewing all open patents to certify that they align with our transformation efforts.”

Aventiv’s statement disclaiming the patent, however, references a different Securus patent than the one described in EFF’s post. We have followed up with Aventiv for clarification and will update this post when we hear back from the company.

Aventiv stated that the patent “was filed in June 2019, prior to our company publicly announcing a multi-year transformation effort,” and provided a link with more details about their commitments. The statement concluded: “Our organization is focused on better serving justice-involved people by making our products more accessible and affordable, investing in free educational and reentry programming, and taking more opportunities--just like this one--to listen to consumers.” 

GTL declined to comment for this series.

GTL and Securus were once among the greatest opponents of federal regulation of prison phone calls. They’ve since claimed to have adjusted their positions. Both announced over the summer that they are supportive of reforms to create more accessible prison communications, and each began to offer inmates free phone calls and free tablets.

To better understand the potential (but not certain) futures of these companies, EFF created the  Catalog of Carceral Surveillance to spotlight the patents that could pave the way toward chilling developments in surveillance.

In the coming months, EFF plans to follow up with Aventiv to hold them to their word and will continue to remind prison technology companies of their responsibilities to the families they serve.

View the Catalog of Carceral Surveillance below. New posts will be added daily.

 

The Federal Government Just Can’t Get Enough of Your Face

Wed, 09/15/2021 - 6:52pm

There are more federal facial recognition technology (FRT) systems than there are federal agencies using them, according to the U.S. Government Accountability Office. Its latest report on current and planned use of FRT by federal agencies reveals that, among the 24 agencies surveyed, there are 27 federal FRT systems. Just three agencies—the U.S. Departments of Homeland Security, Defense, and Justice—use 18 of these systems for, as they put it, domestic law enforcement and national security purposes.

But 27 current federal systems are not enough to satisfy these agencies. The DOJ, DHS, and Department of the Interior also accessed FRT systems “owned by 29 states and seven localities for law enforcement purposes.” Federal agencies further accessed eight commercial FRT systems, including four agencies that accessed the infamous Clearview AI. That’s all just current use. Across federal agencies, there are plans in the next two years to develop or purchase 13 more FRT systems, access two more local systems, and enter two more contracts with Clearview AI.

As EFF has pointed out again and again, government use of FRT is anathema to our fundamental freedoms. Law enforcement use of FRT disproportionately impacts people of color, turns us all into perpetual suspects, and increases the likelihood of false arrest. Law enforcement agencies have also used FRT to spy on protestors.

Clearview AI, a commercial facial surveillance entity used by many federal agencies, extracts the faceprints of billions of unsuspecting people, without their consent, and uses them to provide information to law enforcement and federal agencies. The company is currently being sued in both Illinois state court and federal court for violating the Illinois Biometric Information Privacy Act (BIPA). Illinois’ BIPA requires opt-in consent to obtain someone’s faceprint. Recently, an Illinois state judge allowed the state case to proceed, opening a path for the American Civil Liberties Union (ACLU) to fight against Clearview AI’s business model, which trades in your privacy for their profit. You can read the judge’s opinion here, and find EFF’s two amicus briefs against Clearview AI here and here.

FRT in the hands of the government erodes the rights of the people. Even so, the federal government’s appetite for your face—through one of its 27 systems or commercial systems such as Clearview AI—is insatiable. Regulation is not sufficient here; the only effective solution to this pervasive problem is a ban on federal use of FRT. Cities across the country, from San Francisco to Minneapolis to Boston, have already passed strong local ordinances to do so.

Now we must go to Congress. EFF supports Senator Markey’s Facial Recognition and Biometrics Technology Moratorium Act, which would ban the federal government’s use of FRT and some other biometric technologies. Join our campaign: contact your members of Congress and tell them to support this ban. The government can’t get enough of your face. Tell them they can’t have it.

Take Action

Tell Congress to Ban Federal Use of Face Recognition

You can find the GAO’s report here.

Texas’ Social Media Law is Not the Solution to Censorship

Wed, 09/15/2021 - 2:13pm

The big-name social media companies have all done a rather atrocious job of moderating user speech on their platforms. However, much like Florida's similarly unconstitutional attempt to address the issue (S.B. 7072), Texas' recently enacted H.B. 20 would make the matter worse for Texans and everyone else.

Signed into law by Governor Abbott last week, the Texas law prohibits platforms with more than 50 million monthly users nationwide from moderating user posts based on viewpoint or geographic location. However, as we stated in our friend-of-the-court brief in support of NetChoice and the Computer & Communications Industry Association’s lawsuit challenging Florida’s law (NetChoice v. Moody), “Every court that has considered the issue, dating back to at least 2007, has rightfully found that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate that speech.”

Inconsistent and opaque content moderation by online media services is a legitimate problem. It continues to result in the censorship of a range of important speech, often disproportionately impacting people who aren’t elected officials. That's why EFF joined with a cohort of allies in 2018 to draft the Santa Clara Principles on Transparency and Accountability in Content Moderation, offering one model for how platforms can begin voluntarily implementing content moderation practices grounded in a human rights framework. Under the proposed principles, platforms would:

  1. Publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.
  2. Provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.
  3. Provide a meaningful opportunity for timely appeal of any content removal or account suspension.

H.B. 20 does attempt to mandate some of the transparency measures called for in the Santa Clara Principles. Although these legal mandates might be appropriate as part of a carefully crafted legislative scheme, H.B. 20 is not the result of a reasonable policy debate. Rather it is a retaliatory law aimed at violating the First Amendment rights of online services in a way that will ultimately harm all internet users.

We fully expect that once H.B. 20 is challenged, courts will draw from the wealth of legal precedent and find the law unconstitutional. Perhaps recognizing that H.B. 20 is imperiled for the same reasons as Florida’s law, the Lone Star State this week filed a friend-of-the-court brief in the appeal of a federal court’s ruling that Florida’s law is unconstitutional.

Despite Texas and Florida’s laws being unconstitutional, the concern regarding social media platforms’ control over our public discourse remains a critical policy issue. It is vitally important that platforms take action to provide transparency, accountability, and meaningful due process to all impacted speakers and ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of human rights.

Lessons From History: Afghanistan and the Dangerous Afterlives of Identifying Data

Wed, 09/15/2021 - 1:34pm

As the United States pulled its troops out of Afghanistan after a 20-year occupation, byproducts of the prolonged deployment took on new meaning and represented a new chapter of danger for the Afghan people. For two decades, the United States spearheaded the collection of information on the people of Afghanistan, both for commonplace bureaucratic reasons like payroll and employment data—and in massive databases of biometric material accessible through devices called HIIDE. 

HIIDE, the Handheld Interagency Identity Detection Equipment, are devices used to collect biometric data like fingerprints and iris scans and store that information in large, accessible databases. Ostensibly built to track terrorists and potential terrorists, the program was also used to verify the identities of contractors and Afghans working with U.S. forces. The military reportedly had an early goal of getting 80% of the population of Afghanistan into the program. With the Taliban retaking control of the nation, reporting about the HIIDE program prompted fears that the equipment could be seized and used to identify and target vulnerable people.

Some sources, including those who spoke to the MIT Technology Review, claimed that the HIIDE devices offered only limited utility to any future regimes hoping to use them and that the data they access is stored remotely and therefore less of a concern. They did raise alarms, however, on the wide-reaching and detailed Afghan Personnel and Pay System (APPS), used to pay contractors and employees working for the Afghan Ministry of Interior and Ministry of Defense. This database contains detailed information on every member of the Afghan National Army and Afghan National Police—prompting renewed fears that this information could be used to find people who assisted the U.S. military or Afghan state-building, policing, and counter-insurgency measures. 

There has always been concern and protest over how the U.S military used this information, but now that concern takes on new dimensions. This is, unfortunately, a side effect of the collection and retention of data on individuals. No matter how secure you think the data is—and no matter how much you trust the current government to use the information responsibly and benevolently—there is always a risk that either priorities and laws will change, or an entirely new regime will take over and inherit that data. 

One of the most infamous examples was the massive trove of information collected and housed by Prussian and other German police and city governments in the early twentieth century. U.S. observers given tours of the Berlin police filing system were shocked to find dozens of rooms filled with files. In total, over 12 million records were kept containing personal and identifying information for people who had been born in, lived in, or traveled through Berlin since the system began. Although the Prussian police were known for political policing and brutal tactics, during the Weimar period between 1918 and 1933, police were lenient and even begrudgingly accepting of LGBTQ+ people at a time when most other countries severely criminalized people with same-sex desires and gender-nonconforming people.

All of this changed when the Nazis rose to power and seized control of not just the government and economy of a major industrialized nation, but also millions of police files containing detailed information about people, who they were, and where to find them.

The history of the world is filled with stories of information—collected responsibly or not, with intended uses that were benevolent or not—having long afterlives. The information governments collect today could fall into more malevolent hands tomorrow. You don't even need to go abroad in search of a government finding new nefarious uses for information collected on individuals for entirely different and benevolent purposes. 

With the afterlives of biometric surveillance and data retention now re-threatening people in Afghanistan, we are now regrettably able to add this chapter to this history of the dangers of mass data collection. Better protections on information and its uses can only go so far. In many instances, the only way to ensure that people are not made vulnerable by the misuse of private information is to limit, wherever possible, how much of it is collected in the first place. 

Surveillance Self-Defense Guides Now Available in Burmese

Wed, 09/15/2021 - 12:58pm

As part of our goal to expand the impact of our digital security guide, Surveillance Self-Defense (SSD), we recently translated the majority of its contents into Burmese. This repository of resources on circumventing surveillance across a variety of different platforms, devices, and threat models is now available in English, and in whole or in part in 11 other languages: Amharic, Arabic, Spanish, French, Russian, Turkish, Vietnamese, Brazilian Portuguese, Burmese, Thai, and Urdu.

The last year has seen significant numbers of protests by the people of Myanmar against human and digital rights violations by the military, prompted by the recent military coup in the country. Fighting back against human rights violations shouldn’t require you to have a computer science degree, and so our SSD guides help explain, in clear language, how to protect yourself from digital surveillance and unpack key concepts that make doing so easier. These guides offer overviews and recommendations for digital security protection during protests, network circumvention, using VPNs and Tor, using Signal, social media safety, and so on. 

We hope these resources will help those in Myanmar access reliable, up-to-date digital security guidance during a high-stress time, localized to the unique considerations in Myanmar. In addition to this project, we also plan to translate our new mobile phone privacy guide into multiple languages, including Turkish, Russian, and Spanish. We’d like to thank the National Democratic Institute for providing funds for these translations, and Localization Lab for their efforts in completing them.

EFF and Allies Urge Council of Europe to Add Strong Human Rights Safeguards Before Final Adoption of Flawed Cross Border Surveillance Treaty

Tue, 09/14/2021 - 3:11pm

EFF, European Digital Rights (EDRi), the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), and other civil society organizations have worked closely on recommendations to strengthen human rights protections in a flawed international cross-border police surveillance treaty drafted by the Council of Europe (CoE). At a virtual hearing today before the CoE Parliamentary Assembly (PACE) Committee on Legal Affairs and Human Rights, EFF Policy Director for Global Privacy Katitza Rodriguez presented a summary of the concerns we and our partners have about the treaty's weak privacy and human rights safeguards.

There is much at stake, as the draft Second Additional Protocol to the Budapest Convention on Cybercrime will reshape cross-border law enforcement data-gathering on a global scale. The Protocol's objective is to facilitate cross-border investigations between countries with varying legal systems and standards for accessing people's personal information. In her testimony, the text of which is published in full below, Rodriguez highlighted key shortcomings in the Protocol and recommendations for fixing them.

EFF Testimony and Statement to Committee on Legal Affairs and Human Rights, Parliamentary Assembly, Council of Europe

At the highest level, the current Protocol should establish clear and enforceable baseline safeguards for cross-border evidence gathering, but fails to do so. Though its new police powers are mandatory, the corresponding privacy protections are frequently optional, and the Protocol repeatedly stops short of requiring harmonised safeguards in an active attempt to entice states with weaker human rights records to sign on. The result is a net dilution of privacy and human rights on a global scale. But the right to privacy is a universal right. International law enforcement powers should come with detailed legal safeguards for privacy and data protection. When it comes to data protection, Convention 108+ should be the global reference. Through its recommendations to the Committee of Ministers, PACE has an opportunity to establish a commonly acceptable legal framework for international law enforcement that places privacy and human rights at its core.

Protecting Online Anonymity


Substantively, we have concerns regarding Article 7 of the Protocol, which permits direct access by law enforcement in one country to subscriber identity information held by a company in another country. In our opinion, Article 7 fails to provide, or excludes, critical safeguards contained in many national laws. For example, Article 7 does not include any explicit restrictions on targeting activities which implicate fundamental rights, such as freedom of expression or association, and prevents Parties from requiring foreign police to demonstrate that the subscriber data they seek will advance a criminal investigation.[1]

We are particularly concerned that Article 7's explanatory text fails to acknowledge that subscriber data can be highly intrusive. Your IP address can tell authorities what websites you visit and what accounts you use. Police can also request the name and address associated with your IP address in order to link your identity to your online activity, which can reveal deeply intimate aspects of your daily habits. Article 7's identification power undermines online anonymity in a context that embraces legal systems with widely divergent approaches to criminal justice, including some governments that are autocratic in nature. The resulting threat to journalists, human rights defenders, politicians, political dissidents, whistleblowers, and others is indefensible.

This is why we've urged PACE to remove Article 7 entirely from the text of the Protocol. States would still be able to access subscriber data in cross-border contexts, but would instead rely on Article 8, which includes more safeguards for human rights. If Article 7 is retained, we've urged additional minimum safeguards, such as:

  • Ensuring that the explanatory text properly acknowledges that access to subscriber data can be highly intrusive.
  • Providing Parties with the option, at least, of requiring prior judicial authorization for requests made under Article 7.
  • Requiring Parties to establish a clear evidentiary basis for Article 7 requests.
  • Ensuring that Article 7 requests provide enough factual background to assess compliance with human rights standards and protected privileges.
  • Requiring notification or consultation with a responding state for all Article 7 demands.
  • Requiring refusal of Article 7 requests when necessary to address a lack of dual criminality or to protect legal privileges.
  • Providing the ability to reserve Article 7 in a more nuanced and timely manner.
  • Ensuring that Article 7 demands include details regarding legal remedies and obligations for service provider refusal.
Raising the Bar for Data Protection


When it comes to Article 14's data protection safeguards, we have asked that the Protocol be amended so that signatories may refuse to apply its most intrusive powers (Articles 6, 7, and 12) when dealing with any other signatory that has not also ratified Convention 108+. We also hope the Parliamentary Assembly will support the Committee of Convention 108's mission and take note that the Committee of Ministers supports making Convention 108 the global reference for data protection, including in the implementation of this Protocol.

Article 14 itself falls short of modern data protection requirements and, in some contexts, will actively undermine emerging international standards. Two examples:

  • Article 14 fails to require independent oversight of law enforcement investigative activities; for example, many oversight functions can be exercised by government officials housed within the same agencies directing the investigations.
  • Article 14 limits the situations in which biometric data can be considered 'sensitive' and in need of additional protection, despite a growing international consensus that biometric data is categorically sensitive.

But even with the weak standards contained in Article 14, signatories are explicitly permitted to bypass these safeguards through various mechanisms, none of which provide any assurance that meaningful privacy protections will be in place. For example, any two or more signatories can enter into an international data protection agreement that will supersede the safeguards outlined in Article 14. The agreement does not need to provide a comparable or adequate level of protection to the default rules.

Signatories can even adopt less protective standards in secret agreements or arrangements and continue to rely on the Protocol's law enforcement powers. We have therefore recommended that the Protocol be amended to ensure a minimum threshold of privacy protection in Article 14, one which may be supplemented with more rigorous protections but cannot be replaced by weaker standards. This would also help avoid the fragmentation of privacy regimes.

Make Joint Investigative Team Limitations Explicit


Under Article 12, signatories can form joint investigative teams that can bypass core existing frameworks such as the MLAT regime when using highly intrusive cross-border investigative techniques or when transferring personal information between team members.

We have asked that the Protocol be amended so that some of its core intended limitations are made explicit. This is particularly important given that many teams may ultimately operate with a high level of informality, driven by police officers without input or supervision from the other government bodies typically involved in overseeing cross-border investigations. Specifically, we have asked that the Protocol (or, alternatively, the explanatory text) clearly and unequivocally state that participants in a joint investigative team must not take investigative measures within the territory of another participant in the team, and that no participant may violate the laws of another participant of that team.

We also ask that the Protocol be amended so that Parties are obligated to involve their central authorities (and, preferably, the entity responsible for data protection oversight) in the formation and general operation of an investigative team, and that agreements governing investigative teams be made public except to the degree that doing so would threaten investigative secrecy or is necessary to achieve other important public interest objectives.

Protestors Nationwide Rally to Tell Apple: "Don't Break Your Promise!"

Tue, 09/14/2021 - 2:06pm

Yesterday in San Francisco, Chicago, Boston, New York, and other cities across the U.S., activists rallied in front of Apple stores demanding that the company fully cancel its plan to introduce surveillance software into its devices. In addition to the protests at stores organized by EFF and Fight for the Future, EFF also took the message directly to Apple's headquarters by flying a banner above the campus during the company's annual iPhone launch event today.

The last time EFF held a protest at an Apple store, in 2016, it was to support the company’s strong stance in protecting encryption. That year, Apple challenged the FBI’s request to install a backdoor into its operating system. This year, in early August, Apple stunned its many supporters by announcing a set of upcoming features, intended to help protect children, that would create an infrastructure that could easily be redirected to greater surveillance and censorship. These features would pose enormous danger to iPhone users’ privacy and security, offering authoritarian governments a new mass surveillance system to spy on citizens. 

After public pushback in August, Apple announced earlier this month that its scanning program would be delayed. Protestors this week rallied to urge Apple to abandon its program and commit to protecting user privacy and security. Speakers included EFF Activist Joe Mullin and Executive Director Cindy Cohn.

Mullin told the crowd at the San Francisco protest how essential it was that Apple continue its commitment to protecting users: "From San Francisco to Dubai, Apple told the whole world that iPhone is all about privacy. But faced with government pressure, they caved. Now 60,000 users have signed a petition telling Apple they refuse to be betrayed."

Holding signs that read “Don’t Scan My Phone” and “No Spy Phone,” protestors chanted “No 1984, no, Apple—no backdoor!" and “2-4-6-8, stand with users, not the state; 3-5-7-9, privacy is not a crime!”

“We can't be silent while Tim Cook and other Apple leaders congratulate themselves on their new products after they've signed on to a mass surveillance project,” said Mullin.  “No scanners on our phones!”


Apple has said that it will take additional time over the coming months to collect input about its child protection features. Later this month, EFF hopes to begin that conversation with a public event that will bring together representatives from diverse constituencies who rely on encrypted platforms. Discussion will focus on the ramifications of these decisions, what we would like to see changed about the products, and protective principles for initiatives that aim to police private digital spaces. We hope Apple and other tech companies will join us as well. You can find out more about this upcoming event soon by visiting our events page.


Geofence Warrants Threaten Civil Liberties and Free Speech Rights in Kenosha and Nationwide

Fri, 09/10/2021 - 12:50pm

In the days following the police shooting of Jacob Blake on August 23, 2020, hundreds of protestors marched in the streets of Kenosha, Wisconsin. Federal law enforcement, it turns out, collected location data on many of those protesters. The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) used a series of “geofence warrants” to force Google to hand over data on people who were in the vicinity of—but potentially as far as a football field away from—property damage incidents. These warrants, which police are increasingly using across the country, threaten the right to protest and violate the Fourth Amendment.

Geofence warrants require companies to provide information on every electronic device in a geographical area during a given time period. ATF used at least 12 geofence warrants issued to Google—the only company known to provide data in response to these warrants—to collect people’s location data during the Kenosha protests. The center of each geographic area was a suspected arson incident. However, the warrants reach broadly and require location data for long periods of time. One of the warrants encompassed a third of a major public park for a two-hour window during the protests. The ATF effectively threw a surveillance dragnet over many protesters, using “general warrants” that violate the Fourth Amendment and threaten the First Amendment right to protest free from government spying.
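To make the dragnet mechanics concrete, here is a minimal sketch of the selection logic a geofence demand implies: given a set of location records, return every device that reported a position inside a circular area during a time window. The data model, names, and numbers below are hypothetical illustrations, not Google's actual implementation.

    # Minimal sketch of geofence selection logic. Hypothetical data model;
    # not Google's actual implementation.
    from dataclasses import dataclass
    from datetime import datetime
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class LocationRecord:
        device_id: str
        lat: float
        lon: float
        timestamp: datetime

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def devices_in_geofence(records, center_lat, center_lon, radius_m, start, end):
        """Every device with at least one ping inside the fence during the window."""
        return {
            r.device_id
            for r in records
            if start <= r.timestamp <= end
            and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
        }

Note that nothing in such a query expresses suspicion of any particular person: everyone whose phone pinged inside the circle during the window is swept in, which is precisely the "general warrant" problem.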

Police can use geofence warrants to collect information on the identities and movements of innocent people at protests. This can include device information, account information, email addresses, phone numbers, and information on Google services used by the device owner, and the data can come from both Android and Apple devices. Someone who goes to a protest and happens to be nearby when a crime occurs can get caught up in a police investigation. Police in Minneapolis, for example, used a geofence warrant during the protests over the killing of George Floyd. The public only learned about it because the dragnet, centered on a property damage incident, caught an innocent bystander filming the protests, and Google notified him (which it doesn't always do). Police can also use this data to create dossiers on activists and organizers.

In this way, geofence warrants also eliminate the anonymity that people may rely on in order to protest or otherwise freely associate in public spaces. Law enforcement's ability to catalogue the locations of peaceful protestors will chill the exercise of their First Amendment rights. This is especially problematic when, as with the August 2020 protests in Kenosha, people are taking to the streets to hold the police themselves accountable.

Google recently published data showing that police have issued at least 20,000 geofence warrants over just the last three years, and the volume is increasing exponentially year over year. For example, California issued 209 geofence warrant requests in 2018; in 2020, it issued nearly 2,000. Each warrant may result in the disclosure of information on tens or hundreds of devices. The vast majority of these warrants are issued by state and local police, which makes them difficult to track.

Google must start standing up for its users against this massive overreach. In addition to serious harms to privacy and free expression, geofence warrants operate without transparency. After years of pressure, Google has finally provided some limited data. But the vast majority of geofence warrants remain sealed, with no information from Google or law enforcement on their targets, geographic area and length of time, and their purported justifications. As a result, most people have no way of knowing whether they are caught up in one of these dragnets. Such uncertainty further chills the constitutional rights to freely protest and associate.

The Other 20-Year Anniversary: Freedom and Surveillance Post-9/11

Fri, 09/10/2021 - 12:47pm

The twentieth anniversary of the attacks of September 11, 2001 is a good time to reflect on the world we've built since then. Those attacks caused incalculable heartbreak, anger, and fear. But by now it is clear that far too much of what was put into place in their immediate aftermath, especially in the areas of surveillance and government secrecy, is deeply problematic for our democracy, privacy, and fairness. It's time to set things right.

The public centerpiece of the U.S. effort to increase government surveillance in response to the attacks was the passage of the Patriot Act, which will have its own 20th anniversary on October 26. But much more happened, and far too much of it was not revealed until years later. Our government developed a huge and expensive set of secret spying operations that eviscerated the line between domestic and foreign surveillance and swept up millions of non-suspect Americans' communications and records. With some small but critical exceptions, Congress almost completely abdicated its responsibility to check the power of the Executive. Later, the secret FISA court shifted from merely approving specific warrants to acting as a quasi-agency charged with reviewing entire, huge secret programs, without either the knowledge or the authority to provide meaningful oversight. All of these are a critical part of the legacy of September 11.


Of course, we did not invent national security or domestic surveillance overreach 20 years ago. Since the creation of the Federal Bureau of Investigation in the early twentieth century, and of the National Security Agency in 1952, the federal government has repeatedly been reprimanded and reformed for overreaching and violating constitutionally protected rights. Even before 9/11, the NSA's FAIRVIEW program forged agreements between the agency and telecom companies in order to monitor phone calls going in and out of the country. But 9/11 gave the NSA the inciting incident it needed to take what it has long wanted: a shift to a collect-it-all strategy inside the U.S. to match, in many ways, the one it had already developed outside the U.S., along with the secret governmental support to try to make it happen. As for those of us in the general public, we were told in the abstract that giving up our privacy would make us more secure, even as we were kept in the dark about what that actually meant, especially for Muslims and other Americans unfairly targeted.

The surveillance infrastructure forged or augmented in the war-on-terror era is largely still with us. In the case of the United States, in addition to the computer servers, giant analysis buildings, weak or wrong legal justifications, and the secret price tag, one of the lasting and more harmful effects has been on the public: we are still too often beholden to the mentality that collecting and analyzing enough information can keep a nation safe. Yet even after all of these years, there's no clear evidence that you can surveil yourself to safety. This is true in general, but it's especially true for international terrorism threats, which have never been numerous or alike enough to be used to train machine learning models, much less make trustworthy predictions.

But there is copious evidence of ongoing surveillance metastasis: intelligence fusion centers, the national security apparatus, the Department of Homeland Security, and enhanced border and customs surveillance have all been deputized to do things far afield from their original purpose of preventing another foreign terrorist attack. Even without serious transparency, we know that those powers and tools have been used for political policing, surveilling activists and immigrants, denying entry to people because of their political stances on social media, and putting entire border communities under surveillance.

The news in the past 20 years isn't all bad, though. We have seen the government end many of the specific methods developed and deployed by the NSA immediately after 9/11. This includes the infamous bulk call detail records program (albeit replaced with an only slightly less problematic program). It also includes the NSA's metadata collection and the "about" searching done under the UPSTREAM program off of the Internet backbone. We have also cut back on the unlimited gag orders accompanying National Security Letters. Each of these was accomplished through a different path, but none of them exists today as it did immediately after 9/11. We even pushed through some modest reforms of the FISA court.

But the biggest good news is the growth of encryption across the digital world, from the encrypting of links between the servers of giants like Google, to the Let’s Encrypt project encrypting web traffic, to the rise of end-to-end encrypted tools like Signal and WhatsApp that have given people around the world greater protections against surveillance even as the governments have become more voracious in their appetites for our data. Of course, the fights over encryption continue, but we should note and celebrate our victories when we can. 

Other nefarious programs continue, including the Internet backbone surveillance that EFF has long sought to bring before the public courts in Jewel v. NSA. And in addition to federal surveillance, we've seen the "collect it all" mentality filter down into our local police departments, both through massive injections of surveillance technology and through the slow enmeshing of local with federal surveillance. We still do not have a full public accounting of the types and scope of surveillance that has been deployed domestically, much less internationally, although EFF is trying to piece some of it together with our Atlas of Surveillance.

Twenty years is a good long time. We now know more of what our government did in the aftermath, and we know how little safety most of these programs produced, along with the disproportionate impact they had on some of our most vulnerable communities. It's time to start applying the clear lessons from that time and to continue to uncover, question, and dismantle both the mass surveillance and the unfettered secrecy that were ushered in when we were all afraid.

Related Cases: Jewel v. NSA

The Catalog of Carceral Surveillance: Voice Recognition and Surveillance

Fri, 09/10/2021 - 11:00am

Prison phone companies have been profiting off the desire for human connection for as long as they’ve been in business. Historically, there’s been one primary instrument for that connection — voice — and only one way to milk it for revenue: by charging exorbitant rates for phone calls. It’s been a profitable business model for both the companies and their partners, the jails and prisons. 

In recent years, though, prison reform advocates and the families of people who are incarcerated, sick of dumping their savings into the maws of these phone providers, have worked to tip this cash cow. They made enough noise that the Federal Communications Commission (FCC) set a cap on per-minute charges for interstate phone calls.

So two of the largest providers of prison communications have devised new ways of mining inmates for income.

Prisoners know their calls while in custody are generally monitored. They may also be aware that they're being recorded (both legally and not so legally). Still, it may shock them that Securus and GTL are working to monetize their ability to eavesdrop on and catalogue the thousands of voices traversing the phone lines of penal facilities in nearly every state, every day.

In the name of security and fraud prevention, these two prison communications companies have developed ways to store and analyze the trove of voices they’ve recorded. The companies create voice prints of people speaking on a prison’s phone lines. The companies claim that, through multi-modal audio mining, these voice prints can be matched to their databases of voices to identify individuals across phone calls and facilities. These systems are already in place throughout the country, including in Arkansas, Florida, and Texas.
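At bottom, this kind of voiceprint matching is a standard speaker-identification pattern: reduce a stretch of call audio to a fixed-length numerical "embedding," then compare it against a database of enrolled voiceprints. The sketch below illustrates only that general pattern; it is not Securus's or GTL's actual system, and every name and threshold in it is a hypothetical placeholder.

    # Generic speaker identification by voiceprint; illustration only,
    # not any vendor's actual system.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify_speaker(embedding, enrolled, threshold=0.75):
        """Return the best-matching enrolled identity, or None if no match
        clears the threshold. `enrolled` maps identity -> stored voiceprint."""
        best_id, best_score = None, threshold
        for identity, voiceprint in enrolled.items():
            score = cosine_similarity(embedding, voiceprint)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id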

These companies are already notorious for pushing the expense of prison communications beyond the prison and onto the support networks and families outside. Now they are working to expand biometric surveillance to the greater carceral community, too. The companies use the technology to identify and profile anyone whose voice crosses into a prison: all the parents, children, lovers, and friends of incarcerated people.

In a patent published in January 2021, Securus described collecting audio samples of individuals’ voices both at the moment of intake and while inmates are communicating with people on the outside. Facilities often acquire voice samples by threatening a loss of privileges should an inmate refuse to bow to the surveillance state. 

“Here’s another part of myself that I had to give away again in this prison system,” one inmate recalled in a 2019 article by The Intercept after he was told that failing to help train the system to recognize his voice would result in a loss of his ability to use the phone. As with other efforts to mass collect biometric and personal information, what happens to the data once it’s been collected and stored, including with whom it’s shared and who has access, is still an open question.  

Securus and GTL have other ideas in the works for possible uses, particularly as these voiceprints can be connected to other databases and people, both in and out of prison. 

Securus would like to see automated background checks based on its voice recognition technology. “[D]etainees with criminal records may be released at the end of a short term stay in a holding tank or may be bonded out without being detected," says Securus.

Securus claims that it will be able to use voiceprint verification to identify “unauthorized” callers based on whether a second voice on one end of the phone line differs from the initial, authorized voice. So, if a prisoner’s girlfriend rings in and passes the phone to a child whose voiceprint wasn’t vetted and approved, the phone system can boot the callers from the call altogether.
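Mechanically, that check is continuous speaker verification rather than one-time identification: keep comparing segments of live call audio against the single enrolled voiceprint, and flag the call when a segment stops matching. Again, a hypothetical sketch under assumed names and thresholds, not the vendors' actual code.

    # Continuous verification against one enrolled voiceprint; illustrative only.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def call_has_unauthorized_speaker(segment_embeddings, authorized_voiceprint,
                                      threshold=0.75):
        """Flag the call if any audio segment stops matching the enrolled
        voiceprint, e.g. because the handset was passed to someone else."""
        return any(cosine_similarity(seg, authorized_voiceprint) < threshold
                   for seg in segment_embeddings)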

Both companies would like to be able to map networks of individuals calling inmates, generating profiles of those who call multiple inmates or who stay in contact with their fellow prisoners once released. 

Global Tel*Link has branded its version as Voice IQ. Securus’s own Investigator Pro claims that “You’ve Never Seen Voice Biometrics Like This.”

With these new patents and initiatives, Securus and Global Tel*Link seek to identify, and almost certainly misidentify, more inmates and their families than ever before, forging new frontiers in the ways America’s prison complex can scrutinize the vulnerable. 

Don’t Stop Now: Join EFF, Fight for the Future at Apple Protests Nationwide

Thu, 09/09/2021 - 6:46pm

We’re winning—but we can’t let up the pressure. Apple has delayed their plan to install dangerous mass surveillance software onto their devices, but we need them to cancel the program entirely. Next week, just before Apple’s big iPhone launch event, we need your help to make sure the company does the right thing. 

Activists from EFF, Fight for the Future, and other digital civil liberties organizations have planned protests around the country for Monday, September 13, at 6PM PT to demand that Apple completely drop its planned surveillance software program. You can find a list of the protests here. Protests are already planned in Boston, Atlanta, Washington D.C., New York City, and Portland (OR).

EFF will host a protest at San Francisco Union Square, with signs, stickers, and speakers, but you can protest no matter where you are:

RSVP NOW

JOIN THE PROTEST AND TELL APPLE: DON'T SCAN OUR PHONES

Whether you’re a longtime fan of Apple’s products or you’ve never used an iPhone in your life, we must hold companies accountable for the promises they make to protect privacy and security. Apple has found its way to making the right choice in the past, and we know they can do it again. 

So bring a friend, wear your EFF merch, and make your voice heard! We’ve got sign designs ready for you to print below, or you can make your own! And you can always add our custom EFF "I do not consent to the search of this device" lock screen to your phone. 

And to make sure that Apple gets the message that encryption is simply too important to give up on, EFF will also be sending it straight to Apple's headquarters—by flying an aerial banner over the campus during their September 14 iPhone launch event.

On September 7, we delivered nearly 60,000 petition signatures to Apple. Over 90 organizations across the globe have also urged the company not to implement its scanning plans. We’re pleased Apple is now listening to the concerns of customers, researchers, civil liberties organizations, human rights activists, LGBTQ people, youth representatives, and other groups about the dangers posed by its phone scanning tools. But we can’t let up the pressure until Apple commits, fully, to protecting privacy and security.

COVID Protocol: We are committed to upholding public health guidelines related to COVID. Please don't attend if you have any COVID symptoms, and we encourage masking and social distancing.

Below you can find printable images for your protest:
