EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Improving Enforcement in State Consumer Privacy Laws

Wed, 07/07/2021 - 7:21pm

Momentum for state privacy bills has been growing over the past couple of years, as lawmakers respond to privacy invasions and constituent demand to address them. As several states end their legislative sessions for the year and lawmakers begin to plan for next year, we urge them to pay special attention to strengthening enforcement in state privacy bills.

Strong enforcement sits at the top of EFF’s recommendations for privacy bills for good reason. Unless companies face serious consequences for violating our privacy, they’re unlikely to put our privacy ahead of their profits. We need a way to hold companies directly accountable to the people they harm—especially as they have shown they’re all-too willing to factor fines for privacy violations into the cost of doing business.

To do so, we recommend a full private right of action—that is, making sure people have a right to sue companies that violate their privacy. This is how legislators normally approach privacy laws. Many privacy statutes contain a private right of action, including federal laws on wiretaps, stored electronic communications, video rentals, driver’s licenses, credit reporting, and cable subscriptions. So do many other kinds of laws that protect the public, including federal laws on clean water, employment discrimination, and access to public records. Consumer data privacy should be no different.

Unless companies face serious consequences for violating our privacy, they’re unlikely to put our privacy ahead of their profits.

Yet while private individuals should be able to sue companies that violate their privacy, it is only part of the solution. We also need strong public enforcement, from regulators such as attorneys general, consumer protection bureaus, or data privacy authorities.

We also advocate against what are called “right to cure” provisions. Rights to cure give companies a certain amount of time to fix violations of the law before they face consequences—essentially giving them a get-out-of-jail free card. This unnecessarily weakens regulators’ ability to go after companies. It can also discourage regulators from investing resources and lawyer time into bringing a case that could very easily disappear under these provisions.

Last year, California voters removed the right to cure from the California Consumer Privacy Act. Unfortunately, several other state bills not only refused to include private rights of action to hold companies accountable, but also hobbled their one enforcement lever with rights to cure.

Some Improvements, But We Still Have a Long Way to Go

The Colorado Privacy Act passed very near the end of the state’s legislative session. It covers entities that process the data of more than 100,000 individuals or sell the data of more than 25,000 individuals. EFF did not take a position on this bill, viewing it as a mixed bag overall. It has no private right of action, centering all of its enforcement in the state Attorney General’s office. The bill also has a right to cure.

However, we do applaud the legislature for adding a sunset to that bill’s right to cure—currently set to expire in 2025. Companies argue that rights to cure make it easier to comply with new regulations, which is often persuasive for lawmakers. We are glad to see Colorado recognize this loophole should not last indefinitely. EFF continues to oppose right to cure provisions but is glad to see them limited. We hope to see Colorado build on the basic privacy rights enshrined in this law in future sessions.

We’ve also seen some small progress toward stronger enforcement. Opponents of strong privacy bills often argue that private rights of action, or expansions of them, are a poison pill for privacy bills. But some legislatures have shown this year that is not true. Nevada improved a consumer privacy bill passed last year, SB 220; the change now permits Nevadans to sue data brokers that violate their privacy rights.

Furthermore, the Florida house voted to pass a bill that contained a full private right of action—a small but significant step forward and a blow against the argument from big tech companies and their legislative enablers that including this important right is a complete non-starter for a privacy bill. Given the recent Supreme Court ruling in the TransUnion case, which places limits on who can sue companies under federal laws, it has never been more important for states to step up and provide these crucial protections for their constituents.

Overall, we would like to see continued momentum around prioritizing strong enforcement—and to see other states move beyond the baselines set in California and Colorado. We certainly should not accept steps backwards. Unfortunately, that is what happened in one state. The data privacy bill passed in Virginia this year is significantly weaker than any other state law in this and other crucial areas. Virginia’s law lacks a private right of action and includes a right to cure. Adding insult to injury, the state also opted to give the law’s sole enforcer, the attorney general’s office, only $400,000 in additional funding to cover its new duties. This anemic effort is wholly inadequate to the task of protecting the privacy of every Virginian. This mistake should not be repeated in other states.

As other states look to pass comprehensive consumer data privacy bills, we urge lawmakers to focus on strong enforcement. There is much work to do. But we are encouraged to see more attention paid to properly funding regulatory bodies, growing support for private rights of action, and limits on rights to cure.

EFF will continue to push for strong privacy laws and demand that these laws have real teeth to value consumer rights over corporate wish lists.  

EFF Gets $300,000 Boost from Craig Newmark Philanthropies to Protect Journalists and Fight Consumer Spyware

Wed, 07/07/2021 - 1:17pm
Donation Will Help with Tools and Training for Newsgatherers, and Research on Technology like Stalkerware and Bossware

San Francisco – The Electronic Frontier Foundation (EFF) is proud to announce its latest grant from Craig Newmark Philanthropies: $300,000 to help protect journalists and fight consumer spyware.

“This donation will help us to develop tools and training for both working journalists and student journalists, preparing them to protect themselves and their sources. We also help journalists learn to research the ways in which local law enforcement is using surveillance tech in its day-to-day work so that, ultimately, communities can better exercise local control,” said EFF Cybersecurity Director Eva Galperin. “Additionally, EFF is launching a public education campaign about what we are calling ‘disciplinary technologies.’ These are tools that are ostensibly for monitoring work or school performance, or ensuring the safety of a family member. But often they result in non-consensual surveillance and data-gathering, and often disproportionately punish BIPOC.”

A prime example of disciplinary technologies is test-proctoring software. Recently, Dartmouth’s Geisel School of Medicine charged 17 students with cheating after misreading software activity during remote exams. After a media firestorm, the school later dropped all of the charges and apologized. Other disciplinary technologies include employee-monitoring bossware, and consumer spyware that is often used to monitor and control household members or intimate partners. Spyware, often based upon similar technologies, is also regularly used on journalists across the globe.

“We need to make sure that technology works for us, not against us,” said EFF Executive Director Cindy Cohn. “We are so pleased that Craig Newmark Philanthropies has continued to support EFF in this important work for protecting journalists and people all over the world.”

Contact: Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org

Greetings from the Internet! Connect with EFF this Summer

Tue, 07/06/2021 - 11:28am

Every July, we celebrate EFF’s birthday and its decades of commitment fighting for privacy, security, and free expression for all tech users. This year’s membership drive focuses on a central idea: analog or digital—what matters is connection. If the internet is a portal to modern life, then our tech must embrace privacy, security, and free expression for its users. You can help free the tubes when you join the worldwide community of EFF members this week.

Join EFF!

Free the Tubes. Boost the Signal.

Through July 20 only, you can become an EFF member for just $20, and get a limited-edition series of Digital Freedom Analog Postcards. Each piece of this set represents part of the fight for our digital future, from protecting free expression to opposing biometric surveillance. Send one to someone you care about and boost our signal for a better internet.

Physical space and cyberspace aren’t divided worlds anymore. The lines didn’t just blur during the last year; they entwined to help carry us through crisis. This laid bare the brilliance and dangers of a world online, showing us how digital policies will shape our lives. You can create a better future for everyone as an EFF member this year.

Boost the Signal

Will you help us encourage people to support internet freedom? It's a big job and it takes all of us. Here’s some language you can share with your circles:

Staying connected has never been more important. Help me support EFF and the fight for every tech user’s right to privacy, free speech, and digital access. https://eff.org/greetings

Twitter | Facebook | Email


Stay Golden

We introduce new member gear each summer to thank supporters and help them start conversations about online rights. This year's t-shirt design is a salute to our resilience and power when we keep in touch.

EFF Creative Director Hugh D'Andrade worked in this retrofuturist, neo-deco art style to create an image that references an optimistic view of the future that we can (and must) build together. The figure here is bolstered by EFF's mission and pale gold and glow-in-the-dark details. We have all endured incredible hardships over the last year, but EFF—with the strength of our relationships and the power of the web—never stopped fighting for a digital world that supports freedom, justice, and innovation for all people. Connect with us and we're unstoppable.

Donate Today

K.I.T. Have a nice summer <3 EFF

The Future Is in Symmetrical, High-Speed Internet Speeds

Fri, 07/02/2021 - 7:32pm

Congress is about to make critical decisions about the future of internet access and speed in the United States. It has a potentially once-in-a-lifetime amount of funding to spend on broadband infrastructure, and at the heart of this debate is the minimum speed requirement for taxpayer-funded internet. It’s easy to get overwhelmed by the granularity of this debate, but ultimately it boils down to this: cable companies want a definition that requires them to do and give less. One that will not meet our needs in the future. And if Congress goes ahead with their definition—100 Mbps of download and 20 of upload (100/20 Mbps)—instead of what we need—100 Mbps of download and 100 Mbps of upload (100/100 Mbps)—we will be left behind.

In order to explain exactly why these two definitions mean so much, and how truly different they are, we’ll evaluate each using five basic questions below. But the too long, didn’t read version is this: in essence, a 100/20 Mbps network can be built with existing cable infrastructure, the kind already operated by companies such as Comcast and Charter, as well as with wireless. But raising the upload requirement to 100 Mbps—and requiring 100/100 Mbps symmetrical services—can only be done by deploying fiber infrastructure. And that number, while requiring fiber, doesn’t come close to fiber’s full capacity, which makes fiber better suited to future internet demand. With that said, let’s get into specifics.

All of the following questions are based on what the United States, as a country, is going to need moving forward. It is not just about giving us faster speeds now, but about preventing us from having to spend this money again in the future when the 100/20 Mbps infrastructure eventually fails to serve us. It’s about making sure that high-quality internet service is available to all Americans, in all places, at prices they can afford. High-speed internet access is no longer a luxury, but a necessity.

Which Definition Will Meet Our Projected Needs in 2026 and Beyond?

Since the 1980s, consumer usage of the internet has grown by 21% on average every single year. Policymakers should bake into their assumptions that 2026 internet usage will be greater than 2021 usage. Fiber has capacity decades ahead of projected growth, which is why it is future-proof. Moreover, high-speed wireless internet will likewise end up depending on fiber, because high-bandwidth wireless towers must have equally high-bandwidth wired connections to the internet backbone.

In terms of predicted needs in 2026, OpenVault finds that today’s average use is 207 Mbps down and 16 Mbps up. If we apply 21% annual growth, 2026 usage will be over 500 Mbps down and 40 Mbps up. But another crucial detail is that upload and download needs aren’t growing at the same rate. Upload, which the average consumer uses far less than download, is growing much faster. This is because we are all coming to use and depend on services that upload much more data. The pandemic underscored this, as people moved to remote socializing, remote learning, remote work, telehealth, and many other services that require high upload speeds and capacity. And even as we emerge from the pandemic, those models are not going to go away.

Essentially, the pandemic jumped our upload needs ahead of schedule, but it does not represent an aberration. If anything, it proved the viability of remote services. And our internet infrastructure must reflect that need, not the needs of the past.

The numbers bear this out, with services reporting upstream traffic increasing 56% in 2020. And if anything close to that rate of growth in upload demand persists, then the average upload demand will exceed 100Mbps by 2026. Those speeds will be completely unobtainable with infrastructure designed around 100/20 Mbps, but perfectly within reach of fiber-based networks.
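These projections are simple compound-growth arithmetic. As a sanity check of the figures above (a sketch, assuming OpenVault's 207/16 Mbps baseline, the historical 21% overall annual growth rate, and a hypothetical sustained 56% annual upload growth over five years):

```python
def project(mbps: float, annual_growth: float, years: int) -> float:
    """Compound a starting speed forward by a fixed annual growth rate."""
    return mbps * (1 + annual_growth) ** years

# Overall usage at the historical 21% annual growth rate, 2021 -> 2026
down_2026 = project(207, 0.21, 5)   # ~537 Mbps: "over 500 Mbps down"
up_2026 = project(16, 0.21, 5)      # ~41 Mbps: "40 Mbps up"

# Upload demand if anything like 2020's 56% upstream growth persists
up_fast_2026 = project(16, 0.56, 5)  # ~148 Mbps: well past 100 Mbps

print(f"2026 projection: {down_2026:.0f}/{up_2026:.0f} Mbps; "
      f"fast-upload case: {up_fast_2026:.0f} Mbps up")
```

Even under the conservative 21% curve, average demand blows past a 100/20 Mbps network's headroom within five years; under the pandemic-era upload curve, the 20 Mbps upstream ceiling is exceeded fivefold.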

Notably, all the applications and services driving the increased demand on upstream usage (telehealth, remote work, distance learning) are based on symmetric usage of broadband—that is 100/100 Mbps and not 100/20 Mbps. And future cloud-based computing services are predicted to actually need higher upload speeds than download speeds to function.

Which Definition Will Increase Upload Speeds Most Cost-Effectively?

With upload demand skyrocketing, networks will have to improve their capacity. However, the cable infrastructure that will be maintained by a 100/20 Mbps definition is already reaching its capacity. That means that, in order to upgrade, companies will eventually have to start replacing the old infrastructure with fiber anyway. Or, they will be stuck delivering below what Americans need. The same is true for wireless internet.

In other words, the only way to upgrade a non-fiber, 100/20 Mbps network is to connect it with fiber. There is just nowhere else for the current infrastructure to go. Deploying fiber now saves everyone the cost of paying for minor upgrades today only to have to deploy fiber in a few years anyway. Slow networks ultimately cost more than going straight to fiber because they eventually have to be replaced by fiber and become wasted investments.

Furthermore, once on fiber, increasing your speed comes much more cheaply, since the hardware at the ends of the fiber connections can be upgraded without digging and laying new cables. You can see this with the financial data from Chattanooga’s municipal fiber entity in 2015 when they upgraded from 1 gigabit to 10 gigabits. They did not experience a substantial increase in costs to upgrade at all.

Which Definition Will Deliver Gigabit Speeds?

For the same reason 100/20 Mbps cable and wireless systems can’t easily improve their upload speeds, they also can’t turn around and deliver gigabit speeds. Meanwhile, the same fiber network able to deliver 100/100 Mbps is also capable of delivering 1,000/1,000 Mbps and 10,000/10,000 Mbps with affordable upgrades to its hardware. 80,000/80,000 Mbps is already possible today over the same fiber wire, though the price of the hardware remains high. As that price comes down, 80 gigabit symmetrical could become the next standard for fiber networks. Wireless connected with fiber benefits from these gains as well, with the only limitation being the amount of spectrum available for wireless transmission.

Which Definition Will Give Americans an Affordable Option That Meets Their Needs Over Time?

There is zero chance a network built to deliver 100/20 Mbps that isn’t premised on fiber can offer a scalable, low-cost solution in the future, for all the reasons listed above. Capacity constraints on cable and non-fiber-based wireless drastically limit the extent to which they can add new users. Their solution is to offer significantly lower speeds than 100/20 Mbps to minimize the burden on their capacity-constrained network. But a fiber network can share the gains it makes from advancements in hardware because it does not experience a new cost burden to deliver a scalable solution. This is why Chattanooga was able to give its low-income students free 100/100 Mbps internet access during the pandemic at very little cost to the network.

Which Definition Makes the U.S. Globally Competitive?

Advanced markets in Asia, led by China, will connect a total of 1 billion people to symmetrical gigabit lines. China committed years ago to deploying universal fiber, and it is rapidly approaching that goal. The U.S. could choose to do the same. However, if it instead chooses to upgrade some cable networks and push some slow wireless connectivity out to communities at 100/20 Mbps, our capacity to innovate and grow the internet technology sector will be severely hindered. After all, if the U.S. market cannot offer a communications infrastructure capable of running the next generation of applications and services because of slow, obsolete speeds, then those applications and services will find their home elsewhere. Not only will this hurt our ability to attract a technology sector, but every industry dependent on connectivity will be relying on speeds vastly inferior to those of gigabit fiber-connected businesses.

In each one of these questions, it is clear that the government needs to invest in fiber infrastructure, which means defining what technology gets taxpayer dollars at 100/100 Mbps. The existing monopolies would like to get that money for infrastructure they don’t actually have to build—old cable lines that can meet the 100/20 Mbps definition—but giving it to them would be a grave disservice to Americans.


Victory! Fourth Circuit Rules Baltimore’s Warrantless Aerial Surveillance Program Unconstitutional

Fri, 07/02/2021 - 2:22pm

This blog post was cowritten by EFF intern Lauren Yu.

The U.S. Court of Appeals for the Fourth Circuit ruled last week that Baltimore’s use of aerial surveillance that could track the movements of the entire city violated the Fourth Amendment.

The case, Leaders of a Beautiful Struggle v. Baltimore Police Department, challenged the Baltimore Police Department’s (BPD) use of an aerial surveillance program that continuously captured an estimated 12 hours of coverage of 90 percent of the city each day for a six-month pilot period. EFF, joined by the Brennan Center for Justice, Electronic Privacy Information Center, FreedomWorks, National Association of Criminal Defense Lawyers, and the Rutherford Institute, filed an amicus brief arguing that the two previous court decisions upholding the constitutionality of the program misapplied Supreme Court precedent and failed to recognize the disproportionate impact of surveillance, like Baltimore’s program, on communities of color. 

In its decision, the full Fourth Circuit found that BPD’s use and analysis of its Aerial Investigation Research (AIR) data was a warrantless search that violated the Fourth Amendment. Relying on the Supreme Court’s decisions in United States v. Jones and United States v. Carpenter, the Fourth Circuit held that Carpenter—which ruled that cell-site location information was protected under the Fourth Amendment and thus may only be obtained with a warrant—applied “squarely” to this case. The Fourth Circuit explained that the district court had misapprehended the extent of what the AIR program could do. The district court believed that the program only engaged in short-term tracking. However, the Fourth Circuit clarified that, like the cell-site location information tracking in Carpenter, the AIR program’s detailed data collection and 45-day retention period gave BPD the ability to chronicle movements in a “detailed, encyclopedic” record, akin to “attaching an ankle monitor to every person in the city.”

The court further stated that oversurveillance and the resulting overpolicing do not allow different communities to enjoy the same rights.

That ability to deduce an individual’s movements over time violated Baltimore residents’ reasonable expectation of privacy. In making that determination, the court underscored the importance of considering not only the raw data that was gathered but also “what that data could reveal.” Contrary to the BPD’s claims that the aerial surveillance data was anonymous, the court pointed to studies that demonstrated the ease with which people could be identified by just a few points of their location history because of the unique and habitual way we all move. Moreover, the court stated that when this data was combined with Baltimore’s wide array of existing surveillance tools, deducing an individual’s identity became even simpler.

The court also recognized the racial and criminal justice implications of oversurveillance. It noted that although mass surveillance touches everyone, “its hand is heaviest in communities already disadvantaged by their poverty, race, religion, ethnicity, and immigration status,” and that the impact of high-tech monitoring is “conspicuous in the lives of those least empowered to object.” The court further stated that oversurveillance and the resulting overpolicing do not allow different communities to enjoy the same rights: while “liberty from governmental intrusion can be taken for granted in some neighborhoods,” others “experience the Fourth Amendment as a system of surveillance, social control, and violence, not as a constitutional boundary that protects them from unreasonable searches and seizures.”

In a powerful concurring opinion, Chief Judge Gregory dug deeper into this issue. Countering the dissent’s assumption that limiting police authority leads to more violence, the concurrence pointed out that Baltimore spends more per capita on policing than any comparable city, with disproportionate policing of Black neighborhoods. However, policing like the AIR program did not make the city safer; rather, it ignored the root issues that perpetuated violence in the city, including a long history of racial segregation, redlining, and wildly unequal distribution of resources.

We are pleased that the Fourth Circuit recognized the danger in allowing BPD to use mass aerial surveillance to track virtually all residents’ movements. Although Baltimore discontinued the program, it is far from the only city to employ such intrusive technologies. This decision is an important victory in protecting our Fourth Amendment rights and a big step toward ending intrusive aerial surveillance programs, once and for all.

EFF is Highlighting LGBTQ+ Issues Year-Round

Fri, 07/02/2021 - 12:19pm

EFF is dedicated to ensuring that technology supports freedom, justice and innovation for all the people of the world. While digital freedom is an LGBTQ+ issue, LGBTQ+ issues are also digital rights issues. For example, LGBTQ+ communities are often those most likely to experience firsthand how big tech can restrict free expression, capitulate to government repression, and undermine user privacy and security. In many ways, the issues faced by these communities today serve as a bellwether of the fights other communities will face tomorrow. This is why EFF is committing to highlight these issues not only during Pride month, but year-round on our new LGBTQ+ Issue Page.

Centering LGBTQ+ Issues

Last month many online platforms featured pride events and rainbow logos (in certain countries). But their flawed algorithms and moderation restrict the freedom of expression of the LGBTQ+ community year-round. Some cases are explicit, like when blunt moderation policies, responding in part to FOSTA-SESTA, shut down discussions of sexuality and gender. In other instances, platforms such as TikTok more subtly restrict LGBTQ+ content, allegedly to “protect” users from bullying, while promoting homophobic and anti-trans content.

Looking beyond the platforms, government surveillance of LGBTQ+ individuals is also a longstanding concern, including such historic cases as FBI Director J. Edgar Hoover’s maintenance of a “Sex Deviant” file in the 1960s that was used for state abuse. In addition to government repression seen in the U.S. and internationally, data collection from apps disproportionately increases the risk to LGBTQ+ people online and off, because exposing this data can enable targeted harassment. These threats in particular were explored in a blog post last month on Security Tips for Online LGBTQ+ Dating.

At Home with EFF: Pride Edition

For the second year in a row, EFF held an At Home with EFF livestream panel to highlight these and other related issues, facilitated by EFF Technologist Daly Barnett. This year’s panel featured Hadi Damien, co-president of InterPride; moses moon, a writer also known as @thotscholar; Ian Coldwater, Kubernetes SIG Security co-chair; and network security expert Chelsea Manning.

This conversation featured a broad range of expert opinions and insight on a variety of topics, from how to navigate the impacts of tightly controlled social media platforms, to ways to conceptualize open source licensing to better protect LGBTQ+ individuals.  

If you missed this informative discussion, you can still view it in its entirety on the EFF Facebook, Periscope, or YouTube page (video below):

[Embedded YouTube video: https://www.youtube-nocookie.com/embed/j1syMyq7cFM] Privacy info. This embed will serve content from youtube-nocookie.com.

LGBTQ+ community resources

Now that June has drawn to a close, there are some ongoing commitments from EFF which can help year-round. For up-to-date information on LGBTQ+ and digital rights issues, you can refer to EFF’s new LGBTQ+ issue page. Additionally, EFF maintains an up-to-date digital security advice project, Surveillance Self-Defense, which includes a guide specific to LGBTQ+ youth.

LGBTQ+ activists can refer to the EFF advocacy toolkit and, if their work intersects with digital rights, are invited to reach out to the EFF organizing team at organizing@eff.org. People regularly engaging in digital rights and LGBTQ+ issues should also consider joining EFF’s own grassroots advocacy network, the Electronic Frontier Alliance.

Supreme Court Narrows Ability to Hold U.S. Corporations Accountable for Facilitating Human Rights Abuses Abroad

Thu, 07/01/2021 - 7:46pm

People around the world have been horrified at the role that technology companies like Cisco, Yahoo!, and Sandvine have played in helping governments commit gross human rights abuses. That’s why EFF has consistently called out technology companies, and American companies in particular, that allow their internet surveillance and censorship products and services to be used as tools of repression and persecution, rather than tools to uplift humanity. Yet legal mechanisms to hold companies accountable for their roles in human rights violations are few and far between.

The Supreme Court has now further narrowed one mechanism: the Alien Tort Statute (ATS). We now call on Congress to fill the gaps where the Court has failed to act.

The Supreme Court recently issued an opinion in Nestlé USA, Inc. v. Doe, in which we filed an amicus brief (along with Access Now, Article 19, Privacy International, the Center for Long-Term Cybersecurity, and Ronald Deibert, director of Citizen Lab at the University of Toronto). Former child slaves on cocoa farms in Côte d’Ivoire claimed that two American chocolate companies, Nestlé USA and Cargill, facilitated their abuse at the hands of the farm operators by providing training, fertilizer, tools, and cash in exchange for the exclusive right to buy cocoa. The plaintiffs sued under the ATS, a law first passed by Congress in 1789, which allows foreign nationals to bring civil claims in U.S. federal court against defendants who violated “the law of nations or a treaty of the United States,” which many courts have recognized should include violations of modern notions of human rights, including forced labor.

EFF’s brief detailed how surveillance, communications, and database systems, just to name a few, have been used by foreign governments—with the full knowledge of and assistance by the U.S. companies selling those technologies—to spy on and track down activists, journalists, and religious minorities who have then been imprisoned, tortured, and even killed.

First, the Bad News

The centerpiece of the Supreme Court’s opinion is about what has to happen inside the U.S. to make an American company liable, since the Court’s earlier decision in Kiobel v. Royal Dutch Petroleum (2013) had rejected the idea that a multinational corporation based in the U.S. could be held liable solely for its actions abroad. The former child slaves alleged that Nestlé USA and Cargill made “every major operational decision” in the United States, along with, of course, pocketing the profits. This U.S. activity was in addition to the training, fertilizer, tools, and cash the companies provided to farmers abroad in exchange for the exclusive right to buy cocoa. The Court rejected this “operational decision” connection to the U.S. as a basis for ATS liability, saying: 

Because making “operational decisions” is an activity common to most corporations, generic allegations of this sort do not draw a sufficient connection between the cause of action respondents seek—aiding and abetting forced labor overseas—and domestic conduct … To plead facts sufficient to support a domestic application of the ATS, plaintiffs must allege more domestic conduct than general corporate activity.

We strongly disagree with the Court. When a company or an employee leads the company’s operations from within the United States and pockets profits from human rights abuses suffered abroad, the courts in the United States must exercise jurisdiction to hold them accountable. This is especially important when victims have few other options, as is often the case for people living under repressive or corrupt regimes. 

But this decision should be of little comfort to companies that take material steps in the U.S. to develop digital tools that are used to facilitate human rights abuses abroad—companies like Cisco, which, according to the plaintiffs in that case, specifically created an internet surveillance system for the Chinese government that targeted minority groups like the Falun Gong for repression. Building surveillance tools for the specific purpose of targeting religious minorities is not merely an “operational decision,” even under the Supreme Court’s crabbed view. EFF’s Know your Customer framework is a good place to start for any company seeking to stay on the right side of human rights.

Next, Some Good News

While we are not happy with the Nestlé decision, it did not embrace some of the more troubling arguments from the companies.

A key question on appeal was whether U.S. corporations should be immune from suit under the ATS, that is, whether ATS defendants may only be natural persons. The Supreme Court had already held in Jesner v. Arab Bank (2018) that foreign corporations are immune from suit under the ATS, meaning that U.S. courts don’t have jurisdiction over, for example, a company based in Europe that relies on forced labor in Asia. Thus, the question remained open as to U.S. corporations. In Nestlé, five justices (Sotomayor, Breyer, Kagan, Gorsuch, Alito) agreed that the ATS should apply to U.S. corporations.

As Justice Gorsuch wrote in his concurring opinion, “Nothing in the ATS supplies corporations with special protections against suit… Generally [] the law places corporations and individuals on equal footing when it comes to assigning rights and duties.” This was refreshing consistency, as the Court has held that corporations are “persons” in other legal contexts, including for purposes of free speech, and the companies in this case had pushed hard for a blanket corporate exception to ATS liability.

Corporate accountability dodged another bullet: a majority of the Court appears to have agreed that, under the circumstances the Court addressed in Sosa v. Alvarez-Machain (2004), federal courts may continue to recognize new causes of action for violations of modern conceptions of human rights, given that the “law of nations” has evolved over the centuries. This might include new substantive claims, such as for child slavery, or indirect forms of liability, such as aiding and abetting. Justice Sotomayor, joined by Breyer and Kagan, was explicit about this in her concurrence. She cited the Court’s opinion in Jesner, which notes “the evolving recognition … that certain acts constituting crimes against humanity are in violation of basic precepts of international law.”

Only three justices (Thomas, Gorsuch, Kavanaugh) would have limited the ATS to a very narrow set of historical claims involving piracy or violations of the rights of diplomats and “safe conducts.” These justices would prohibit new causes of action under the ATS, including the claim of aiding and abetting child slavery at issue in Nestlé.

Next Step: Congress

As the Supreme Court has increasingly tightened its view of the ATS, in large part because the law is very old and not very specific, the Nestlé decision should be a signal for Congress. Justice Thomas’ opinion goes to great lengths to praise Congress’ ability to create causes of action and forms of liability by statute, grounded in international law. He argues “that there always is a sound reason to defer to Congress.”

Congress should take him up on this invitation and act now to ensure U.S. courts remain avenues of redress for victims of human rights violations—especially as American companies continue to be leaders in developing and selling digital tools of repression to foreign governments. Any American company that puts profits over human rights should face real accountability for doing so.

Related Cases: Doe I v. Cisco

Victory! Federal Court Halts Florida’s Censorious Social Media Law Privileging Politicians’ Speech Over Everyday Users

Thu, 07/01/2021 - 3:44pm

A federal court on Thursday night blocked Florida’s effort to force internet platforms to host political candidates’ and media entities’ online speech, ruling that the law violated the First Amendment and a key federal law that protects users’ speech. We had expected the court to do so.

The Florida law, S.B. 7072, prohibited large online intermediaries—save for those that also happened to own a theme park in the state—from terminating politicians’ accounts or taking steps to de-prioritize their posts, regardless of whether the posts violated the sites’ own content policies. The law also prevented services from moderating posts by anyone who qualified as a “journalistic enterprise” under the statute, which was so broadly defined as to include popular YouTube and Twitch streamers.

EFF and Protect Democracy filed a friend-of-the-court brief in the case, NetChoice v. Moody, arguing that although online services frequently make mistakes in moderating users’ content, disproportionately harming marginalized voices, the Florida statute violated the First Amendment rights of platforms and other internet users. Our brief pointed out that the law would only have “exacerbat[ed] existing power disparities between certain speakers and average internet users, while also creating speaker-based distinctions that are anathema to the First Amendment.”

In granting a preliminary injunction barring Florida officials from enforcing the law, the court agreed with several arguments EFF made in its brief. As EFF argued, the “law itself is internally inconsistent in that it requires ‘consistent’ treatment of all users, yet by its own terms sets out two categories of users for inconsistent special treatment.”

The court agreed, writing that the law “requires a social media platform to apply its standards in a consistent manner, but . . . this requirement is itself inconsistent with other provisions.”

The court also found that the law intruded upon online services’ First Amendment rights to set their own content moderation policies, largely because it mandated differential treatment of the content of certain online speakers, such as political candidates, over others. These provisions made the law “about as content-based as it gets,” the court wrote.

Because the law amounted to a content- and viewpoint-based restriction on speech, Florida was required to show that it had a compelling interest in the restrictions and that they did not burden any more speech than necessary to advance that interest.

The court ruled the Florida law failed that test. “First, leveling the playing field—promoting speech on one side of an issue or restricting speech on the other—is not a legitimate state interest,” the court wrote.

Further, the law’s speech restrictions and burdens swept far beyond addressing concerns about online services silencing certain voices, as the court wrote that the law amounted to “an instance of burning the house to roast the pig.”

As EFF wrote in its brief, inconsistent and opaque content moderation by large online media services is a legitimate problem that leads to online censorship of too much important speech. But coercive measures like S.B. 7072 are not the answer to this problem:

The decisions by social media platforms to cancel accounts and deprioritize posts may well be scrutinized in the court of public opinion. But these actions, as well as the other moderation techniques barred by S.B. 7072, are constitutionally protected by binding Supreme Court precedent, and the state cannot prohibit, proscribe, or punish them any more than states can mandate editorial decisions for news media.

EFF is pleased that the court has temporarily prohibited Florida from enforcing S.B. 7072 and we look forward to the court issuing a final ruling striking the law down. We would like to thank our local counsel, Christopher B. Hopkins, at McDonald Hopkins LLC for his help in filing our brief.

Nominations Open for 2021 Barlows!

Thu, 07/01/2021 - 3:00am

Nominations are now open for the 2021 Barlows to be presented at EFF's 30th Annual Pioneer Award Ceremony. Established in 1992, the Pioneer Award Ceremony recognizes leaders who are extending freedom and innovation in the realm of technology. In honor of Internet visionary, Grateful Dead lyricist, and EFF co-founder John Perry Barlow, recipients are awarded a “Barlow," previously known as the Pioneer Awards. The nomination window will be open until July 15th at noon, 12:00 PM Pacific time. You could nominate the next Barlow winner today!

What does it take to be a Barlow winner? Nominees must have contributed substantially to the health, growth, accessibility, or freedom of computer-based communications. Their contributions may be technical, social, legal, academic, economic or cultural. This year’s winners will join an esteemed group of past award winners that includes the visionary activist Aaron Swartz, global human rights and security researchers The Citizen Lab, open-source pioneer Limor "Ladyada" Fried, and whistle-blower Chelsea Manning, among many remarkable journalists, entrepreneurs, public interest attorneys, and others.

The Pioneer Award Ceremony depends on the generous support of individuals and companies with passion for digital civil liberties. To learn about how you can sponsor the Pioneer Award Ceremony, please email nicole@eff.org.

Remember, nominations close on July 15th at noon, 12:00 PM Pacific time! After you nominate your favorite contenders, we hope you will consider joining our virtual event this fall to celebrate the work of the 2021 winners. If you have any questions or if you'd like to receive updates about the event, please email events@eff.org.

GO TO NOMINATION PAGE

Nominate your favorite digital rights hero now!

Victory! Biden Administration Rescinds Dangerous DHS Proposed Rule to Expand Biometrics Collection

Wed, 06/30/2021 - 6:17pm

Marking a big win for the privacy and civil liberties of immigrant communities, the Biden Administration recently rescinded a Trump-era proposed rule that would have massively expanded the collection of biometrics from people applying for an immigration benefit. Introduced in September 2020, the U.S. Department of Homeland Security (DHS) proposal would have mandated biometrics collection far beyond the status quo—including facial images, voice prints, iris scans, DNA, and even behavioral biometrics—from anyone applying for an immigration benefit, including immigrants and often their U.S. citizen and permanent resident family members.

The DHS proposed rule garnered more than 5,000 comments in response, the overwhelming majority of which opposed this unprecedented expansion of biometrics. Five U.S. Senators also demanded that DHS abandon the proposal.

EFF, joined by several leading civil liberties and immigrant rights organizations, submitted a comment that warned the proposal posed grave threats to privacy, in part because it permitted collection of far more data than needed to verify a person’s identity and stored all data collected in the same place—amplifying the risk of future misuse or breach. EFF’s comment also highlighted the burden on First Amendment activity, particularly because the breadth of sensitive biometrics required by the proposal could lay the groundwork for a vast surveillance network capable of tracking people in public places. That harm would disproportionately impact immigrants, communities of color, religious minorities, and other marginalized communities.

In its final days, the Trump Administration failed to finalize the proposed rule. Civil liberties and immigrant rights organizations, including EFF, pushed hard during the transition period to rescind it. Last month, the Biden Administration did just that.

The rescission of this dangerous proposal is important to protecting the privacy rights of immigrant communities. However, those rights continue to be eroded, including by a regulation enacted last year that requires DHS to collect DNA from people in U.S. Immigration and Customs Enforcement (ICE) and U.S. Customs and Border Protection (CBP) custody and enter it into the FBI’s CODIS database. Recent reporting has shown this practice is even more widespread than anticipated, with border officers relying on the regulation to collect DNA from asylum-seekers.

History has long shown that the surveillance we allow against vulnerable communities eventually spreads to the rest of the population. While the rescission of this proposed rule is a good first step, the battle for the privacy rights of immigrants—and for all of us—continues.

Related Cases: Federal DNA CollectionDNA Collection

PCLOB “Book Report” Fails to Investigate or Tell the Public the Truth About Domestic Mass Surveillance

Wed, 06/30/2021 - 12:48pm

The Privacy and Civil Liberties Oversight Board (PCLOB) has concluded its six-year investigation into Executive Order 12333, one of the most sprawling and influential authorities that enables the U.S. government’s mass surveillance programs. The result is a bland, short summary of a classified report, as well as a justified, scathing, and unprecedented unclassified statement of opposition from PCLOB member Travis LeBlanc.

Let’s start with the fact that the report is still classified—the PCLOB is supposed to provide public access to its work “to the greatest extent” consistent with the law and the needs of classification.  Yet the public statement here is just 26  pages describing, rather than analyzing, the program. Nothing signals to the public a lack of commitment to transparency and a frank assessment of civil liberties violations like blocking the public from even reading a report about one of the most invasive U.S. surveillance programs.

Member LeBlanc rightly points out that, at a minimum, the PCLOB should have sought to have as much of its report declassified as possible, rather than issuing what he correctly criticizes as more like a “book report” than an expert legal and technical assessment. 

The PCLOB was created after a recommendation by the 9/11 Commission to address important civil liberties issues raised by intelligence community activities. While its first report about Section 215 was critical in driving Congress to scale back that program, other PCLOB reports have been less useful. EFF sharply disagreed with the Board’s findings in 2014 on surveillance under FISA Section 702, especially where it found that the Section 702 program is sound “at its core,” and provides “considerable value” in the fight against terrorism—despite going on to make ten massive recommendations for what the program must do to avoid infringing on people’s privacy.

But even by the standards of past PCLOB reports, this latest report represents a new low, especially when addressing the National Security Agency’s XKEYSCORE. XKEYSCORE is a tool that the NSA uses to sift through the vast amounts of data it obtains, including under Executive Order 12333. As the Guardian reported in 2013 based upon Edward Snowden's revelations, XKEYSCORE gives analysts the power to watch—in real time—anything a person does on the Internet. There are real issues raised by this tool and, as LeBlanc notes, other than by the PCLOB, the XKEYSCORE program is “unlikely to be scrutinized by another independent oversight authority in the near future.”

LeBlanc writes that his opposition to the report stems from: 

  1. The unwillingness of the investigation into Executive Order 12333 to scrutinize modern technological surveillance issues, such as algorithmic decision making, and their impact on privacy and civil liberties; 
  2. A failure of the Board majority to investigate and evaluate not just how XKEYSCORE can query online communications that the NSA already has, but the legal authority and technological mechanisms that allow it to collect that data in the first place; 
  3. The decision to leave out of the report any analysis of the actual effectiveness, costs, or benefits of XKEYSCORE; 
  4. The haphazard and unthoughtful way the NSA defended its legal justification for the program’s use—and the Board’s unwillingness to probe into any possible issues of compliance; 
  5. A vote to exclude LeBlanc and Board member Ed Felten’s additional recommendations from the report; 
  6. The unwillingness of the Board to attempt to declassify the full report or inform the public about it, which LeBlanc labels as “inexcusable,” and,
  7. The unconventional process by which the Board voted to release the report. 

Any one of these concerns would be significant. Taken together, they are a scathing indictment of an oversight board that appears to be unable or unwilling to exercise actual oversight.

As LeBlanc notes, there is so much about XKEYSCORE and the NSA’s operations under Executive Order 12333 that require more public scrutiny and deep analysis of their legality. But it seems impossible to achieve this under the current regime of overbroad secrecy and PCLOB's refusal to play its role in both analyzing the programs and giving the public the information it needs. LeBlanc rightly notes that the report ignores the “collection” of information and that both collection and querying “are worthy of review for separate legal analysis, training, compliance and audit processes.” As we have long argued, this analysis is important “whether the collection and querying activities are performed by humans or machines.”

He also notes that the PCLOB failed to grapple with so-called “incidental” collection—how ordinary Americans are caught up in mass surveillance even when they are not the targets. And he notes that the PCLOB failed to investigate compliance and accepted a legality analysis of XKEYSCORE by the NSA’s Office of General Counsel that appears to only have been written after the PCLOB requested it, despite the program operating for at least a decade before. What's more, the review fails to take into consideration the Supreme Court’s more modern analysis of the Fourth Amendment. Those are just some of his concerns—all of which we share. 

Nor did the PCLOB analyze the effectiveness of the program. Basic questions remain unanswered, like whether this program has ever saved any lives, or whether, like so many other mass surveillance programs, any necessary information it gathered could have been collected in another manner. These are important questions we deserve answers to—and at least one member of the PCLOB board agrees.

On December 4, 1981, Ronald Reagan signed Executive Order 12333, which gave renewed and robust authorities to the U.S. intelligence community to begin a regime of data collection that would eventually encompass the entire globe. Executive Order 12333 served as a damning pivot after less than a decade of reforms and mea culpas. The 1975 report of the Church Committee revealed, and attempted to end, three decades of lawless surveillance, repression, blackmail, and sabotage that the FBI, CIA, and NSA wrought on Americans and the rest of the world. Executive Order 12333 returned the intelligence community to its original state: legally careless, technically irresponsible, and insatiable for data.

Twenty-three years after Reagan signed Executive Order 12333, PCLOB was established as a counterbalance to the intelligence community’s free rein over the years. But it’s clear that despite some early achievements, the PCLOB is not living up to its promise. That’s why cases like EFF’s Jewel v. NSA, while not about XKEYSCORE or Executive Order 12333 per se, are critically important to ensure that our constitutional and statutory rights remain protected by the courts, since independent oversight is failing. But at minimum, the PCLOB owes the public the truth about mass surveillance—and even its members are starting to see that.

Related Cases: Jewel v. NSA

Setbacks in the FTC’s Antitrust Suit Against Facebook Show Why We Need the ACCESS Act

Wed, 06/30/2021 - 12:58am

After a marathon markup last week, a number of bills targeting Big Tech’s size and power, including the critical ACCESS Act, were passed out of committee and now await a vote by the entire House of Representatives. This week, a federal court’s decisions tossing out both the Federal Trade Commission’s (FTC) antitrust complaint against Facebook and a similar one brought by 48 state Attorneys General underscore why we need those new laws.

The federal court in DC ruled narrowly that the FTC hadn’t included enough detail in its complaint about Facebook’s monopoly power, and gave it 30 days to do so.  That alone was troubling, but probably not fatal to the lawsuit. A more ominous problem came in what lawyers call dicta: the court opined that even if Facebook had monopoly power, its refusal to interoperate with competing apps was OK. This decision highlights many of the difficulties we face in applying current antitrust law to the biggest Internet companies, and shows why we need changes to the law to address modern competition problems caused by digital platform companies like Facebook.

When the FTC filed its suit in December 2020, and 48 U.S. states and territories filed a companion suit, we celebrated the move, but we predicted two challenges: 1) proving that Facebook has monopoly power and 2) overcoming Facebook’s defenses around preserving its own ability and incentives to innovate. Yesterday’s decision by Judge James E. Boasberg of the Federal District Court for D.C. touched on both of those.

What Is Facebook, Exactly?

To make a case under the part of antitrust law that deals with monopolies—Section 2 of the Sherman Act—a plaintiff has to show that the accused company is a monopoly, legally speaking, or trying to become one. And being a monopoly doesn’t just mean being a really large company or commanding a lot of public attention and worry. It means that within some “market” for goods or services, the company has a large enough share of commerce that the prices it charges, or the quality of its products, aren’t constrained by rivals. Defining the market is a hugely important step in monopoly cases, because a broad definition could mean there’s no monopoly at all.

Judge Boasberg accepted the FTC’s market definition for social networks, at least for now. Facebook argued that its market could include all kinds of communications tools that don’t employ a “social graph” to link each user’s personal connections. In other words, Facebook argued that it competes against lots of other communications tools (including, perhaps, telephone calls and email) so it’s not a monopolist. Judge Boasberg rejected that argument, ruling that the FTC’s definition of “personal social networks” as a unique product was at least “theoretically rational.”

But the judge also ruled that the FTC hadn’t included enough detail in its complaint about Facebook’s power within the social network market. While the FTC alleged that Facebook controlled “in excess of 60%” of the market, the agency didn’t say anything about how it arrived at that figure, nor which companies made up the other 40%. In an “unusual, nonintuitive” market like social networks, the judge said, a plaintiff has to put more detail in its complaint beyond a market share percentage.

Even though market definition questions are often the place where a monopoly case lives or dies, this issue doesn’t seem to be fatal to the FTC’s case. The agency can probably file an amended complaint giving more detail about who’s in the social networking market and how big Facebook’s share of that market is. Alternatively, the FTC might allege that Facebook has monopoly power because it has repeatedly broken its public promises about protecting users’ privacy, and otherwise become even more of a surveillance-powered panopticon, without losing any significant number of users. This approach is equivalent to alleging that a company is able to raise prices without losing customers—a traditional test for monopoly power.

The case has a long way to go, because the FTC (and the states) still have to prove their allegations with evidence. We can expect a battle between expert economists over whether Facebook actually competes with LinkedIn, YouTube, Twitter, email, or the comments sections of newspaper sites. But in the meantime, the case is likely to clear an important early hurdle.

Interoperability is Not Required – Even for a Monopolist

Another part of the court’s decision is more troubling. Facebook doesn’t allow third-party apps to interoperate with Facebook if they “replicate Facebook’s core functionality”—i.e. if they compete with Facebook. The FTC alleged that this was illegal given Facebook’s monopoly power. Judge Boasberg disagreed, writing that “a monopolist has no duty to deal with its competitors, and a refusal to do so is generally lawful even if it is motivated . . . by a desire ‘to limit entry’ by new firms or impede the growth of existing ones.” A monopolist’s “refusal to deal” with a competitor, wrote the judge, is only illegal when the two companies already had an established relationship, and cutting off that relationship would actually hurt the monopolist in the short term (a situation akin to selling products at a loss to force a rival out of business, known as “predatory pricing”). Facebook’s general policy of refusing to open its APIs to competing apps didn’t fit into this narrow exception.

This ruling sends a strong signal that the FTC won’t be able to use its lawsuit to compel Facebook to allow interoperability with potential competitors. This decision doesn’t end the lawsuits, because Judge Boasberg ruled that the FTC’s challenge to Facebook’s gobbling up potential rivals like Instagram and WhatsApp was valid and could continue. Cracking down on tech monopolists’ aggressive acquisition strategies is an important part of dealing with the power of Big Tech. But the court’s dismissal of the interoperability theory is a significant loss.

We hope the FTC and the states appeal this decision at the appropriate time, because the law can and should require companies with a gatekeeper role on the Internet to interoperate with competing applications. As we’ve written, interoperability helps solve the monopoly problem by allowing users to leave giant platforms like Facebook without leaving their friends and social connections behind. New competitors could compete by offering their users better privacy protections, better approaches to content moderation, and so on, which in turn would force the Big Tech platforms to do better on these fronts. This might be possible under today’s antitrust laws, particularly if courts are willing to adopt a broader concept of “predatory conduct” that encompasses Big Tech’s strategy of foregoing profits for many years while growing an unassailable base of users and their data—an approach that incoming FTC chair Lina Khan suggested in her seminal paper about Amazon. We hope the FTC pursues an interoperability solution and doesn’t let this important part of the case fade away.

But we shouldn’t bet the future of the Internet on a judicial solution, because Judge Boasberg’s ruling on interoperability in this case is no outlier. Many courts, including the Supreme Court, have generally been comfortable with monopolists standing at the gates they have built and denying entry to anyone who might one day threaten their power. We need a change in the law. That’s why EFF supports the ACCESS Act, which the House Judiciary Committee approved last week with bipartisan support. ACCESS will require the biggest online platforms to interoperate with third-party apps through APIs established by technical committees and approved by the FTC, while still requiring all participants to safeguard users’ privacy and security.

We’re pleased to see the antitrust cases against Facebook continue, and we trust that the FTC attorneys under Lina Khan will give it their all, along with the states who continue to champion users in this fight. But we’re worried about the limitations of today’s antitrust law, as shown by yesterday’s decision. The ACCESS Act, along with the other Big Tech-related bills advanced last week and similar efforts in the Senate, are badly needed.

A Wide, Diverse Coalition Agrees on What Congress Needs to Do About Our Broadband

Tue, 06/29/2021 - 11:38am

A massive number of groups representing interests as diverse as education, agriculture, the tech sector, public and private broadband providers, low-income advocacy, workers, and urban and rural community economic development came together on a letter asking Congress to be bold in its infrastructure plan. They are asking the U.S. Congress to tackle the digital divide with the same purpose and scale as rural electrification, and to focus on delivering 21st century, future-proof access to every American. While slow-internet incumbents are pushing Congress to go small and do little, a huge contingent of this country is eager for Congress to solve the problem and end the digital divide.

What Unifies so Many Different Groups? Fully Funding Universal, Affordable, Future-Proof Access

For months, Congress has been hounded by big ISP lobbyists interested in preserving their companies’ take of government money. The big ISPs—your AT&Ts, Comcasts, and the former Time Warner Cable—want to preserve the monopolies that have produced our current limited, expensive, slow internet access. Americans’ needs and interests run the opposite way: a strong focus on building 21st century-ready infrastructure that reaches everyone.

At the core of every new network lie fiber optic wires, an inconvenient fact for legacy monopolies that intended to rely on obsolete wires for years to come. All this opposition comes as a billion fiber lines are being laid in advanced Asian markets, primarily led by China, calling into question whether the United States wants to keep up or be left behind. The incumbents have argued that broadband is already affordable and that efforts to promote the public model for rural and low-income access amount to a “Soviet” takeover of broadband.

But our collective lived experience, from kids in large cities doing pandemic homework in fast food parking lots to rural Americans unable to engage meaningfully in remote work and distance learning, makes clear we need a change. In cities, where it is profitable to serve everyone fully, it’s clear that low-income people have been discriminated against and skipped over through digital redlining. Rural Americans with basic internet access are forced to rely on inferior, expensive service that is incapable of providing access to the modern internet (let alone the future one).

ISPs have obscured this systemic problem by lobbying to keep 25/3 Mbps defined as sufficient for connecting to the internet. That metric is unequivocally useless for assessing the status of our communications infrastructure: it makes the U.S. appear to have more coverage than it does, because it represents the peak performance of old, outdated internet infrastructure.
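To make the stakes of a speed benchmark concrete, here is a small back-of-the-envelope calculation in Python. The file size and the speed tiers are our own illustrative assumptions, not figures from the coalition letter:

```python
# Illustrative arithmetic: how long a given upload takes at different
# connection speeds. Speeds are in megabits per second (Mbps); note the
# bits-vs-bytes conversion (1 GB = 8,000 megabits in decimal units).

def transfer_seconds(size_gigabytes: float, speed_mbps: float) -> float:
    """Seconds needed to move `size_gigabytes` at `speed_mbps`."""
    size_megabits = size_gigabytes * 8 * 1000
    return size_megabits / speed_mbps

# Hypothetical example: uploading a 1 GB file (say, a recorded class
# presentation) on three kinds of connections.
for label, mbps in [("3 Mbps upload (the '25/3' benchmark)", 3),
                    ("35 Mbps cable upload", 35),
                    ("1000 Mbps symmetrical fiber", 1000)]:
    minutes = transfer_seconds(1.0, mbps) / 60
    print(f"{label}: {minutes:.1f} minutes")
```

At 3 Mbps the upload takes roughly 44 minutes; on symmetrical gigabit fiber, about 8 seconds. The asymmetry is the point: a benchmark that only measures the downstream number hides how badly the upstream side limits remote work and distance learning.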

It is therefore important to raise the standard to accurately reflect what is actually needed today and for decades to come. What we build under a massive federal program needs to meet that standard, not one from the earlier days of broadband. Not doing so will mean repeating the mistakes of the past, when a large portion of the $45 billion in federal subsidies was invested in obsolete infrastructure. Those wires have hit their maximum potential, and no amount of money handed to the current large ISPs will change that fact. We have to get it right this time.

EFF to Ecuador's Human Rights Secretariat: Protecting Security Experts is Vital to Safeguard Everyone's Rights

Tue, 06/29/2021 - 12:00am

Today, EFF sent a letter to Ecuador's Human Rights Secretariat about the troubling, slow-motion case against Swedish computer security expert Ola Bini, who has been under prosecution since his arrest in April 2019, following Julian Assange's ejection from Ecuador's London Embassy. Bini spent 70 days in prison until a habeas corpus decision found his detention illegal. He was released from jail, but the investigation continued, seeking evidence to back the accusations against him.

The circumstances around Ola Bini's detention, fraught with the due process violations described by his defense, sparked international attention and signaled the growing harassment of security experts in Latin America. The criminal prosecution has dragged on for the two years since Bini’s release, and as a suspect under trial, Ola Bini continues to be deprived of the full enjoyment of his rights. During 2020, pre-trial hearings set to examine Bini's case were suspended and rescheduled at least five times. The Office of the IACHR Special Rapporteur for Freedom of Expression expressed concern about this delay in its 2020 annual report.

Last suspended in December, the pre-trial hearing is set to continue this Tuesday (6/29). Ecuador’s new President, Guillermo Lasso, recently appointed a new head of the country's Human Rights Secretariat, Ms. Bernarda Ordoñez Moscoso. We hope Ms. Ordoñez can play an important role by connecting the protection of security experts to the Secretariat's mission of upholding human rights.

EFF's letter calls upon Ecuador’s Human Rights Secretariat to give special attention to Ola Bini’s upcoming hearing and prosecution. As we've stressed in our letter,

Mr. Bini's case has profound implications for, and sits at the center of, the application of human rights and due process, a landmark case in the context of arbitrarily applying overbroad criminal laws to security experts. Mr. Bini's case represents a unique opportunity for the Human Rights Secretariat Cabinet to consider and guard the rights of security experts in the digital age.  Security experts protect the computers upon which we all depend and protect the people who have integrated electronic devices into their daily lives, such as human rights defenders, journalists, activists, dissidents, among many others. To conduct security research, we need to protect the security experts, and ensure they have the tools to do their work.

Ola Bini's arrest happened shortly after Ecuador's Interior Minister at the time, María Paula Romo, held a press conference to claim that a group of Russians and Wikileaks-connected hackers were in the country, planning a cyber-attack in retaliation for the government's eviction of Julian Assange from Ecuador's London Embassy. However, no evidence to back those claims was provided by Romo.

EFF has been tracking the detention, investigation, and prosecution of Ola Bini since its early days in 2019. We conducted an on-site visit to the country's capital, Quito, in late July that year, and underscored the harmful impact that possible political consequences of the case were having on the security expert's chances of receiving a fair trial. Later on, a so-called piece of evidence was leaked to the press and taken to court: a photo of a screenshot, taken by Bini himself and sent to a colleague, showing the telnet login screen of a router.

As we've explained, the image is consistent with someone who connects to an open telnet service, receives a warning not to log on without authorization, and does not proceed—respecting the warning. As for the portion of Bini's message exchange with a colleague, leaked with the photo, it shows their concern with the router being insecurely open to telnet access on the wider Internet, with no firewall.

More recently, in April 2021, Ola Bini’s Habeas Data recourse, filed in October 2020 against the National Police, the Ministry of Government, and the Strategic Intelligence Center (CIES), was partially granted by the Judge. According to Bini's defense, he had been facing continuous monitoring by members of the National Police and unidentified persons. The decision requested CIES to provide information related to whether the agency has conducted surveillance activities against the security expert. The ruling concluded that CIES unduly denied such information to Ola Bini, failing to offer a timely response to his previous information request.

EFF has a longstanding history of countering the unfair criminal persecution of security experts, who have unfortunately been the subject of the same types of harassment as those they work to protect, such as human rights defenders and activists. The flimsy allegations against Ola Bini, the series of irregularities and human rights violations in his case, as well as its international resonance, situate it squarely among other cases we have seen of politicized and misguided allegations against technologists and security researchers.

We hope Ecuador's Human Rights Secretariat also carefully examines the details surrounding Ola Bini's prosecution, and follows its developments so that the security expert can receive a fair trial. We respectfully urge that body to assess and address the complaints of injustice, which it is uniquely well-positioned to do.

Supreme Court Says You Can’t Sue the Corporation that Wrongly Marked You A Terrorist

Mon, 06/28/2021 - 1:41pm

In a 5-4 decision, the Supreme Court late last week barred the courthouse door to thousands of people who were wrongly marked as “potential terrorists” by credit giant TransUnion. The Court’s analysis of their “standing” —whether they were sufficiently injured to file a lawsuit—reflects a naïve view of the increasingly powerful role that personal data, and the private corporations that harvest and monetize it, play in everyday life. It also threatens Congressional efforts to protect our privacy and other intangible rights from predation by Facebook, Google and other tech giants.

Earlier this year, we filed an amicus brief, with our co-counsel at Hausfeld LLP, asking the Court to let all of the victims of corporate data abuses have their day in court.

What Did the Court Do?

TransUnion wrongly and negligently labelled approximately 8,000 people as potential terrorists in its databases. It also made that dangerous information available to businesses across the nation for purposes of making credit, employment, and other decisions. TransUnion then failed to provide the required statutory notice of the mistake. The Supreme Court held this was not a sufficiently “concrete” injury to allow these people to sue TransUnion in federal court for violating their privacy rights under the Fair Credit Reporting Act. Instead, the Court granted standing only to the approximately 1,800 of these people whose information was actually transmitted to third parties.

The majority opinion, written by Justice Kavanaugh, fails to grapple with how consumer data is collected, analyzed, and used in modern society. It likened the gross negligence resulting in a database marking these people as terrorists to “a letter in a drawer that is never sent.” But the ongoing technological revolution is not at all like a single letter. It involves large and often interconnected sets of corporate databases that collect and hold a huge amount of our personal information, created both by us and about us. Those information stores are then used to create inferences and analyses that carry tremendous and often new risks for us that can be difficult to even understand, much less trace. For example, consumers who are denied a mortgage, a job, or another life-altering opportunity based upon bad records in a database, or inferences drawn from those records, will often be unable to track the harm back to the wrongdoing data broker. In fact, figuring out how decisions were made, much less finding the wrongdoer, has become increasingly difficult as an opaque archipelago of databases is linked and used to build and deploy machine learning systems that judge us and limit our opportunities.

This decision is especially disappointing after the Court’s recent decisions, such as Riley and Carpenter, that demonstrated a deep understanding that new technology requires new approaches to privacy law.

This decision is especially disappointing after the Court’s recent decisions, such as Riley and Carpenter, that demonstrated a deep understanding that new technology requires new approaches to privacy law. The Court concluded in these cases that when police collect and use more and more of our data, that fundamentally changed the inquiry about our Fourth Amendment right to privacy and the Court could not rigidly follow pre-digital cases. The same should be true when new technologies are used by private entities in ways that threaten our privacy.

The majority’s dismissal of Congressional decision-making is also extremely troubling. In 1970, at the dawn of the database era, Congress decided that consumers should have a cause of action based upon a credit reporting agency failing to take reasonable steps to ensure that the data they have is correct. Here, TransUnion broke this rule in an especially reckless way: it marked people as potential terrorists simply because they shared the same name as people on a terrorist watch list without checking middle names, birthdays, addresses, or other information that TransUnion itself undoubtedly already had. The potential harms this could cause are particularly obvious and frightening. Yet the Court decided that, despite Congress’ clear determination to grant us the right to a remedy, the Court could still bar the courthouse doors.

Justice Thomas wrote the principal dissent, joined by Justices Breyer, Sotomayor, and Kagan. As Justice Kagan explained in an additional dissent, the ruling “transforms standing law from a doctrine of judicial modesty into a tool of judicial aggrandizement.” Indeed, Congress specifically recognized new harms and provided a new cause of action to enforce them, yet the Court nullified these democratically-enacted rights and remedies based on its crabbed view that the harms are not sufficiently “concrete.”

What Comes Next?

This could pose problems for a future Congress that wanted to get serious about recognizing, and empowering us to seek accountability for, the unique and new harms caused by modern data misuse practices, potentially including harms arising from decision-making based upon machine learning and artificial intelligence. Congress will need to make a record of the grievous injuries caused by out-of-control data processing by corporations who care more for their profits than our privacy. It will also need to expressly tie whatever consumer protections it creates to those harms, and be crystal clear about how those harms justify a private right of action.

The Court’s opinion does provide some paths forward, however. Most importantly, the Court expressly confirmed that intangible harms can be sufficiently concrete to bring a lawsuit. In doing so, the Court rejected the cynical invitation from Facebook, Google, and tech industry trade groups to deny standing to all but those who suffered a physical or economic injury. Nonetheless, we anticipate that companies will try to use this new decision to block further privacy litigation. We will work to make sure that future courts don’t overread this case.

The court also recognized that the risk of future harm could still be a basis for injunctive relief—so while you cannot seek damages, you don’t have to wait until you are denied credit or a job or a home before seeking protection from a court from known bad data practices. Finally, as the dissent observed, the majority’s standing analysis only applies in federal court; state courts applying state laws can go much further in recognizing harms and adjudicating private causes of action because the federal "standing" doctrine does not apply. The good work being done to protect privacy in states across the country is now all-the-more important.

But, overall, this is a bad day for privacy. We have been cheered by the Supreme Court’s increasing recognition, when ruling on law enforcement activity, of the perils of modern data collection practices and the vast difference between current and previous technologies. Yet now the Court has failed to recognize that Congress must have the power to proactively protect us from the risks created when private companies use modern databases to vacuum up our personal information, and use data-based decision-making to limit our access to life’s necessities. This decision is a big step backwards for empowering us to require accountability from today’s personal data-hungry tech giants. Let's hope that it is merely an anomaly. We need a Supreme Court that understands and takes seriously the technology-fueled issues facing us in the digital age.    

Decoding California's New Digital Vaccine Records and Potential Dangers

Fri, 06/25/2021 - 8:10pm

The State of California recently released what it calls a “Digital COVID-19 Vaccine Record.” It is part of that state’s recent easing of public health rules on masking within businesses. California’s new Record is a QR code that contains the same information as is on our paper vaccine cards, including name and birth date. We all want to return to normal freedom of movement while keeping our communities safe. But we have two concerns with this plan:

First, with minimal effort, businesses could use the information in the vaccination record to track the time and place of our comings and goings, pool that information with other businesses, and sell these dossiers of our movements to the government. We shouldn’t have to submit to a new surveillance technology that threatens pervasive tracking of our movements in public places to return to normal life.

Second, we’re concerned that the Digital Vaccine Record might become something that enables a system of Digital Vaccine Bouncers that limit access to life’s necessities and amplify inequities for people who legitimately cannot get a vaccine. It’s good that California has not, at least so far, created any infrastructure to make it easy to turn vaccination status into a surveillance system that magnifies inequities.

We do not object per se to another feature of California’s new Digital Vaccine Record: the display on one’s phone screen, in human-readable form, of the information on one’s paper vaccine card. Some people may find this to be a helpful way to store their vaccine card and present it to businesses. Unlike a QR code, such a digital record system does not readily lend itself to the automated collection, retention, use, and sharing of our personal information. As for fraud, existing laws already make it a crime to present a false vaccination record, but there is little accountability for how our data is handled.

To better understand what California has done, and why we object to a digital personal health record being used for screening at all manner of places, we’ll first summarize the state’s new public health rules, and then take a deep dive into the technology.

What Did California Do?

In mid-June, California announced a change to the state’s rules on masking in public places: businesses may now allow fully vaccinated people to forego masks. But businesses must continue to require unvaccinated people to wear masks. To comply with these rules, businesses have three options: require all customers to wear a mask; rely on an honor system; or implement a vaccine verification system.

Soon after, California rolled out its Digital Vaccine Record. This is intended to be a vaccine verification system that businesses may use to distinguish vaccinated from unvaccinated customers for purposes of masking. The Record builds on SMART Health Cards. California enables vaccinated people to obtain their digital Record through a web portal.

The new Record displays two sets of information. First, it shows the same information as a paper vaccine card: name, date of birth, date of vaccinations, and vaccine manufacturer. Second, it has a QR code that makes the same facts readable by a QR scanner. According to Reuters, an unnamed nonprofit group will soon launch an app that businesses can use to scan these QR codes.

So, What Does the Digital Vaccine Record QR Code Entail?

EFF looked under the hood. We generated a QR code based on this walkthrough for SMART Health Cards. Others might also use the project’s developer portal to generate a QR code. When we used a QR scanner on the QR code we generated, we revealed this blob of text:

shc:/56762909524320603460292437404460312229595326546034602925407728043360287028647167452228092863336138625905562441275342672632614007524325773663400334404163424036744177447455265942526337643363675944416729410324605736010641293361123274243503696800275229652…[shortened for brevity]

Okay, What Does That Mean?

Start with the shc:/ prefix: that is the scheme for the SMART Health Cards framework, based on W3C Verifiable Credentials. That framework is an open standard for sharing claims about an individual's health information as issued by an institution, such as a doctor’s office or state immunization registry.
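The numeric body after shc:/ is itself simple to decode: per the SMART Health Cards framework, each pair of digits is one character of an underlying JWS string, offset by 45 (the ASCII code of "-"). A minimal Python sketch (the function names here are ours, not part of the standard):

```python
def decode_shc_numeric(shc: str) -> str:
    """Turn the numeric body of an shc:/ string back into the underlying JWS."""
    digits = shc.removeprefix("shc:/")
    assert len(digits) % 2 == 0, "numeric body must be an even number of digits"
    # Each two-digit pair is a character's ASCII code minus 45.
    return "".join(
        chr(int(digits[i:i + 2]) + 45) for i in range(0, len(digits), 2)
    )

def encode_shc_numeric(jws: str) -> str:
    """The inverse: encode a JWS string into shc:/ numeric form."""
    return "shc:/" + "".join(f"{ord(c) - 45:02d}" for c in jws)

# Round-trip a tiny sample token to show the two directions agree.
sample = "eyJhbGciOiJFUzI1NiJ9.payload.signature"
assert decode_shc_numeric(encode_shc_numeric(sample)) == sample
```

The offset of 45 works because every character that can appear in a compact JWS (base64url characters plus ".") falls in the ASCII range 45–122, so each one fits in two digits.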

What Are the Rest of Those Numbers?

They are a JSON Web Signature (JWS), also known as a signed JSON Web Token (JWT). This is a form of transmittable content secured with digital signatures. A JWS has three parts: header, payload, and signature.

Notably, this is encoded and not encrypted data. Encoding data formats it in a way that is easily transmitted using a common format. For example, the symbol “?” in ASCII encoding would be the “63” decimal value. By itself, 63 just looks like a number. But if you knew this was an ASCII code, you would be able to easily decode it back to a question mark. In this case, the JWS encoded payload (via base64URL encoding) is minified (white space removed), compressed, and signed according to specifications by a health authority. Encrypted data, on the other hand, is unreadable except to a person who knows how to decrypt it back into a readable form. Since this record is created to be read by anyone, it can’t be encrypted.
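Those decoding steps can be reproduced with Python's standard library: split the compact JWS on ".", restore the base64url padding, and inflate the raw DEFLATE stream indicated by the "zip": "DEF" header field. This is a sketch with our own helper names, and it decodes only; it does not verify the signature.

```python
import base64
import json
import zlib

def decode_jws_payload(jws: str) -> dict:
    """Decode (NOT verify!) the payload of a SMART Health Card JWS."""
    _header, payload, _signature = jws.split(".")
    # base64url decoding needs the stripped "=" padding restored first.
    padded = payload + "=" * (-len(payload) % 4)
    raw = base64.urlsafe_b64decode(padded)
    # "zip": "DEF" means raw DEFLATE, so inflate with a negative window size.
    return json.loads(zlib.decompress(raw, wbits=-15))

# Build a toy token to demonstrate (unsigned; a real card carries an
# ES256 signature in the third segment).
claims = {"iss": "https://smarthealth.cards/examples/issuer", "vc": {}}
# zlib.compress output minus its 2-byte header and 4-byte trailer is raw DEFLATE.
compressed = zlib.compress(json.dumps(claims, separators=(",", ":")).encode())[2:-4]
token = "e30." + base64.urlsafe_b64encode(compressed).rstrip(b"=").decode() + ".sig"
assert decode_jws_payload(token)["iss"] == claims["iss"]
```

Because all of this is reversible encoding rather than encryption, anyone with a scanner can recover the payload; only the signature is cryptographically protected.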

After decoding, you will get something that looks like this:

[Split up with headers for readability]

Full JWS (header.payload.signature)

eyJ6aXAiOiJERUYiLCJhbGciOiJFUzI1NiIsImtpZCI6IlNjSkh2eEVHbWpGMjU4aXFzQlU0OUVlWUQwVzYwdGhWalRmNlphYVpJV0EifQ.3VJNj9MwEP0rq-HaJnEKt…[shortened for brevity]

Header

{"zip":"DEF","alg":"ES256","kid":"ScJHvxEGmjF258iqsBU49EeYD0W60thVjTf6ZaaZIWA"}

Payload

{"iss":"https://smarthealth.cards/examples/issuer","nbf":1620992383.218,"vc":{"@context":["https://www.w3.org/2018/credentials/v1"],"type":["VerifiableCredential","https://smarthealth.cards#health-card","https://smarthealth.cards#immunization","https://smarthealth.cards#covid19"],"credentialSubject":{"fhirVersion":"4.0.1","fhirBundle":{"resourceType":"Bundle","type":"collection","entry":[{"fullUrl":"resource:0","resource":{"resourceType":"Patient","name":[{"family":"Anyperson","given":["John","B."]}],"birthDate":"1951-01-20"}},{"fullUrl":"resource:1","resource":{"resourceType":"Immunization","status":"completed","vaccineCode":{"coding":[{"system":"http://hl7.org/fhir/sid/cvx","code":"207"}]},"patient":{"reference":"resource:0"},"occurrenceDateTime":"2021-01-01","performer":[{"actor":{"display":"ABC General Hospital"}}],"lotNumber":"0000001"}},{"fullUrl":"resource:2","resource":{"resourceType":"Immunization","status":"completed","vaccineCode":{"coding":[{"system":"http://hl7.org/fhir/sid/cvx","code":"207"}]},"patient":{"reference":"resource:0"},"occurrenceDateTime":"2021-01-29","performer":[{"actor":{"display":"ABC General Hospital"}}],"lotNumber":"0000007"}}]}}}}

In the payload displayed immediately above, you can now see the plaintext of the blob we originally revealed by scanning the QR code we generated. It includes immunization status, where and when each vaccination occurred, date of birth, and the lot number for each vaccine batch. Basically, this is all the information that would be on your paper CDC card.

Can Someone Forge a QR-based Digital Vaccine Record?

Anyone can “issue” a digital health card. With a little programming knowledge, you can create one yourself, as we just did with the blobs of data above.

Suppose you lost your QR code but had the decoded information saved somewhere (for example, because you had scanned the QR code with an SHC validator app). You could then recreate another QR code from the decoded information. There are walkthroughs available that explain how to create and validate QR codes.

California places some limits on access and generation of QR codes in its new Digital Vaccine Record. For example, these QR codes must be tied to either the email address or phone number of the individual who received the vaccine. Also, when a person requests a Record with a QR code, the California system generates a URL through which that person can access their Record, then that URL expires after 24 hours.

California has not identified other security or anti-forgery features. The only cryptographic protection is the public health authority's signature over the record, made with its private key; the QR code itself is not encrypted, and anyone who plans to use it should be aware of that. As for forgery risk, since anyone can make a QR code like the one discussed above, it is up to the operator of the QR scanner to check the public key of the signed data to make sure it comes from a valid public health authority.
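Under the SMART framework, that check means resolving the "kid" in the JWS header against the issuer's published key set (served from the issuer's /.well-known/jwks.json) and then verifying the ES256 signature with the matching key. A stdlib-only sketch of the key-selection step (the signature check itself requires a crypto library, so this stops short of it; helper names are ours):

```python
import base64
import json

def jws_header(jws: str) -> dict:
    """Decode the header segment of a compact JWS (no verification performed)."""
    header_b64 = jws.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def select_verification_key(jws: str, jwks: dict):
    """Pick the issuer key whose "kid" matches the JWS header, or None.

    A real scanner would then verify the ES256 signature against this key
    and confirm the issuer is on a trusted allow-list; this sketch stops
    at key selection.
    """
    header = jws_header(jws)
    for key in jwks.get("keys", []):
        if key.get("kid") == header.get("kid") and key.get("alg") == "ES256":
            return key
    return None

# Toy example: a JWS whose header names kid "abc", and a matching key set.
toy_header = base64.urlsafe_b64encode(
    json.dumps({"alg": "ES256", "kid": "abc"}).encode()
).rstrip(b"=").decode()
toy_jws = toy_header + ".payload.signature"
toy_jwks = {"keys": [{"kty": "EC", "alg": "ES256", "kid": "abc"}]}
assert select_verification_key(toy_jws, toy_jwks)["kid"] == "abc"
```

A scanner that skips any of these steps (or trusts any issuer) will happily accept a forged card, which is exactly the risk described above.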

How Can This Hurt Us?

Context Switching for Data

Even though the Digital Vaccine Record’s QR code is a digital mirror of your CDC card (plus the authority’s signature), the companies that process your Record can change the context of protection and use. For example, CLEAR Health Pass allows you to record your health QR code into its app. With companies like CLEAR planning to become our digital wallets, we have to consider the risks that come with storing our health credentials with others.

You also run the risk that the scanned set of data will get stored, shared, and used in an unexpected or even nefarious way. For example, some bars scan IDs at the door to ensure patrons are 21, and also collect the information on the ID and share it with other bars. If a scanner can quickly check a simple fact on a barcode or QR code (like years since birth or vaccination status), it can also store that fact, as well as all other information embedded in the code (like name and date of birth) and surrounding data (like time and location). In this case, just as a doorkeeper generally will not copy the information on your paper vaccination card, a doorkeeper should not copy the information on your digital vaccination card. Yet no laws in California currently dictate that point to those who are scanning these health QR codes. It is also unclear what the “official” verifying app will do and what privacy safeguards it will have.

Likewise, while California apparently intends to allow businesses to use these Records to require unvaccinated patrons to wear a mask, nothing stops businesses from also using these Records to deny admission to unvaccinated patrons. At that point, these Records would become digital vaccine bouncers, which EFF opposes.

National Identification Footholds

With no federal data privacy law, we must assume that when companies process our data, no matter how benign the information or purpose may seem, they will take it down the most exploitative road possible.

The QR code in California’s Digital Vaccine Record is a digital identity platform carrying more data, and it can become part of the groundwork for a national ID system. EFF has long opposed such systems, which would store all manner of information about our activities in one central government repository. EFF raised this concern last year when opposing “vaccine passports.” We are now seeing these discussions occur in NY State with the Excelsior Pass and in the U.K., where the company the government hired to help create a vaccine passport has suggested redeploying that infrastructure into a national identification system. With no federal data privacy law, we must assume that when companies process our data, no matter how benign the information or purpose may seem, they will take it down the most exploitative road possible.

Bottom Line for Digital Vaccine Records

California’s approach is more welcome than state-sponsored vaccine passports. Users don’t need to download an app, as in New York State. It’s comforting to know that if something happens to your paper card, you can access a digital copy. The open standard allows independent study of what is in that QR code, which helps ensure that users know the potential risks and scenarios that can happen with their health data.

Still, we wish California had skipped the QR code. We also want stronger safeguards, similar to those in the current NY State Senate bill protecting COVID-19-related health data, to govern any data processing expansion occurring due to this pandemic. Establishing data protections now, when we are in crisis, would help ensure privacy in future use of such technologies, during healthier times and in any future health crisis.

[VISUAL] The Overlapping Infrastructure of Urban Surveillance, and How to Fix It

Thu, 06/24/2021 - 2:50pm

Between the increasing capabilities of local and state police, the creep of federal law enforcement into domestic policing, the use of aerial surveillance such as spy planes and drones, and mounting cooperation between private technology companies and the government, it can be hard to understand and visualize what all this overlapping surveillance can mean for your daily life. We often think of these problems as siloed issues. Local police deploy automated license plate readers or acoustic gunshot detection. Federal authorities monitor you when you travel internationally.

But if you could take a cross-section of the average city block, you would see the ways that the built environment of surveillance—its physical presence in, over, and under our cities—makes this an entwined problem that must be combatted through entwined solutions.


Thus, we decided to create a graphic to show how—from overhead to underground—these technologies and legal authorities overlap, how they disproportionately impact the lives of marginalized communities, and the tools we have at our disposal to halt or mitigate their harms.

Going from Top to Bottom:

1. Satellite Surveillance

Satellite photography has been a reality since the 1950s, and at any given moment there are over 5,000 satellites in orbit over the Earth—some of which have advanced photographic capabilities. While many are intended for scientific purposes, some satellites are used for reconnaissance by intelligence agencies and militaries. Some satellites can certainly identify a building or a car from its roof, but it’s unlikely that satellite pictures will ever be clear enough, or taken at the correct angle, to run through face recognition technology or an automated license plate reader.

Satellites can also enable surveillance by allowing governments to intercept or listen in on data transmitted internationally. 

2. Internet Traffic Surveillance

Government surveillance of internet traffic can happen in many ways. Through programs like PRISM and XKEYSCORE, the U.S. National Security Agency (NSA) can monitor emails as they move across the internet, browser and search history, and even keystrokes as they happen in real time. Much of this information can come directly from the internet and telecommunications companies that consumers use, through agreements between these companies and government agencies (like the one the NSA shares with AT&T) or through warrants or orders granted by a judge, including those that preside over the Foreign Intelligence Surveillance Court (FISC).

Internet surveillance isn’t just the domain of the NSA and international intelligence organizations; local law enforcement are just as likely to approach big companies in an attempt to get information about how some people use the internet. In one 2020 case, police sent a search warrant to Google to see who had searched the address of an arson victim, in an attempt to identify a suspect. Using the IP addresses, furnished by Google, of users who conducted that search, police identified a suspect and arrested him for the arson.

How can we protect our internet use? FISA reform is one big step. Part of the problem is also transparency: in many instances it's hard to even know what is happening behind the veil of secrecy that shrouds the American surveillance system.


3. Cellular Communications (Tower) Surveillance

Cell phone towers receive information from our cell phones almost constantly, such as the device’s location, metadata like calls made and the duration of each call, and the content of unencrypted calls and text messages. This information, which is maintained by telecom companies, can be acquired by police and governments with a warrant. Using encrypted communication apps, such as Signal, or leaving your cell phone at home when attending political demonstrations are some ways to prevent this kind of surveillance. 

4. Drones

Police departments and other local public safety agencies have been acquiring and deploying drones at a rapid rate. This is in addition to federal drones used both overseas and at home for surveillance and offensive purposes. Whether at the border or in the largest U.S. cities, law enforcement claim drones are an effective method for situational awareness or for use in situations too dangerous for an officer to approach. The ability for officers to use a drone in order to keep their distance was one of the major reasons police departments around the country justified the purchase of drones as a method of fighting the COVID-19 pandemic.

However, drones, like other pieces of surveillance equipment, are prone to “mission creep”: the situations in which police deploy certain technologies often far overreach their intended purpose and use. This is why drones used by U.S. Customs and Border Protection, whose function is supposedly to monitor the U.S. border, were used to surveil protests against police violence in over 15 cities in the summer of 2020, many hundreds of miles from the border.

It’s not only drones that are in the skies above you spying on protests and people as they go about their daily lives. Spy planes, like those provided by the company Persistent Surveillance Systems, can be seen buzzing above cities in the United States. Some cities, however, like Baltimore and St. Louis, have recently pulled the plug on these invasive programs.


Drones flying over your city could be at the behest of local police or federal agencies, but as of this moment, there are very few laws restricting when and where police can use drones or how they can acquire them. Community Control Over Police Surveillance (CCOPS) ordinances are one way residents of a city can prevent their police from acquiring drones, or restrict how and when police can use them. The Fourth Circuit Court of Appeals has also called warrantless use of aerial surveillance a violation of the Fourth Amendment.

5. Social Media Surveillance

Federal, local, and state governments all conduct social media surveillance in a number of different ways—from sending police to infiltrate political or protest-organizing Facebook groups, to the mass collection and monitoring of hashtags or geolocated posts done by AI aggregators.

There are few laws governing law enforcement use of social media monitoring. Legislation can curb mass surveillance of our public thoughts and interactions on social media by requiring police to have reasonable suspicion before conducting social media surveillance on individuals, groups, or hashtags. Also, police should be barred from using phony accounts to sneak into closed-access social media groups, absent a warrant.


6. Cameras 

Surveillance cameras, whether public or private, are ubiquitous in most cities. Although there is no definitive proof that surveillance cameras reduce crime, cities, business districts, and neighborhood associations continue to install more cameras, and to equip those cameras with increasingly invasive capabilities.

Face recognition technology (FRT), which over a dozen cities across the United States have banned government agencies from using, is one such invasive technology. FRT can use any image—taken in real-time or after-the-fact—and compare it to pre-existing databases that contain driver’s license photos, mugshots, or pre-existing CCTV camera footage. FRT has a long history of misidentifying people of color and trans* and nonbinary people, even leading to wrongful arrests and police harassment. Other developing technology, such as more advanced video analytics, can allow users to search footage accumulated from hundreds of cameras by things as specific as “pink backpack” or “green hair.”

Networked surveillance cameras can harm communities by allowing police, or quasi-governmental entities like business improvement districts, to record how people live their lives, who they communicate with, what protests they attend, and what doctors or lawyers they visit. One way to lessen the harm surveillance cameras can cause in local neighborhoods is through CCOPS measures that regulate their use. Communities can also band together to join the more than a dozen cities around the country that have banned government use of FRT and other biometrics.

Take action

TELL congress: END federal use of face surveillance

7. Surveillance of Cell Phones

Cell phone surveillance can happen in a number of ways, based on text messages, call metadata, geolocation, and other information collected, stored, and disseminated by your cell phone every day. Government agencies at all levels, from local police to international intelligence agencies, have preferred methods of conducting surveillance on cell phones. 


For instance, local and federal law enforcement have been known to deploy devices known as cell-site simulators or “stingrays,” which mimic cell phone towers so that nearby phones automatically connect to them, allowing the device to harvest information such as identifying numbers, call metadata, the content of unencrypted text messages, and internet usage.

Several recent reports revealed that the U.S. government purchases commercially available data obtained from apps people have downloaded to their phones. One report identified the Department of Defense’s purchase of sensitive user data, including location data, from a third-party data broker that obtained it through apps targeted at Muslims, including a prayer app with nearly 100 million downloads. Although the government would normally need a warrant to acquire this type of sensitive data, purchasing it commercially allows agencies to evade constitutional constraints.

One way to prevent this kind of surveillance from continuing would be to pass the Fourth Amendment Is Not For Sale Act, which would ban the government from purchasing personal data it would otherwise need a warrant to obtain. Indiscriminate and warrantless government use of stingrays is also currently being contested in several cities and states, and a group of U.S. Senators and Representatives has introduced legislation to ban their use without a warrant. CCOPS ordinances have also proven a useful way to prevent police from acquiring or using cell-site simulators.


8. Automated License Plate Readers

Automated license plate readers (ALPRs) are high-speed, computer-controlled camera systems that are typically mounted on street poles, streetlights, highway overpasses, mobile trailers, or attached to police squad cars. ALPRs automatically capture all license plate numbers that come into view, along with the location, date, and time of the scan. The data, which includes photographs of the vehicle and sometimes its driver and passengers, is then uploaded to a central server.

Taken in the aggregate, ALPR data can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity. ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of worship.

ALPRs can also be inaccurate. In Colorado, police recently pulled a Black family out of their car at gunpoint after an ALPR misidentified their vehicle as one that had been reported stolen. Too often, technologies like ALPRs and face recognition are used not as an investigative lead to be followed up and corroborated, but as something police rely on as a definitive accounting of who should be arrested.


Lawmakers should better regulate this technology by limiting “hotlists” to cars that have been confirmed stolen, rather than vehicles merely labeled “suspicious,” and by limiting retention of ALPR scans of cars that are not hotlisted.
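A minimal sketch of the retention rule proposed above: scans of cars not on a confirmed-stolen hotlist are purged after a short window. The record fields, plate numbers, and the 24-hour window are illustrative assumptions, not drawn from any actual statute or system:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(hours=24)   # hypothetical retention window
HOTLIST = {"7ABC123"}             # plates of cars confirmed stolen

def prune(scans, now):
    """Keep hotlist hits; drop all other scans older than the window."""
    return [s for s in scans
            if s["plate"] in HOTLIST or now - s["time"] <= RETENTION]

now = datetime(2021, 7, 1, 12, 0)
scans = [
    {"plate": "7ABC123", "time": now - timedelta(days=30)},  # hotlisted: kept
    {"plate": "5XYZ987", "time": now - timedelta(days=2)},   # stale: dropped
    {"plate": "5XYZ987", "time": now - timedelta(hours=1)},  # recent: kept
]
print(len(prune(scans, now)))  # 2
```

A rule like this prevents the years-long location histories of ordinary drivers that make aggregate ALPR data so revealing.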

9. Acoustic Gunshot Detection

Cities across the country are increasingly installing sophisticated listening devices on street lights and the sides of buildings, intended to detect the sound of gunshots. Acoustic gunshot detection, like the technology sold by the company ShotSpotter, detects loud noises, triangulates where those noises came from, and sends the audio to a team of experts who are expected to determine whether the sound was a gunshot, fireworks, or some other noise.
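The triangulation step works on time differences of arrival: a sound reaches nearby sensors at slightly different moments, and those offsets constrain where it originated. The toy grid search below illustrates the principle only; it is not ShotSpotter's actual method, and the sensor layout and grid are invented:

```python
import math

SPEED = 343.0  # approximate speed of sound in air, m/s

def locate(sensors, arrival_times, step=1.0, size=100):
    """Grid-search the point whose predicted arrivals best fit the data."""
    best, best_err = None, float("inf")
    for gx in range(size + 1):
        for gy in range(size + 1):
            x, y = gx * step, gy * step
            # If the sound came from (x, y), every sensor's implied emission
            # time (arrival minus travel time) should be identical.
            t0s = [t - math.hypot(x - sx, y - sy) / SPEED
                   for (sx, sy), t in zip(sensors, arrival_times)]
            err = max(t0s) - min(t0s)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate a bang at (30, 40) meters heard by three sensors.
sensors = [(0, 0), (100, 0), (0, 100)]
true_source = (30, 40)
times = [math.hypot(true_source[0] - sx, true_source[1] - sy) / SPEED
         for sx, sy in sensors]
print(locate(sensors, times))  # (30.0, 40.0)
```

Note what the sketch leaves out: classifying the noise as a gunshot at all, which is where the false-report problem discussed below arises.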

Recent reports have shown that the number of false reports generated by acoustic gunshot detection may be much higher than previously thought. This can create dangerous situations for pedestrians as police arrive at a scene armed and expecting to encounter someone with a gun.

Even though it is aimed at picking up gunshots, this technology also captures human voices at least some of the time. In at least two criminal cases, People v. Johnson (California) and Commonwealth v. Denison (Massachusetts), prosecutors sought to introduce as evidence audio of voices recorded by an acoustic gunshot detection system. In Johnson, the court allowed it. In Denison, the court did not, ruling that a recording of “oral communication” is prohibited “interception” under the Massachusetts Wiretap Act.

While the accuracy of the technology remains a problem, one way to mitigate its harm is for activists and policymakers to work to ban police and prosecutors from using voice recordings collected by gunshot detection technology as evidence in court.

10. Internet-Connected Security Cameras

Popular consumer surveillance cameras like Amazon’s Ring doorbell camera are slowly becoming an omnipresent surveillance network across the country. Unlike traditional surveillance cameras, which may back up to a local drive in the user’s possession, internet-connected security cameras typically store footage with the company rather than the user, making it more easily accessible to police. Often police can bypass users altogether by presenting a warrant directly to the company.


These cameras are ubiquitous, but there are ways we can help blunt their impact on our society. Opting in to encrypting your Amazon Ring footage and opting out of seeing police footage requests on the Neighbors app are two ways to ensure police have to bring a warrant to you, rather than Amazon, if they think your camera may have witnessed a crime.

11. Electronic Monitoring

Electronic monitoring is a form of digital incarceration, often in the form of a wrist bracelet or ankle “shackle” that can monitor a subject’s location, and sometimes their blood alcohol level.

Monitors are commonly used as a condition of pretrial release, or post-conviction supervision, like probation or parole. They are sometimes used as a mechanism for reducing jail and prison populations. Electronic monitoring has also been used to track juveniles, immigrants awaiting civil immigration proceedings, and adults in drug rehabilitation programs. 

Not only does electronic monitoring impose excessive surveillance on people returning home from incarceration, but it also hinders their ability to successfully transition back into the community. Additionally, there is no concrete evidence that electronic monitoring reduces crime rates or recidivism.

12. Police GPS Tracking

One common way police track people suspected of criminal activity is by placing GPS monitors underneath their vehicles. Before the U.S. Supreme Court’s 2012 decision in United States v. Jones, which held that attaching a GPS monitor to a car constitutes a search under the Fourth Amendment, and the Third Circuit’s 2013 Katzin decision, which held that police need a warrant to install such a device, police often used these devices with few restrictions.

However, recent court filings indicate that law enforcement believes that warrantless use of GPS tracking devices at the border is fair game. EFF currently has a pending Freedom of Information Act lawsuit to uncover CBP’s and U.S. Immigration and Customs Enforcement’s (ICE) policies, procedures, and training materials on the use of GPS tracking devices.


13. International Internet Traffic Surveillance

Running underground and under the oceans are thousands of miles of fiber optic cable that carry online communications between countries. Originating as telegraph wires running under the ocean, these highways for international digital communication are now a hotbed of surveillance by state actors looking to monitor chatter abroad and at home. The Associated Press reported in 2005 that the U.S. Navy had sent submarines with technicians to help tap into the “backbone of the internet.” These cables make landfall at coastal cities and towns called “landing areas,” like Jacksonville, Florida, and Myrtle Beach, South Carolina, and towns just outside major cities like New York, Los Angeles, San Diego, Boston, and Miami.


How do we stop the United States government from tapping into the internet’s main arteries? Section 702 of the Foreign Intelligence Surveillance Act allows for the collection and use of digital communications of people abroad, but it often scoops up communications of U.S. persons when they talk to friends or family in other countries. EFF continues to fight Section 702 in the courts in hopes of securing the communications that travel through these essential cables.


Now Is The Time: Tell Congress to Ban Federal Use of Face Recognition

Thu, 06/24/2021 - 2:48pm

Cities and states across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, New Orleans to Minneapolis, elected officials and activists know that face surveillance gives police the power to track us wherever we go, disproportionately impacts people of color, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment-protected activities. Even Amazon, known for operating one of the largest video surveillance networks in the history of the world, extended its moratorium on selling face recognition to police.


Now, Congress must do its part. We’ve created a campaign that will easily allow you to contact your elected federal officials and tell them to co-sponsor the Facial Recognition and Biometric Technology Moratorium Act.

Take action

TELL congress: END federal use of face surveillance

Police and other government use of this technology cannot be responsibly regulated. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations and even if it were 100% accurate.

Face surveillance also disproportionately hurts vulnerable communities. Last year, the New York Times published a long piece on the case of Robert Julian-Borchak Williams, who was arrested by Detroit police after face recognition technology wrongly identified him as a suspect in a theft case.  The ACLU filed a lawsuit on his behalf against the Detroit police. 

The problem isn’t just that studies have found face recognition disparately inaccurate when it comes to matching the faces of people of color. The larger concern is that law enforcement will use this invasive and dangerous technology, as it unfortunately uses all such tools, to disparately surveil people of color. 

Williams and two other Black men, Michael Oliver and Nijeer Parks, have garnered national media attention after face recognition technology led to their false arrests. How many more have already endured the same injustice without the media’s spotlight? These incidents show another reason why police cannot be trusted with this technology: software intended to generate investigative leads is too often used in the field to decide who should be arrested, without independent officer vetting.

This federal ban on face surveillance would apply to increasingly powerful agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection. The bill would ensure that these agencies cannot use this invasive technology to track, identify, and misidentify millions of people.

Tell your Senators and Representatives they must co-sponsor and pass the Facial Recognition and Biometric Technology Moratorium Act. It was recently introduced by Senators Edward J. Markey (D-Mass.), Jeff Merkley (D-Ore.), Bernie Sanders (I-Vt.), Elizabeth Warren (D-Mass.), and Ron Wyden (D-Ore.), and by Representatives Pramila Jayapal (WA-07), Ayanna Pressley (MA-07), and Rashida Tlaib (MI-13).

This important bill would be a critical step to ensuring that mass surveillance systems don’t use your face to track, identify, or harm you. The bill would ban the use of face surveillance by the federal government, as well as withhold certain federal funds from local and state governments that use the technology.  That’s why we’re asking you to insist your elected officials co-sponsor the Facial Recognition and Biometric Technology Moratorium Act, S.2052 in the Senate.


How Big ISPs Are Trying to Burn California’s $7 Billion Broadband Fund

Wed, 06/23/2021 - 3:29pm

A month ago, Governor Newsom announced a plan to invest $7 billion in federal rescue funds and state surplus dollars, mostly in public broadband infrastructure, to serve every Californian with affordable access ready for 21st century demands. In short, the proposal would empower the state government, local governments, cooperatives, non-profits, and local private entities to use the money to build universal 21st century access. With that level of funding, the state could end the digital divide—if it is invested correctly.

But so far, industry opposition from AT&T and the cable companies has successfully sidelined the money—as EFF warned earlier this month. Now they are attempting to turn a once-in-a-generation investment meant to eliminate the digital divide into wasteful spending and a massive subsidy delivered straight into the industry’s hands. Before we break down the woefully insufficient industry alternatives circulating in Sacramento, it is important to understand the nature of California’s broadband problem today, and why Governor Newsom’s proposal is a direct means of solving it.

Industry’s Already Shown Us How Profit-Driven Deployment Leaves People Behind

This cannot be emphasized enough, but major industry players are discriminating against communities that would be profitable to fully serve in the long term. Why? These huge companies have opted to expand their short-term profits through discriminatory choices against the poor. That’s how California became the setting for a stark illustration of the digital divide in the pandemic: a picture of little girls doing homework in a fast food parking lot so they could access the internet. That was not in a rural market, where households are more spaced out. That was Salinas, California, a city with a population of 155,000+ people at a density of 6,490 people per square mile. There was no good reason why those little kids didn’t have cheap, fast internet at home. We should disabuse ourselves of the notion that any industry subsidy will change how they approach the business of deploying broadband access.

 

From https://twitter.com/kdeleon/status/1299386969873461248

In the absence of meaningful digital redlining regulation, it is perfectly natural for industry to discriminate against low-income neighborhoods because of the pressure to deliver fast profits to investors. It is why dense, urban markets that would be profitable to serve, such as Oakland and Los Angeles, have a well-documented and thoroughly studied digital redlining problem. Research shows that it is mostly Black and brown neighborhoods that are skipped over for 21st century network investments. It is also why people in rural California suffer from systemic underinvestment in networks, which led to one of the largest private telecom bankruptcies in modern times—impacting millions of Californians. If the profit is not fast enough, they will not invest, and throwing more government money at these short-term-focused companies will never fix the problem.

Big internet service providers have shown us again and again that they will not invest in areas that present an unattractive profit rate for their shareholders. On average, it takes about five years to fully deploy a network, and these companies first deployed new networks in their favored areas well over a decade ago. No amount of one-time government capital will change their estimates of which households are suitable long-term payers for their products and services: those expectations are hard-wired by Wall Street investors’ demands for consistent dividends and profits. Their priorities will not change because the state offers more money. And even with more aggressive regulation to address profitable-but-redlined areas, private industry simply cannot serve areas that will yield no profit at all.

The only way to reach 100% universal access with 21st century fiber broadband at affordable prices is to promote locally held alternatives and aggressively invest in public broadband infrastructure. Some rural communities can only be fully served by a local entity that can take on a 30-to-40-year debt investment strategy, free from pressure by far-off investors to deliver profits. That is exactly how we got electricity out to rural communities. Because broadband is an essential service, the expectation of consistent revenue from rural residents sustaining their own networks aligns well with making long-term bets—as envisioned by Governor Newsom’s proposal to create a Loan Loss Revenue Reserve Account, which would enable long-term, low-interest infrastructure financing. And, most importantly, delivering affordable access for low-income users is only possible in many places if we decouple the profit motive from the provisioning of this essential service. For proof, look no further than Chattanooga, Tennessee, where 17,700 households with low-income students will enjoy 10 years of free 100/100 Mbps fiber internet access at the zero-profit cost of $8.2 million.
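The Chattanooga figures cited above are worth running through once, because they show just how cheap zero-profit provisioning can be per household:

```python
# Arithmetic on the figures cited above: $8.2 million, 17,700 households,
# 10 years of service.
total_cost = 8_200_000   # dollars
households = 17_700
months = 10 * 12

per_household_monthly = total_cost / households / months
print(round(per_household_monthly, 2))  # 3.86
```

Roughly $4 per household per month, for symmetrical 100 Mbps fiber, once the profit motive is out of the picture.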

If we want to make 21st century-internet something everyone can access regardless of their socioeconomic status and location, we need to use all the options available to us. The private market has its role and importance. But truly reaching 100% access is not possible without a strong public model to cover those who are most difficult to reach.

What Industry Is Actually Asking Sacramento To Do With Our Money

The suggestions the cable industry and AT&T are making to Sacramento right now fail us twice over. They will not actually solve the problem our state faces. They will also set us down a path of perpetual industry subsidization and sabotage of the public model. These suggestions seem focused on blocking the state government from pushing middle-mile fiber deep into every community, which is a necessary pre-condition to ensure a local private or public solution is financially feasible. Still, the mere existence of some connectivity in or near an area does not mean there is the capacity to deliver 21st century access. Solving that problem requires fiber. And it’s the lack of accessible fiber (predominantly in rural areas) that prevents local solutions from taking root in many places, even those that are motivated. Industry has no solution to offer in these places, because it has always avoided investing in those areas.

Let’s start with the cable companies’ specific suggestions. This industry has a very long history of opposing municipal fiber to preserve its high-speed monopolies. Its suggested change to Governor Newsom’s plan thus comes as no surprise, because all it would do is jam all the funding into the existing California Advanced Services Fund (CASF), which the industry supported in 2017. CASF has utterly failed to make significant progress in eliminating the digital divide. EFF has detailed why California’s broadband program is in desperate need of an update and has sponsored legislation to adopt a 21st century infrastructure standard in the face of industry opposition—which prevented needed changes to CASF at the height of the pandemic, with an assist from California’s Assembly.

There is no saving grace for the existing broadband infrastructure program. CASF has spent an obscene amount of public money on obsolete slow connections that were worthless during the pandemic due to legislative restrictions the industry sought. Its current rules also make large swathes of rural California ineligible for broadband investments, and it prioritizes private industry investments by blocking most local government bidders. It is no surprise cable suggests we spend another $7 billion on that failed experiment.

Arguably the worst suggestion the cable industry makes is to eliminate the long-term financing program that would help local governments access the bond market, and instead cram that function into the failed CASF program. Doing so would bar local communities from replacing 1990s-era connections with fiber, and continue to reward the industry’s strategy of discriminating against low-income Californians while prioritizing the wealthy. It would effectively destroy the ability of local governments to finance community-wide upgrades, a core strategy of rural county governments left to deal with the wake of the Frontier Communications bankruptcy. By sabotaging the long-term financing program, cable ensures local governments have little chance of financing their own networks—and that is the entire point. If Sacramento wants to see everyone in rural California and underserved cities connected, then community networks must be community-wide, so that the cost of long-term financing for the entire network is spread across all residents and remains affordable. Forcing the public sector to merely offset the discriminatory choices of industry only rewards that discrimination and makes these community solutions financially infeasible.

AT&T, which has never lacked audacity when talking to Sacramento legislators, has gone so far as to claim in a letter to the legislature that building out capacity to every community somehow prevents local last-mile solutions from taking root. That is a bogus argument. If capacity is not provisioned to a community at an affordable rate, there can never be a local solution. And if that capacity were already available to rural communities today at a price point that enables local solutions, we would be seeing those solutions in rural communities today. So unless AT&T plans to show the state and local communities exactly where—and at what price—it is offering middle-mile fiber to rural communities, legislators should ignore this obvious misdirection.

What is also particularly frustrating to read in AT&T’s letter is the argument that barely anyone needs infrastructure in California to engage in remote work, telehealth, and distance education. The letter goes so far as to say only 463,000 households need access.

This just is not true. For starters, AT&T’s estimate is premised on the assumption that an extremely slow 25/3 Mbps broadband connection is more than enough to use the internet today. That standard was established in 2015, long before the pandemic reshaped access needs. It is effectively useless today as a metric for assessing infrastructure, because it obscures the extent to which the industry has under-invested in 21st-century-ready access. No one builds a network today to deliver just 25/3 Mbps; doing so would be a gigantic waste of money. Anything built new today is built with fiber, without exception. The appropriate assessment of the state’s communications infrastructure boils down to one question: who has fiber?
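A quick back-of-the-envelope comparison makes the point about the 25/3 Mbps benchmark concrete. Consider uploading a 1 GB file (a recorded class presentation, say; the file size is an illustrative assumption) on a 3 Mbps upload link versus symmetrical 100 Mbps fiber:

```python
# Time to move a 1 GB file at different upload speeds.
FILE_BITS = 8 * 10**9  # 1 gigabyte expressed in bits

def upload_minutes(mbps):
    """Transfer time in minutes at the given megabits-per-second rate."""
    return FILE_BITS / (mbps * 10**6) / 60

print(round(upload_minutes(3)))    # 44  (the 25/3 Mbps standard)
print(round(upload_minutes(100)))  # 1   (100/100 Mbps fiber)
```

Three-quarters of an hour versus about a minute: a connection that technically "counts" as served under the 2015 standard is unusable for the remote work, telehealth, and distance education the letter dismisses.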

The reality, per state data, is that just to meet the Governor’s minimum metric of 100 Mbps download speeds, the number of households that need support rises hundreds of thousands above AT&T’s estimate. And if we want 21st century fiber-based infrastructure throughout the state, as envisioned by President Biden and Governor Newsom’s proposal, we have millions of homes to connect—something that can be done with a $7 billion investment.

The choice for Sacramento should be easy. A $7 billion investment that enables high-capacity fiber infrastructure throughout the state will begin a 21st century access transition for every Californian who lacks it today. Adopting AT&T’s vision of narrowly funneling the funds to an extremely limited number of Californians while shoveling the rest into its coffers as subsidies will build nothing.

Standing With Security Researchers Against Misuse of the DMCA

Wed, 06/23/2021 - 12:04pm

Security research is vital to protecting the computers upon which we all depend, and protecting the people who have integrated electronic devices into their daily lives. To conduct security research, we need to protect the researchers, and allow them the tools to find and fix vulnerabilities. The Digital Millennium Copyright Act’s anti-circumvention provisions, Section 1201, can cast a shadow over security research, and unfortunately the progress we’ve made through the DMCA rule-making process has not been sufficient to remove this shadow.

DMCA reform has long been part of EFF’s agenda, to protect security researchers and others from its often troublesome consequences. We’ve sued to overturn the onerous provisions of Section 1201 that violate the First Amendment, we’ve advocated for exemptions in every triennial rule-making process, and the Coders Rights Project helps advise security researchers about the legal risks they face in conducting and disclosing research.

Today, we are honored to stand with a group of security companies and organizations that are showing their public support for good faith cybersecurity research, standing up against use of Section 1201 of the DMCA to suppress the software and tools necessary for that research. In the statement below, the signers have united to urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research, and to urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research.

The statement in full:

We the undersigned write to caution against use of Section 1201 of the Digital Millennium Copyright Act (DMCA) to suppress software and tools used for good faith cybersecurity research. Security and encryption researchers help build a safer future for all of us by identifying vulnerabilities in digital technologies and raising awareness so those vulnerabilities can be mitigated. Indeed, some of the most critical cybersecurity flaws of the last decade, like Heartbleed, Shellshock, and DROWN, have been discovered by independent security researchers.

However, too many legitimate researchers face serious legal challenges that prevent or inhibit their work. One of these critical legal challenges comes from provisions of the DMCA that prohibit providing technologies, tools, or services to the public that circumvent technological protection measures (such as bypassing shared default credentials, weak encryption, etc.) to access copyrighted software without the permission of the software owner. 17 USC 1201(a)(2), (b). This creates a risk of private lawsuits and criminal penalties for independent organizations that provide technologies to researchers that can help strengthen software security and protect users. Security research on devices, which is vital to increasing the safety and security of people around the world, often requires these technologies to be effective.

Good faith security researchers depend on these tools to test security flaws and vulnerabilities in software, not to infringe on copyright. While Sec. 1201(j) purports to provide an exemption for good faith security testing, including using technological means, the exemption is both too narrow and too vague. Most critically, 1201(j)’s accommodation for using, developing or sharing security testing tools is similarly confined; the tool must be for the "sole purpose" of security testing, and not otherwise violate the DMCA’s prohibition against providing circumvention tools.

If security researchers must obtain permission from the software vendor to use third-party security tools, this significantly hinders the independence and ability of researchers to test the security of software without any conflict of interest. In addition, it would be unrealistic, burdensome, and risky to require each security researcher to create their own bespoke security testing technologies.

We, the undersigned, believe that legal threats against the creation of tools that let people conduct security research actively harm our cybersecurity. DMCA Section 1201 should be used in such circumstances with great caution and in consideration of broader security concerns, not just for competitive economic advantage. We urge policymakers and legislators to reform Section 1201 to allow security research tools to be provided and used for good faith security research. In addition, we urge companies and prosecutors to refrain from using Section 1201 to unnecessarily target tools used for security research.

Bishop Fox
Bitwatcher
Black Hills Information Security
Bugcrowd
Cybereason
Cybersecurity Coalition
Digital Ocean
disclose.io
Electronic Frontier Foundation
Grand Idea Studio
GRIMM
HackerOne
Hex-Rays
iFixIt
Luta Security
McAfee
NCC Group
NowSecure
Rapid7
Red Siege
SANS Technology Institute
SCYTHE
Social Exploits LLC
