EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

EFF and Allies Urge Council of Europe to Add Strong Human Rights Safeguards Before Final Adoption of Flawed Cross Border Surveillance Treaty

Tue, 09/14/2021 - 3:11pm

EFF, European Digital Rights (EDRi), the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), and other civil society organizations have worked closely on recommendations to strengthen human rights protections in a flawed international cross-border police surveillance treaty drafted by the Council of Europe (CoE). At a virtual hearing today before the CoE Parliamentary Assembly (PACE) Committee on Legal Affairs and Human Rights, EFF Policy Director for Global Privacy Katitza Rodriguez presented a summary of the concerns we and our partners have about the treaty’s weak privacy and human rights safeguards.

There is much at stake, as the draft Second Additional Protocol to the Budapest Convention on Cybercrime will reshape cross-border law enforcement data-gathering on a global scale. The Protocol’s objective is to facilitate cross-border investigations between countries with varying legal systems and differing standards for accessing people’s personal information. In her testimony, the text of which is published in full below, Rodriguez highlighted key shortcomings in the Protocol and recommendations for fixing them.

EFF Testimony and Statement to Committee on Legal Affairs and Human Rights, Parliamentary Assembly, Council of Europe

At the highest level, the current Protocol should establish clear and enforceable baseline safeguards in cross-border evidence gathering, but fails to do so. Though new police powers are mandatory, corresponding privacy protections are frequently optional, and the Protocol repeatedly defers safeguards to national law in an active attempt to entice states with weaker human rights records to sign on. The result is a net dilution of privacy and human rights on a global scale. But the right to privacy is a universal right. International law enforcement powers should come with detailed legal safeguards for privacy and data protection. When it comes to data protection, Convention 108+ should be the global reference. Through its recommendations to the Committee of Ministers, PACE has an opportunity to establish a commonly acceptable legal framework for international law enforcement that places privacy and human rights at its core.

Protecting Online Anonymity


Substantively, we have concerns regarding Article 7 of the Protocol, which permits direct access by law enforcement in one country to subscriber identity information held by a company in another country. In our opinion, Article 7 fails to provide, or excludes, critical safeguards contained in many national laws. For example, Article 7 does not include any explicit restrictions on targeting activities which implicate fundamental rights, such as freedom of expression or association, and prevents Parties from requiring foreign police to demonstrate that the subscriber data they seek will advance a criminal investigation.[1]

We are particularly concerned that Article 7’s explanatory text fails to acknowledge that subscriber data can be highly intrusive. Your IP address can tell authorities what websites you visit and what accounts you use. Police can also request the name and address associated with your IP address in order to link your identity to your online activity, and that can be used to learn deeply intimate aspects of your daily habits. Article 7’s identification power undermines online anonymity in a context that embraces legal systems with widely divergent approaches to criminal justice, including some governments that are autocratic in nature. The resulting threat to journalists, human rights defenders, politicians, political dissidents, whistleblowers and others is indefensible.

This is why we've urged PACE to remove Article 7 entirely from the text of the Protocol. States would still be able to access subscriber data in cross-border contexts, but would instead rely on Article 8, which includes more safeguards for human rights. If Article 7 is retained, we’ve urged for additional minimum safeguards, such as:

  • Ensuring that the explanatory text properly acknowledges that access to subscriber data can be highly intrusive.
  • Providing Parties with the option, at least, of requiring prior judicial authorization for requests made under Article 7.
  • Requiring Parties to establish a clear evidentiary basis for Article 7 requests.
  • Ensuring that Article 7 requests provide enough factual background to assess compliance with human rights standards and protected privileges.
  • Requiring notification or consultation with a responding state for all Article 7 demands.
  • Requiring refusal of Article 7 requests when necessary to address lack of double criminality or protection of legal privileges.
  • Providing the ability to reserve Article 7 in a more nuanced and timely manner.
  • Ensuring that Article 7 demands include details regarding legal remedies and obligations for service provider refusal.

Raising the Bar for Data Protection


When it comes to Article 14’s data protection safeguards, we have asked that the Protocol be amended so that signatories may refuse to apply its most intrusive powers (Articles 6, 7 and 12) when dealing with any other signatory that has not also ratified Convention 108+. We also hope the Parliamentary Assembly will support the Committee of Convention 108’s mission and take note that the Committee of Ministers supports making Convention 108 the global reference for data protection, including in the implementation of this Protocol.

Article 14 itself falls short of modern data protection requirements and, in some contexts, will actively undermine emerging international standards. Two examples:

  • Article 14 fails to require independent oversight of law enforcement investigative activities. For example, many oversight functions can be exercised by government officials housed within the same agencies directing the investigations;
  • Article 14 limits the situations in which biometric data can be considered ‘sensitive’ and in need of additional protection, despite a growing international consensus that biometric data is categorically sensitive.

But even with the weak standards contained in Article 14, signatories are explicitly permitted to bypass these safeguards through various mechanisms, none of which provide any assurance that meaningful privacy protections will be in place. For example, any two or more signatories can enter into an international data protection agreement that will supersede the safeguards outlined in Article 14. Such an agreement need not provide a level of protection comparable to the default rules, or even an adequate one.

Signatories can even adopt less protective standards in secret agreements or arrangements and continue to rely on the Protocol’s law enforcement powers. We have therefore recommended that the Protocol be amended to ensure a minimum threshold of privacy protection in Article 14, one which may be supplemented with more rigorous protections but cannot be replaced by weaker standards. This would also help avoid the fragmentation of privacy regimes.

Make Joint Investigative Team Limitations Explicit


Under Article 12, signatories can form joint investigative teams that can bypass core existing frameworks, such as the mutual legal assistance treaty (MLAT) regime, when using highly intrusive cross-border investigative techniques or when transferring personal information between team members.

We have asked that the Protocol be amended so that some of its core intended limitations are made explicit. This is particularly important given that many teams may ultimately operate with a high level of informality, driven by police officers without input or supervision from the other government bodies typically involved in overseeing cross-border investigations. Specifically, we have asked that the Protocol (or, alternatively, the explanatory text) clearly and unequivocally state that participants in a joint investigative team must not take investigative measures within the territory of another participant in the team, and that no participant may violate the laws of another participant of that team.

We also ask that the Protocol be amended so that Parties are obligated to involve their central authorities (and, preferably, the entity responsible for data protection oversight) in the formation and general operation of an investigative team, and that agreements governing investigative teams be made public except to the degree that doing so would threaten investigative secrecy or is necessary to achieve other important public interest objectives.

Protestors Nationwide Rally to Tell Apple: "Don't Break Your Promise!"

Tue, 09/14/2021 - 2:06pm

Yesterday in San Francisco, Chicago, Boston, New York, and other cities across the U.S., activists rallied in front of Apple stores demanding that the company fully cancel its plan to introduce surveillance software into its devices. In addition to protests at stores organized by EFF and Fight for the Future, EFF also took the message directly to Apple’s headquarters by flying a banner above the campus during its annual iPhone launch event today.

The last time EFF held a protest at an Apple store, in 2016, it was to support the company’s strong stance in protecting encryption. That year, Apple challenged the FBI’s request to install a backdoor into its operating system. This year, in early August, Apple stunned its many supporters by announcing a set of upcoming features, intended to help protect children, that would create an infrastructure that could easily be redirected to greater surveillance and censorship. These features would pose enormous danger to iPhone users’ privacy and security, offering authoritarian governments a new mass surveillance system to spy on citizens. 

After public pushback in August, Apple announced earlier this month that its scanning program would be delayed. Protestors this week rallied to urge Apple to abandon its program and commit to protecting user privacy and security. Speakers included EFF Activist Joe Mullin and Executive Director Cindy Cohn.

Mullin told the crowd at the San Francisco protest how essential it was that Apple continue its commitment to protecting users: “From San Francisco to Dubai, Apple told the whole world that iPhone is all about privacy,” said Mullin. “But faced with government pressure, they caved. Now 60,000 users have signed a petition telling Apple they refuse to be betrayed.”

Holding signs that read “Don’t Scan My Phone” and “No Spy Phone,” protestors chanted “No 1984, no, Apple—no backdoor!" and “2-4-6-8, stand with users, not the state; 3-5-7-9, privacy is not a crime!”

“We can't be silent while Tim Cook and other Apple leaders congratulate themselves on their new products after they've signed on to a mass surveillance project,” said Mullin.  “No scanners on our phones!”


Apple has said that it will take additional time over the coming months to collect input about its child protection features. Later this month, EFF hopes to begin that conversation with a public event that will bring together representatives from diverse constituencies who rely on encrypted platforms. Discussion will focus on the ramifications of these decisions, what we would like to see changed about the products, and protective principles for initiatives that aim to police private digital spaces. We hope Apple and other tech companies will join us as well. You can find out more soon about this upcoming event by visiting our events page.

Geofence Warrants Threaten Civil Liberties and Free Speech Rights in Kenosha and Nationwide

Fri, 09/10/2021 - 12:50pm

In the days following the police shooting of Jacob Blake on August 23, 2020, hundreds of protesters marched in the streets of Kenosha, Wisconsin. Federal law enforcement, it turns out, collected location data on many of those protesters. The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) used a series of “geofence warrants” to force Google to hand over data on people who were in the vicinity of—but potentially as far as a football field away from—property damage incidents. These warrants, which police are increasingly using across the country, threaten the right to protest and violate the Fourth Amendment.

Geofence warrants require companies to provide information on every electronic device in a geographical area during a given time period. ATF used at least 12 geofence warrants issued to Google—the only company known to provide data in response to these warrants—to collect people’s location data during the Kenosha protests. The center of each geographic area was a suspected arson incident. However, the warrants reach broadly and require location data for long periods of time. One of the warrants encompassed a third of a major public park for a two-hour window during the protests. The ATF effectively threw a surveillance dragnet over many protesters, using “general warrants” that violate the Fourth Amendment and threaten the First Amendment right to protest free from government spying.
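Mechanically, the demand reduces to a blunt database query: keep every stored location record that falls inside a circle during a time window, and hand over the device identifiers. Below is a minimal Python sketch of that logic; it is our illustration, not Google's actual pipeline, and every name in it is invented:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def geofence_hits(pings, center_lat, center_lon, radius_m, start, end):
    """Return every device seen inside the fence during the window --
    suspects, protesters, and bystanders alike."""
    return {
        p.device_id
        for p in pings
        if start <= p.timestamp <= end
        and haversine_m(p.lat, p.lon, center_lat, center_lon) <= radius_m
    }
```

Note that the only criterion is presence inside the circle during the window, which is exactly why bystanders get swept in along with suspects.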

Police can use geofence warrants to collect information on the identities and movements of innocent people at protests. This can include device information, account information, email addresses, phone numbers, and information on Google services used by the device owner, and the data can come from both Android and Apple devices. Someone who goes to a protest and happens to be nearby when a crime occurs may get caught up in a police investigation. Police in Minneapolis, for example, used a geofence warrant during the protests over the killing of George Floyd. The public only learned about it because the dragnet, centered around a property damage incident, caught an innocent bystander filming the protests, and Google notified him (which it doesn’t always do). The police can also use this data to create dossiers on activists and organizers.

In this way, geofence warrants also eliminate the anonymity that people may rely on in order to protest or otherwise freely associate in public spaces. Law enforcement’s ability to catalogue the location of peaceful protesters will chill the exercise of their First Amendment rights. This is especially problematic when, as with the August 2020 protests in Kenosha, people are taking to the streets to hold the police themselves accountable.

Google recently published data showing that police have issued at least 20,000 geofence warrants over just the last three years, and the sheer volume of these warrants is increasing exponentially year over year. For example, California issued 209 geofence warrant requests in 2018, but in 2020, it issued nearly 2,000. Each warrant may result in the disclosure of information on tens or hundreds of devices. The vast majority of these warrants are issued by state and local police, which makes them difficult to track.

Google must start standing up for its users against this massive overreach. In addition to serious harms to privacy and free expression, geofence warrants operate without transparency. After years of pressure, Google has finally provided some limited data. But the vast majority of geofence warrants remain sealed, with no information from Google or law enforcement on their targets, geographic area and length of time, and their purported justifications. As a result, most people have no way of knowing whether they are caught up in one of these dragnets. Such uncertainty further chills the constitutional rights to freely protest and associate.

The Other 20-Year Anniversary: Freedom and Surveillance Post-9/11

Fri, 09/10/2021 - 12:47pm

The twentieth anniversary of the attacks of September 11, 2001 is a good time to reflect on the world we’ve built since then. Those attacks caused incalculable heartbreak, anger and fear. But by now it is clear that far too many things that were put into place in the immediate aftermath of the attacks, especially in the areas of surveillance and government secrecy, are deeply problematic for our democracy, privacy and fairness. It’s time to set things right.

The public centerpiece of our effort to increase government surveillance in response to the attacks was the passage of the Patriot Act, which will have its own 20th anniversary on October 26. But much more happened, and far too much of it was not revealed until years later. Our government developed a huge and expensive set of secret spying operations that eviscerated the line between domestic and foreign surveillance and swept up millions of non-suspect Americans' communications and records. With some small but critical exceptions, Congress almost completely abdicated its responsibility to check the power of the Executive. Later, the secret FISA court shifted from merely approving specific warrants to acting as a quasi-agency charged with reviewing huge secret programs in their entirety, without either the knowledge or the authority to provide meaningful oversight. All of these are a critical part of the legacy of September 11.

Of course, we did not invent national security or domestic surveillance overreach 20 years ago. Since the creation of the Federal Bureau of Investigation in the early twentieth century, and the creation of the National Security Agency in 1952, the federal government has been reprimanded and reformed for overreaching and violating constitutionally protected rights. Even before 9/11, the NSA’s program FAIRVIEW forged agreements between the government agency and telecom companies in order to monitor phone calls going in and out of the country. But 9/11 gave the NSA the inciting incident it needed to take what it has long wanted: a shift to a collect-it-all strategy inside the U.S. to match, in many ways, the one it had already developed outside the U.S., and the secret governmental support to try to make it happen. As for those of us in the general public, we were told in the abstract that giving up our privacy would make us more secure even as we were kept in the dark about what that actually meant, especially for Muslims and other Americans unfairly targeted.

The surveillance infrastructure forged or augmented in the war-on-terror era is largely still with us. In the case of the United States, in addition to the computer servers, giant analysis buildings, weak or wrong legal justifications, and the secret price tag, one of the lasting and more harmful effects has been on the public. Specifically, we are still too often beholden to the mentality that collecting and analyzing enough information can keep a nation safe. Yet even after all of these years, there’s no clear evidence that you can surveil yourself to safety. This is true in general but it’s especially true for international terrorism threats, which have never been numerous or alike enough to be used to train machine learning models, much less make trustworthy predictions.

But there is copious evidence of ongoing surveillance metastasis: intelligence fusion centers, the national security apparatus, the Department of Homeland Security, and enhanced border and customs surveillance have all been deputized to do things far afield from their original purpose of preventing another foreign terrorist attack. Even without serious transparency, we know that those powers and tools have been used for political policing, surveilling activists and immigrants, denying entry to people because of their political stances on social media, and putting entire border communities under surveillance.

The news in the past 20 years isn’t all bad, though. We have seen the government end many of the specific methods developed and deployed by the NSA immediately after 9/11. This includes the infamous bulk call detail records program (albeit replaced with an only slightly less problematic program). It also includes the NSA’s metadata collection and the “about” searching done under the UPSTREAM program off of the Internet backbone. We also have cut back on the unlimited gag orders accompanying National Security Letters. Each of these was accomplished through different paths, but none of them exist today as they did immediately after 9/11. We even pushed through some modest reforms of the FISA court.

But the biggest good news is the growth of encryption across the digital world, from the encrypting of links between the servers of giants like Google, to the Let’s Encrypt project encrypting web traffic, to the rise of end-to-end encrypted tools like Signal and WhatsApp that have given people around the world greater protections against surveillance even as governments have become more voracious in their appetites for our data. Of course, the fights over encryption continue, but we should note and celebrate our victories when we can.

Other nefarious programs continue, including the Internet backbone surveillance that EFF has long sought to bring before the public courts in Jewel v. NSA. And in addition to federal surveillance, we’ve seen the “collect it all” mentality filter down into our local police departments, both through massive injections of surveillance technology and through the slow enmeshing of local and federal surveillance. We still do not have a full public accounting of the types and scope of surveillance deployed domestically, much less internationally, although EFF is trying to piece some of it together with our Atlas of Surveillance.

Twenty years is a good long time. We now know more of what our government did in the aftermath, and we know how little safety most of these programs produced, along with the disproportionate impact they had on some of our most vulnerable communities. It’s time to start applying the clear lessons from that time and continue to uncover, question, and dismantle both the mass surveillance and the unfettered secrecy that were ushered in when we were all afraid.

Related Cases: Jewel v. NSA

The Catalog of Carceral Surveillance: Voice Recognition and Surveillance

Fri, 09/10/2021 - 11:00am

Prison phone companies have been profiting off the desire for human connection for as long as they’ve been in business. Historically, there’s been one primary instrument for that connection — voice — and only one way to milk it for revenue: by charging exorbitant rates for phone calls. It’s been a profitable business model for both the companies and their partners, the jails and prisons. 

In recent years, though, prison reform advocates and the families of people who are incarcerated, sick of dumping their savings into the maws of these phone providers, have worked to tip this cash cow. They made enough noise that the Federal Communications Commission (FCC) set a cap on per-minute charges on interstate phone calls.

So two of the largest providers of prison communications have devised new ways of mining inmates for income.

Prisoners know their calls while in custody are generally monitored. Prisoners may also be aware that they’re being recorded (both legally and not so legally). Still, it may shock prisoners that Securus and GTL are working to monetize their ability to eavesdrop on and catalogue thousands of voices traversing the phone lines of penal facilities in nearly every state every day.

In the name of security and fraud prevention, these two prison communications companies have developed ways to store and analyze the trove of voices they’ve recorded. The companies create voice prints of people speaking on a prison’s phone lines. The companies claim that, through multi-modal audio mining, these voice prints can be matched to their databases of voices to identify individuals across phone calls and facilities. These systems are already in place throughout the country, including in Arkansas, Florida, and Texas.
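Neither company publishes its matching algorithms, but speaker-recognition systems of this kind generally reduce audio to a fixed-length embedding vector and compare vectors by similarity. Here is a minimal sketch of that general technique; `embed_voice` is a hypothetical stand-in for a proprietary speaker-embedding model, and the threshold value is invented:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(call_audio, voiceprint_db, embed_voice, threshold=0.8):
    """Match one call against a database of enrolled voice prints.

    embed_voice: hypothetical speaker-embedding model mapping raw audio
    to a fixed-length vector (a stand-in for the vendors' proprietary
    "multi-modal audio mining"). voiceprint_db: {person_id: embedding}.
    Returns the best match scoring at or above threshold, else None."""
    probe = embed_voice(call_audio)
    best_id, best_score = None, threshold
    for person_id, enrolled in voiceprint_db.items():
        score = cosine_similarity(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

Any fixed threshold trades false matches against misses, so a system like this, run over thousands of calls a day, will inevitably misidentify some speakers.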

These companies are already notorious for expanding the expense of prison communications beyond the prison to the support networks and families outside. Now they are working on expanding biometric surveillance to the greater carceral community too, using the technology to identify and profile anyone whose voice crosses into a prison. This includes all the parents, children, lovers, and friends of incarcerated people.

In a patent published in January 2021, Securus described collecting audio samples of individuals’ voices both at the moment of intake and while inmates are communicating with people on the outside. Facilities often acquire voice samples by threatening a loss of privileges should an inmate refuse to bow to the surveillance state. 

“Here’s another part of myself that I had to give away again in this prison system,” one inmate recalled in a 2019 article by The Intercept after he was told that failing to help train the system to recognize his voice would result in a loss of his ability to use the phone. As with other efforts to mass collect biometric and personal information, what happens to the data once it’s been collected and stored, including with whom it’s shared and who has access, is still an open question.  

Securus and GTL have other ideas in the works for possible uses, particularly as these voiceprints can be connected to other databases and people, both in and out of prison. 

Securus would like to see automated background checks based on their voice recognition technology. “[D]etainees with criminal records may be released at the end of a short term stay in a holding tank or may be bonded out without being detected," says Securus.

Securus claims that it will be able to use voiceprint verification to identify “unauthorized” callers based on whether a second voice on one end of the phone line differs from an initial, authorized voice. So, if a prisoner’s girlfriend rings in and passes the phone to a child whose voiceprint wasn’t vetted and approved, the phone system can boot the callers from the call altogether.

Both companies would like to be able to map networks of individuals calling inmates, generating profiles of those who call multiple inmates or who stay in contact with their fellow prisoners once released. 

Global Tel*Link has branded its version as Voice IQ. Securus’s own Investigator Pro claims that “You’ve Never Seen Voice Biometrics Like This.”

With these new patents and initiatives, Securus and Global Tel*Link seek to identify, and almost certainly misidentify, more inmates and their families than ever before, forging new frontiers in the ways America’s prison complex can scrutinize the vulnerable. 

Don’t Stop Now: Join EFF, Fight for the Future at Apple Protests Nationwide

Thu, 09/09/2021 - 6:46pm

We’re winning—but we can’t let up the pressure. Apple has delayed its plan to install dangerous mass surveillance software onto its devices, but we need the company to cancel the program entirely. Next week, just before Apple’s big iPhone launch event, we need your help to make sure the company does the right thing.

Activists from EFF, Fight for the Future, and other digital civil liberties organizations have planned protests around the country for Monday, September 13, at 6PM PT to demand that Apple completely drop its planned surveillance software program. You can find a list of the protests here. Protests are already planned in Boston, Atlanta, Washington D.C., New York City, and Portland (OR).

EFF will host a protest at San Francisco Union Square, with signs, stickers, and speakers, but you can protest no matter where you are:

JOIN THE PROTEST AND TELL APPLE: DON'T SCAN OUR PHONES

Whether you’re a longtime fan of Apple’s products or you’ve never used an iPhone in your life, we must hold companies accountable for the promises they make to protect privacy and security. Apple has found its way to making the right choice in the past, and we know they can do it again. 

So bring a friend, wear your EFF merch, and make your voice heard! We’ve got sign designs ready for you to print below, or you can make your own! And you can always add our custom EFF "I do not consent to the search of this device" lock screen to your phone. 

And to make sure that Apple gets the message that encryption is simply too important to give up on, EFF will also be sending it straight to Apple's headquarters—by flying an aerial banner over the campus during their September 14 iPhone launch event.

On September 7, we delivered nearly 60,000 petitions to Apple. Over 90 organizations across the globe have also urged the company not to implement its scanning plans. We’re pleased Apple is now listening to the concerns of customers, researchers, civil liberties organizations, human rights activists, LGBTQ people, youth representatives, and other groups about the dangers posed by its phone scanning tools. But we can’t let up the pressure until Apple commits, fully, to protecting privacy and security.

COVID Protocol: We are committed to upholding public health guidelines related to COVID. Please don't attend if you have any COVID symptoms, and we encourage masking and social distancing.

The Catalog of Carceral Surveillance: Prison Gaming and AR/VR Services

Thu, 09/09/2021 - 9:39am

No matter how many rights are taken away from people in prison, no matter how brutally they are treated by the prison industrial complex, there is one right so fundamental, so essential, that even controversial prison telecommunications company Securus can't bear to see it violated: the right to find new ways to extract money from prisoners and their families. 

In one of its newest patents, granted in February 2021, Securus describes its latest revolutionary technology: a tablet that would be issued to individual inmates, allowing them to make video calls, access information about their case, and pay for temporary access to video games. With this new invention Securus reasserts its determination to extract every last penny from incarcerated people. Just because you are in prison doesn’t mean you get a pardon from the insatiable maw of capitalism.

This tablet isn’t just useful for extracting money from prisoners; Securus proposes that it can also serve as a biometric surveillance device. According to the patent: “the monitoring system may be configured to collect sensor information from the resident communications device in order to detect unsafe conditions during a gaming session. For instance, the monitoring system may collect sensor information from the resident communications device indicating a level of stress or agitation by the resident during game play. One or more gyroscope sensors included within the resident communications device may be used to detect unsafe handling of the resident communications device. Heart rate and blood pressure information detected by sensors worn by the resident may be transmitted via RFID (Radio Frequency Identification) to the resident communications device.”
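The patent names sensors but no thresholds or decision logic, so the "monitoring system" it describes presumably boils down to simple rules over sensor streams, along the lines of this sketch. Every field name and limit below is our invention for illustration:

```python
def detect_unsafe_condition(reading: dict) -> list:
    """Flag a gaming session per the patent's description: 'stress or
    agitation' from biometrics, 'unsafe handling' from the gyroscope.
    All field names and thresholds are invented; the patent gives none."""
    flags = []
    if reading.get("heart_rate_bpm", 0) > 120:
        flags.append("elevated heart rate")
    if reading.get("systolic_bp", 0) > 150:
        flags.append("elevated blood pressure")
    if reading.get("gyro_peak_dps", 0) > 720:  # device shaken or thrown
        flags.append("unsafe handling")
    return flags
```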

Taking this to its logical conclusion, the sensors and biometrics in such a handheld device could be used as an indicator of a prisoner’s mood. If the prisoner gets excited or angry at the game, the system could determine that the prisoner is uncooperative or a recidivism risk, and this data could potentially be used in parole hearings or when deciding punishments and rewards for prisoners. If Securus has its way, your “gamer moment” could land you in trouble, and even then you will have no escape from microtransactions.

Global Tel*Link

Not to be outdone, Global Tel*Link has filed a patent for a VR service for prison inmates. It would, according to the company’s own dystopian description, allow the prisoner to, “for a brief time, imagine himself outside or away from the controlled environment.”

Global Tel*Link primarily intends for this prison VR system to be used as a replacement for prison visitation, so that inmates could interact with friends and family in a controlled and monitored virtual environment, all of which Global Tel*Link would presumably charge for.

But prison inmates aren’t the only ones subject to monitoring and technological “assistance” from Global Tel*Link. The company has also filed a patent for an augmented reality (AR) device to assist and surveil prison guards: essentially “Google Glass: Prison Edition.”

An image from Global Tel*Link’s AR device patent

According to the patent, Global Tel*Link’s AR device worn by guards would have myriad functions. It will allegedly be able to perform facial recognition on inmates and display vital details such as their name and history. It also purports to display the location of any rogue radio frequency signals that could indicate contraband cell phones. Further, it reportedly has object detection powers and can highlight any dangerous or contraband objects, such as weapons, or open doors that should be closed.

The patent also boasts that the AR devices can track prison guards. Global Tel*Link states: “In some situations, activities of guards lack monitoring, giving some staff/guards the opportunity to get involved in importation of contraband goods into the controlled environment.” Who watches the watchers? Apparently Global Tel*Link does.

The system can also make sure prison guards are staying on task. According to the patent, the AR system “stores ... a set of criteria defining whether a monitored activity is determined as ‘normal’ and ‘abnormal.’ For example, the set of criteria includes the time range to complete an assignment, the designed path for an assignment, the dwelling time at one location, the heart rate range, the regular presence locations of inmates, etc.”
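Stripped of patent-speak, that is a rule table checked against each monitored assignment. A hedged sketch of how such criteria might be evaluated; the fields mirror the patent's list, but all names and values are invented:

```python
from datetime import timedelta

# Criteria fields mirror the patent's list ("time range to complete an
# assignment, the designed path, the dwelling time at one location,
# the heart rate range"); every value here is invented.
CRITERIA = {
    "cell_block_check": {
        "max_duration": timedelta(minutes=30),
        "designated_path": ["gate_a", "block_c", "block_d", "gate_a"],
        "max_dwell": timedelta(minutes=5),
        "heart_rate_range": (50, 110),
    },
}

def classify_activity(assignment, duration, path, dwell_times, heart_rates):
    """Return 'normal' or 'abnormal' for a guard's monitored assignment."""
    c = CRITERIA[assignment]
    lo, hi = c["heart_rate_range"]
    if (duration > c["max_duration"]
            or path != c["designated_path"]
            or any(d > c["max_dwell"] for d in dwell_times)
            or any(not lo <= hr <= hi for hr in heart_rates)):
        return "abnormal"
    return "normal"
```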

We are skeptical about the ability of Global Tel*Link to fit so many advanced technologies into a wearable device, especially at a price comfortable to the owners of public and private prisons. However, this will certainly one day be possible, and on that day the guard will join the inmate under the omnipresent eye of the panopticon.

EFF Activists To Lead Protest Demanding Apple Cancel iPhone Scanning Program and Keep Its Privacy Promises To Customers

Thu, 09/09/2021 - 8:51am
Demonstrations Planned at Apple Stores in San Francisco, Boston, Portland, and Atlanta

San Francisco—Electronic Frontier Foundation (EFF) activists will lead a protest on Monday, September 13, at 6 pm PT, demanding Apple drop its planned iPhone surveillance software program, which will endanger the privacy and security of its customers and open a backdoor to increased surveillance around the world.

Demonstrators from EFF and Fight For the Future (FFTF) will rally in front of Apple’s flagship store to send a message to the iPhone giant that the program, a shocking about-face for users who have relied on the company’s leadership in privacy and security, must be cancelled.

To make sure that Apple gets the message that encryption is simply too important to give up on, EFF will also be sending it straight to Apple's headquarters—by flying an aerial banner over the campus during their September 14 iPhone launch event.

“Users want the devices they have purchased to work for them—not to spy on them for others,” said Joe Mullin, a policy analyst on EFF’s activism team who will speak at Monday’s protest. “Delaying the program is a step in the right direction, but it is not enough. Apple needs to take the next step to protect its users and abandon the program.”

Protests at Apple Stores, organized by EFF, FFTF, and OpenMedia, are planned in Boston, Portland, Atlanta, and other cities. A map of the locations can be found at https://www.nospyphone.com/#map.

EFF and partners have delivered petitions with 60,000 signatures telling Apple not to scan customers’ phones. In addition, EFF joined the Center for Democracy and Technology (CDT) and more than 90 other organizations in sending a letter urging Apple CEO Tim Cook to stop the company’s plans to weaken privacy and security on Apple’s iPhones and other products.

The iPhone surveillance software will continuously scan user photos to compare them to a secret government-created database of child abuse images. The parental notification scanner uses on-device machine learning to scan messages, then informs a third party, which breaks the promise of end-to-end encryption. Apple’s new surveillance infrastructure will be all too easy for governments to redirect to greater surveillance and censorship.
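Apple's published design is more elaborate than this (perceptual "NeuralHash" fingerprints compared under a cryptographic private set intersection protocol), but the core of any client-side scanner of this kind is matching image fingerprints against an opaque blocklist, roughly as in the sketch below; `perceptual_hash` and `report` are hypothetical stand-ins:

```python
def scan_photo_library(photos, blocklist_hashes, perceptual_hash, report, threshold=30):
    """Match a user's photos against an opaque fingerprint blocklist.

    perceptual_hash: maps an image to a short fingerprint (Apple calls
    its version NeuralHash). blocklist_hashes: a set the user cannot
    inspect. report: the handoff to human review and authorities. The
    threshold (roughly 30 in Apple's published design) delays reporting
    but does not remove the capability."""
    matches = [photo for photo in photos
               if perceptual_hash(photo) in blocklist_hashes]
    if len(matches) >= threshold:
        report(matches)  # whoever controls the blocklist controls what gets flagged
    return len(matches)
```

The policy problem lives in `blocklist_hashes`: the scanner runs identically whether that set contains child abuse imagery or anything else a government compels into it.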

What:
Don’t Scan Our Phones Protest

Speaker:
EFF Policy Analyst Joe Mullin

Where:
Apple Store
300 Post Street
San Francisco CA 94108

When:
Monday, September 13
6 pm PT

For more about Apple's program:
https://www.eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life

Contact: Joe Mullin, Policy Analyst, joe@eff.org

EFF to Court: FOIA Requires ICE to Release Arrest and Deportation Database Records With Privacy Protections

Wed, 09/08/2021 - 3:38pm

The Freedom of Information Act requires U.S. Immigration and Customs Enforcement (ICE) to disclose deidentified data that would enable greater public oversight of the agency while protecting the privacy of immigrants and others, EFF argued in an amicus brief filed last month in federal court.

The case, ACLU v. ICE, centers on a request by the American Civil Liberties Union (ACLU) to obtain data from ICE databases that show how ICE arrests, classifies, detains, and deports individual immigrants. The databases link this information to particular individuals based on a unique identifier, known as an “A-number,” that ICE assigns to people. An A-number connects the thread of records on each of ICE’s interactions with an individual, giving a look into how the agency is targeting and treating individuals over time. However, disclosing someone’s A-number to the public could invade their privacy by linking this immigration history to them.

To get a better picture of ICE’s activities over time without disproportionately invading individuals’ privacy, ACLU requested that the agency replace each A-number with a new, unique identifier in the released records. A federal district court in New York denied ACLU's request, ruling that FOIA did not require ICE to substitute deidentified values for A-numbers. ACLU appealed to the U.S. Court of Appeals for the Second Circuit. 

EFF’s brief argues that ACLU’s proposed solution “is a vital—and sometimes the only—way to protect legitimate privacy concerns while ensuring that FOIA remains a robust tool for transparency and accountability.” EFF’s brief explains that ACLU’s proposal is effectively a form of redaction because it removes the identifying information in each A-number while keeping the “relational information” that connects individual records in ICE’s database.
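In data-processing terms, the ACLU's proposal is consistent pseudonymization: give every A-number a random substitute and reuse the same substitute wherever that A-number appears, so records about one person still link together without identifying them. A minimal sketch, with invented field names:

```python
import secrets

def pseudonymize(records, id_field="a_number"):
    """Swap each A-number for a random token, reusing the same token
    wherever the same A-number appears. Relational structure survives;
    the identifying value does not."""
    substitutes = {}
    out = []
    for rec in records:
        original = rec[id_field]
        if original not in substitutes:
            substitutes[original] = secrets.token_hex(8)
        out.append({**rec, id_field: substitutes[original]})
    return out

# The arrest and the deportation of the same person stay linked:
rows = [
    {"a_number": "A123456789", "event": "arrest"},
    {"a_number": "A123456789", "event": "deportation"},
]
deidentified = pseudonymize(rows)
assert deidentified[0]["a_number"] == deidentified[1]["a_number"]
assert deidentified[0]["a_number"] != "A123456789"
```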

Courts have always balanced FOIA’s primary goal of transparency with privacy by releasing records in redacted and modified forms. This is especially important for public oversight of the databases that have proliferated and grown at all levels of government. EFF’s brief discusses many examples, such as the Department of Homeland Security’s HART database. This database stores fingerprints, face and iris scans, and other sensitive information on immigrants, and a recent Privacy Impact Assessment found several flaws with its privacy protocols. EFF has filed amicus briefs in other cases requesting government database records in privacy-protecting forms, such as aggregate data.

The brief also describes how many other courts have rightly approved redaction methods for public records even when the redactions modify the underlying data. The California Supreme Court, for example, suggested substituting unique identifiers for license plate numbers in a request, co-litigated by EFF, for records on police use of automated license plate readers. In another context, government agencies often blur video records to prevent identification of people captured in the video while providing the recording’s context.

When courts apply it properly, FOIA is a powerful tool for the public to protect privacy and watchdog government abuse of massive databases. That is why EFF’s brief urged the appellate court to uphold ACLU’s substitution procedure to “ensure that FOIA can help the public understand the scope of the government’s actions without intruding on the privacy of individuals whose data is found in government records systems.”

EFF to Council of Europe: Cross Border Police Surveillance Treaty Must Have Ironclad Safeguards to Protect Individual Rights and Users’ Data

Wed, 09/08/2021 - 2:56pm

This is the third post in a series about recommendations EFF, European Digital Rights, the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, and other civil society organizations have submitted to the Parliamentary Assembly of the Council of Europe (PACE), which is currently reviewing the Protocol, to amend the text before its final approval in the fall.

A global cybercrime treaty is set to reshape how cross-border police investigations are conducted. The treaty, referred to by the inauspicious moniker “Second Additional Protocol to the Council of Europe’s Budapest Convention on Cybercrime,” grants law enforcement intrusive new powers while adopting few safeguards for privacy and human rights. 

Many elements of the Protocol are a law enforcement wish list—hardly surprising given that its drafting was largely driven by prosecutorial and law enforcement interests with minimal input from external stakeholders such as civil society groups and independent privacy regulators. As a result, EFF, European Digital Rights, the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, and other civil society organizations have called on the Parliamentary Assembly of the Council of Europe (PACE), which is currently reviewing the Protocol, to amend the text before its final approval in the fall. 

International law enforcement investigations are becoming increasingly routine, with police forces seeking access to digital evidence stored by service providers around the globe. But in the absence of detailed and readily enforceable international human rights standards, law enforcement authorities around the world are left to decide for themselves the conditions under which they can demand access to personal information. As a result, the lowest common denominator in terms of privacy and other protections will often prevail in cross-border investigations.

Unfortunately, the Council of Europe’s Second Additional Protocol fails to provide the type of detailed and robust safeguards necessary to ensure cross-border investigations embed respect for human rights. Quite to the contrary, the Protocol avoids imposing strong safeguards in an active attempt to entice states with weaker human rights safeguards to sign on. To this end, the Protocol recognizes many mandatory and intrusive police powers, coupled with relatively weak safeguards that are largely optional in nature. The result is a net dilution of privacy and human rights on a global scale.

Cross-border investigations raise difficult questions, as widely varying legal systems clash. How do data protection laws regulate police collection and use of personal data when the collection process spans multiple jurisdictions and legal systems? More specifically, what kinds of legal safeguards from existing human rights and data protection toolkits will govern these and other forms of evidence collection across borders? 

Although data protection laws generally apply to both public and private sectors, many countries have failed to set high standards. Some countries have exempted law enforcement data collection and processing of personal data from their data protection laws, while other privacy laws, like the United States’ Privacy Act, do not apply to foreigners (non-U.S. persons who are not legal permanent residents).

Many different kinds of international evidence-gathering and international law enforcement cooperation happen today, but the draft Protocol seeks to establish a new global standard that will govern several aspects of global policing moving forward. We’ve described some of its more intrusive powers in earlier posts in this series.

The Protocol also includes some human rights and privacy safeguards that apply when states rely on powers outlined by the Protocol. These safeguards are concentrated in Chapter III, Articles 13 and 14 of the Protocol, and their shortcomings will be explored in the remainder of this post.

Article 13 recognizes a general obligation to ensure adequate protections are in place for human rights and civil liberties. The inclusion of this safeguard is important, particularly the obligation to incorporate the principle of proportionality in determining the scope of human rights safeguards. But Article 13 imposes few specific restrictions, and signatories are largely left to determine what protections are “adequate” and “proportionate” on the basis of national law. So, in practice, there are few direct obligations for states to impose specific safeguards in specific investigative contexts. 

Article 14, by contrast, does impose a number of detailed, specific data protection obligations that would apply to any personal information obtained through the Protocol’s new law enforcement powers. However, Article 14’s standards are weak and even these weak safeguards can be circumvented by any two or more signatories by agreement.

Lowering the Bar for Data Protection

The standards set by Article 14 fail to meet modern data protection requirements and, in places, actively seek to undermine emerging international standards. For example, Article 14 obligates parties to ensure that personal data collected through the Protocol’s powers is used in a manner that is consistent with and relevant to the criminal investigative purposes that prompted its collection. However, contrary to most other data protection instruments, Article 14’s data protection safeguards don’t require all processing of personal data to be “adequate, fair and proportionate” to its objective, while Article 13 requires only “adequate” safeguards and a general respect for the principle of proportionality. Adequate, fair, and proportionate are important, distinct conditions for accessing personal data, recognized in several modern data protection laws across the world. Each term imposes different requirements when applied to the collection, use, and disclosure of personal information. The absence of all three terms from the Protocol is troubling, as it indicates that fewer, weaker, and outdated conditions for accessing data will be tolerated.

Article 14’s safeguards are also problematic in that they do not require law enforcement to be subject to oversight that is completely independent. Oversight needs to be impartial and free from direct external influences, but Article 14’s explanatory text (which was never subject to public consultation) allows oversight bodies to be subjected to indirect influence. Under Article 14, for example, many oversight functions can be conducted by government officials housed in the same agencies directing the cross-border investigations being supervised. In addition, while oversight officials must not receive instruction from the state regarding the outcome of a particular case, Article 14 allows states to exert influence and control over general oversight operations. Article 14 even expressly prohibits Parties from requiring the use of independent regulators to protect the privacy of personal data transferred to other Parties through the Protocol’s investigative powers. All in all, Article 14 fails to meet minimal standards of independent oversight.

Finally, Article 14 of the Protocol also outlines some safeguards for biometric data, but ultimately these are insufficient and undermine a growing international recognition that biometric data is sensitive and requires additional protection in all instances. Biometric data involves mathematical representations of people’s personal features, such as finger, voice or iris prints, and fuels a range of intrusive technologies such as facial recognition. Because of its ability to persistently identify individuals through automated means, biometric information is generally considered sensitive by courts and legislatures at the Council of Europe and around the world.

Despite this growing recognition of the sensitive nature of biometric information, Article 14 prohibits states from applying additional safeguards unless biometric information can be shown to pose an additional risk to privacy. While the Protocol provides little guidance regarding what might constitute this added risk, the result is to provide a narrower scope of protection to biometric data than required by competing laws such as the GDPR, the EU Law Enforcement Directive, and the Council of Europe’s own Convention 108+, each of which recognizes the sensitivity of all biometric data in all contexts. This creates ambiguity in defining the scope of protection applied to bilateral transfers, as many anticipated signatories to the Protocol have also signed Convention 108+ and committed to its higher standards of biometric protection, while many others have not. While the explanatory text appears to acknowledge that parties bound by Convention 108+ will need to apply that treaty’s heightened biometric protections, Article 14 also prohibits signatories from applying any additional “generic” data protection conditions to any data transfer between signatories. Moreover, many Parties to the Protocol will not be bound by Convention 108+ and will be prevented from ensuring that the appropriate level of protection is applied when sensitive biometric information is transferred to other jurisdictions by law enforcement.

For all of these reasons, we have asked that the Protocol be amended so that signatories may refuse to apply its most intrusive powers (Articles 6, 7 and 12) when dealing with any other signatory that has not also ratified Convention 108+.

Anyone Can Ignore Even These Safeguards

Even the weak standards applied by Article 14 are effectively optional under the Protocol. Signatories are explicitly permitted to bypass these safeguards through various mechanisms, none of which provide any assurance that adequate privacy protections will be in place.

For example, any two or more signatories can enter into an international data protection agreement that will supersede the safeguards outlined in Article 14. There is no obligation to ensure that superseding agreements provide an adequate level of protection, or even a level comparable to the safeguards that are actually set out in Article 14. And parties can continue to rely on the Protocol’s law enforcement powers while applying weaker safeguards established in any such superseding agreement instead of the ones in Article 14. Indeed, Article 14’s explanatory text presents the so-called EU-US ‘Umbrella’ agreement—which provides safeguards and guarantees of lawfulness for data transfers—as a paradigmatic example of a qualifying agreement. But questions have been raised as to whether the Umbrella agreement complies with the EU Charter of Fundamental Rights.

Even if no binding international agreement is in place, Parties can bypass the safeguards in Article 14 by entering into ad-hoc agreements with each other. These agreements need not be formal, comprehensive, binding, or even public. If a joint investigation between law enforcement authorities in multiple jurisdictions is underway, individual frontline police officers can even decide to adopt their own agreements, raising the prospect that privacy safeguards will be sacrificed for investigative convenience. (A more detailed analysis about the Protocol’s joint investigation section will be published soon.)

To ensure that there are at least some baseline safeguards in place, we have therefore recommended that the Protocol be amended to ensure that the specific protections outlined in Article 14 establish a minimum threshold of privacy protection. These may be supplemented with more rigorous protections, but cannot be replaced by weaker standards. 

Limits on Personal Data Transfer Limits

Article 14 also undermines a key safeguard used by independent privacy regulators in cross-border investigations, where there is frequently no direct opportunity to enforce safeguards once personal data has been transferred by law enforcement to another country. Because of this, many data protection regimes require independent regulators to block data transfers to states that fail to provide certain minimum levels of privacy protection. Article 14 places strict limits on data protection authorities’ ability to stop law enforcement from transferring personal data to other jurisdictions, removing a critical tool from the human rights protection toolkit.

Under most legal systems that rely on data transfer restrictions as a privacy safeguard, independent regulators determine whether another state’s legal system provides sufficient safeguards to permit law enforcement transfers. However, Article 14 “deems” that its safeguards (or any safeguards adopted in any international data protection agreement between any two parties to the Protocol) are sufficient to meet any signatory’s national standards, removing this important adjudicative role from independent regulators. The Protocol does allow signatories to suspend data transfers if Article 14’s own safeguards are breached, but only with substantial evidence of a systematic or material breach, and only after engaging in consultation with the suspended country. By setting a restrictive evidentiary standard and obligating the executive branch of a state to enter negotiations prior to suspending transfers, Article 14 further undermines the ability of privacy regulators to ensure an adequate level of data protection. 

To prevent the Protocol from diminishing the important role played by data protection authorities in adjudicating and safeguarding privacy in cross-border law enforcement transfers, we have asked that Article 14’s attempts to limit data transfer restrictions be removed. 

Conclusion

Some have defended the Second Additional Protocol in its current configuration, saying it's needed to forestall efforts that might lead to a more intrusive framework for cross-border policing. Specifically, Russia’s proposal for another global comprehensive cybercrime treaty is gaining support at the United Nations. The UN treaty would address many of the same investigative powers addressed in the Protocol and the Budapest Convention.

Civil society is raising alarm bells about the Russian-led cybercrime initiative. Human Rights Watch has pointed out, for example, that the treaty is being led by countries that use cybercrime laws as a cover to crack down on rights. The Council of Europe should be advancing a human rights-respecting alternative to the UN initiative. But the Protocol, as it currently stands, is not it.

PACE has an opportunity to substantially improve human rights protections in the Protocol by recommending to the Committee of Ministers—the CoE's decision-making body—amendments that will fix the technical mistakes in the Protocol and strengthen its privacy and data protection safeguards. With detailed law enforcement powers should come detailed legal safeguards, not a one-sided compromise on privacy and data protection.

The Catalog of Carceral Surveillance: Mobile Correctional Facility Robots

Wed, 09/08/2021 - 11:52am

There are too many people in U.S. prisons. Their guards are overworked, underpaid, and prone to human errors. Some have taken this as a sign that we need to rework our criminal justice system. Prison technology companies have another approach prepared: robots. 

Human guards, of course, have pesky needs like work breaks and food. They’re entitled to paychecks and sick days. They possess flaws that can lead to outbursts of violence, racism, and sexual harassment. 

“The ratio of prisoners to prison guards is too high,” wrote prison telecommunications company Securus in a recent patent application, and “a substantial amount of the total funds available to correctional facilities is spent on guards, leaving little money left over to pay for programs to reduce recidivism.”

Securus is a company notorious for overcharging inmates for phone calls, but like its major competitor Global Tel*Link, it has been diversifying its offerings in the years since federal efforts to rein in prison phone costs.

It’s a trash can empowered to decide if you get your commissary items or an electroshock. What could go wrong?

According to a patent application for “mobile correctional facility robots” filed by Securus, these robots can perform many tasks: delivery of parcels and visitors, monitoring of the environment for suspicious words or actions, and execution of “non-lethal force” (actually lethal in many cases) such as “an electroshock weapon, a rubber projectile gun, gas, or physical contact by the robot with an inmate.” 

The company states that the robots can deploy such force “autonomously” or “at the remote direction of a human operator.” In fact, these robots can perform many of the same responsibilities as human prison guards but without a lot of those aforementioned pesky human needs. 

Securus also suggests that the cost savings of this approach could go to harm reduction programs — though we suspect they’re likely to instead go toward increasing shareholder profits.

The flowchart that Securus robots would use to make decisions. “Enforcement action” here is a euphemism for potentially lethal disciplinary methods including rubber bullets and electrical shock.

A “Central Controller” can direct multiple robots to work together. While Securus has said little about it, this central controller is presumably a computer using some form of artificial intelligence to coordinate the robots as they monitor an area for bad words or perform an “enforcement action.” One might be concerned about granting such a robot the ability to do something like electrocute an inmate it has decided is threatening—especially given the well-documented shortcomings of AI systems.

Depending on the data set such an AI is trained on, it might decide that a hug, a fistbump, or a high five is a threatening gesture. Someone tripping and falling might likewise be flagged as a threat. AI also tends to reflect cultural biases: if the people creating the training data view people of color as more intimidating, that bias will be infused into the AI as well.

I’m afraid I can’t let you buy peanuts from the commissary, Dave.

There are, of course, many activities that, without context, might seem strange, perhaps even threatening: sitting in an unusual position, having a non-violent psychological episode, or holding a threatening broom while performing assigned cleaning duties. Again, thanks to the well-documented infallible nature of AI, we are certain autonomous robot prison guards will never deploy these potentially lethal weapons against an inmate undeservedly.

You will comply for the benefit of Securus shareholders.

Securus robots won’t just be versed in the punishments of today; they’ll also be equipped to detect the crimes of tomorrow. With the power to monitor for “events of interest,” the robots may identify a predetermined spoken word or behavior as a cause for reasonable suspicion.

On the surface, the Securus robots look similar to the Knightscope guard robots that have been deployed in parks, garages, and other public areas. Hopefully, the Securus robots won’t have the same problems with drowning, stairs, blindness, or interference with their LIDAR-based navigation systems.

A Knightscope robot drowns itself after learning it won’t be receiving a paycheck this month. 

Thanks to the well documented infallible nature of AI and Securus, those who run a prison or Immigration and Customs Enforcement detention camp will soon have an alternative to human guards: robots able to dole out twice the less-lethal force for half the cost.

The Catalog of Carceral Surveillance: Exploring the Future of Incarceration Technology

Tue, 09/07/2021 - 2:57pm

Prison technology and telecom companies such as Securus and Global Tel*Link are already notorious for their ongoing efforts to extract every last penny and destroy any last shred of privacy afforded to incarcerated people. They have so far succeeded in their goals, operating in thousands of prisons in every state in the U.S. But they are not content to rest on their laurels.

Securus and GTL have spent the last several years inventing new and improved ways to extract money from incarcerated people, violate human rights, and surveil not only prisoners but also their families and friends.

Over the next two weeks we will be shedding light on some of the patents and technologies these companies have been working on, which either are already actively used or may soon be coming to prisons across the country.

Our hope is that this project will expose some of the horrifying technologies that Securus and GTL are working on, bringing these ideas before the public rather than allowing them to flourish in the obscurity of patent documents.

View the catalog of prison surveillance below. New posts will be added daily.

*Do you have experience with these technologies? We'd love to hear from you. Get in touch via info@eff.org.*

The Catalog of Carceral Surveillance: Monitoring Online Purchases of Inmates’ Family and Friends

Tue, 09/07/2021 - 2:51pm

Prison wardens and detention center administrators have, for years, faced what they believe to be a serious problem. While they can surveil every aspect of the lives of the people imprisoned in their facilities, they typically have no ability to violate the privacy and civil liberties of the friends and family of incarcerated people. Fortunately for prison administrators, Securus has a solution.

Securus is one of the prison telecommunications companies notorious for overcharging inmates for the privilege of communicating with their loved ones. It has filed a patent application describing a method of “linking controlled-environment facility residents and associated non-resident telephone numbers to ... e-commerce accounts associated with the captured telephone number” and gathering “information about purchases made by a non-resident associated with the accessed e-commerce account.”

In other words, Securus wants to capture the phone numbers of everyone a prisoner talks to, including friends and family, and use that information to scrutinize their e-commerce purchases. 

In their patent application, Securus provides the following example of how prisons might use this invasive and dangerous technology. 

The flowchart submitted with the patent describing how the e-commerce surveillance system would work. 

“[I]nmate call records may show that an inmate made calls to their girlfriend before escaping. Investigators question the girlfriend, but she provides no help. However, investigators employ embodiments of the present systems and methods, using the DTN [Dialed Telephone Number] used by the inmate to call the girlfriend, to find that the girlfriend had purchased skiing equipment through an e-commerce app associate with the DTN and made a reservation through another (or the same) e-commerce app (such as a homestay app) for a house in a remote area in the Colorado mountains. Investigators find the escaped convict and the girlfriend at the house using the data obtained through the invention.”

The system described in Securus’s patent would be a massive civil liberties violation, and a dangerous expansion in the powers of prison administrators to surveil people not under their carceral control. One may wonder, though, how Securus would implement this patent in practice. Surely Securus isn’t suggesting that prison wardens would hack the accounts of people who make phone calls to prisons. And not many people would willingly give a prison administrator access to their Amazon or other online shopping account.

Luckily for wardens, Securus has come up with a “solution” for this problem: an end-user license agreement. 

In their patent application Securus suggests that prison officials could obtain a waiver from anyone wishing to communicate with an incarcerated person, that would allow prison officials to then root around in the proverbial sock drawer of that person's e-commerce purchases. Understanding that very few people would knowingly agree to this waiver, Securus helpfully suggests: “Such a waiver may be part of an end user agreement associated with use of controlled-environment facility communication services, including, such as by way of example, a controlled-environment facility communications app.” The patent continues: “the waiver may allow the resident's controlled-environment facility, a controlled-environment facility communication vendor, law enforcement, and/or the like, to garner passwords from the non-resident's mobile device, computer, etc. to use for such access.”

One solution to the privacy nightmare that is Securus is to follow the state of Connecticut’s lead. In June 2021, Connecticut became the first state in the United States to make all calls in and out of prisons free. Other states should do the same, and given the massive wave of activism around prison abolition and reform, it seems likely that they might.

Securus seeks to force a Faustian bargain upon people who love someone in prison: give us the power to monitor your online purchases, or wait to talk to your loved one until after we let them go. Securus should withdraw this odious idea.

Video Briefing Wednesday: EFF and Partners Will Deliver to Apple Petitions with 50,000 Signatures Demanding End to Phone Scanning Program

Fri, 09/03/2021 - 2:22pm
Apple Customers Tell Tech Giant: Don’t Scan Our Phones

San Francisco—On Wednesday, September 8, at 9 am PT, internationally renowned security technologist Bruce Schneier and EFF Policy Analyst Joe Mullin will speak on a panel with digital rights activists delivering petitions with more than 50,000 signatures calling on Apple to cancel its iPhone surveillance software program. The briefing will be held via Zoom.

Apple’s announcement last month that it plans to install two scanning systems on all of its phones was a disappointment that stands to shatter the tech giant’s credibility on protecting users’ privacy. The photo-scanning system harms privacy for all iCloud Photos users by continuously scanning their photos and comparing them to a secret government-created database of child abuse images. The parental notification scanner uses on-device machine learning to scan messages, then informs a third party, which breaks the promise of end-to-end encryption.

Acknowledging the outcry by customers and activists against the program, Apple said it’s gathering more feedback and making improvements before launching the scanning features. This does not go far enough. The petitions call on Apple to abandon its surveillance plan, which goes against the company’s long-standing commitment to privacy and security, as well as its history of rejecting backdoors to access content on our phones. EFF, Fight for the Future, and OpenMedia gathered signatures for the petitions that will be emailed to Apple on September 8. EFF is one of 90 organizations that signed on to a letter urging Apple CEO Tim Cook to stop the company’s plans to weaken privacy and security on Apple’s iPhones and other products.

Schneier and Mullin will discuss how Apple’s program opens the door to other surveillance. It will give ammunition to authoritarian governments wishing to expand surveillance and censorship.

WHAT:
Don’t Scan our Phones Petitions to Apple

WHEN:
Wednesday, September 8, 9 am PT

WHO:
Bruce Schneier, Security Technologist
Caitlin Seeley George, Director of Campaigns and Operations, Fight for the Future
Joe Mullin, Policy Analyst, Electronic Frontier Foundation
Matt Hatfield, Director of Campaigns, OpenMedia

RSVP for Live Zoom Link:
https://us02web.zoom.us/meeting/register/tZYvduytrD0vHNM122yw2kAAqnfyk9EQZpdg

For more on Apple’s phone scanning:
https://www.eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life
https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-will-come-apple-has-opened-backdoor-increased-surveillance

Contact:
Joe Mullin, Policy Analyst, joe@eff.org
Caitlin Seeley George, Director of Campaigns and Operations, Fight for the Future, cseeleygeorge@fightforthefuture.org
Matt Hatfield, Director of Campaigns, OpenMedia, matt@openmedia.org

Delays Aren't Good Enough—Apple Must Abandon Its Surveillance Plans

Fri, 09/03/2021 - 12:56pm

Apple announced today that it would “take additional time over the coming months to collect input and make improvements” to a program that will weaken privacy and security on iPhones and other products. EFF is pleased Apple is now listening to the concerns of customers, researchers, civil liberties organizations, human rights activists, LGBTQ people, youth representatives, and other groups, about the dangers posed by its phone scanning tools. But the company must go further than just listening, and drop its plans to put a backdoor into its encryption entirely.

The features Apple announced a month ago, intending to help protect children, would create an infrastructure that is all too easy to redirect to greater surveillance and censorship. These features would create an enormous danger to iPhone users’ privacy and security, offering authoritarian governments a new mass surveillance system to spy on citizens. They also put already vulnerable kids at risk, especially LGBTQ youth, and create serious potential for danger to children in abusive households.

The responses to Apple’s plans have been damning: over 90 organizations across the globe have urged the company not to implement them, for fear that they would lead to the censoring of protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children. This week, EFF’s petition to Apple demanding they abandon their plans reached 25,000 signatures. This is in addition to other petitions by groups such as Fight for the Future and OpenMedia, totaling well over 50,000 signatures. The enormous coalition that has spoken out will continue to demand that user phones—both their messages and their photos—be protected, and that the company maintain its promise to provide real privacy to its users.

Without Changes, Council of Europe’s Draft Police Surveillance Treaty is a Pernicious Influence on Latam Legal Privacy Frameworks

Fri, 09/03/2021 - 11:18am

The Council of Europe (CoE) is on track to approve the Second Additional Protocol to the Budapest Cybercrime Convention, which will set new invasive international rules for law enforcement access to user data and cooperation between States conducting criminal investigations. In our recent joint civil society submission to the CoE’s Parliamentary Assembly we recommended 20 solid amendments to preserve the Protocol’s objective—facilitating efficient and timely cross-border investigations between countries with varying legal systems—while embedding a much-needed baseline to safeguard human rights. In this post, the second in a series about our recommendations, we examine how the current Protocol's text threatens privacy rights in Latin America, a region with deeper challenges for fulfilling human rights safeguards and the rule of law compared to many European countries.

Article 7 of the Protocol is among the most troubling provisions, raising privacy concerns regarding police cross-border access to subscriber data. As we have written, Article 7 establishes procedures for law enforcement in one country to request access to subscriber data directly from service providers located in another country under the requesting country’s legal standards. This can create unjustifiable asymmetries in national law by applying to foreign authorities a more permissive, less privacy-protective legal basis to access subscriber data than what is granted to local law enforcement agencies under its own local law.

Article 7 focuses on authorizing police access to subscriber data. Why does subscriber data matter? Your IP address can tell authorities what websites you visit and who you communicate with. It could reveal otherwise anonymous online identities, your social networking contacts and, at times, even your physical location. Police can request your name and other subscriber data to link your identity to your online activity, and that information can be used to create a detailed police profile of your daily habits.

When and How Cross-Border Police Direct Cooperation Rules Will Perniciously Affect Latin American Countries

We see at least two possible scenarios for how pernicious Article 7 could be on Latam frameworks for lawful access to communications data in criminal investigations. First, this provision can serve as an influence to drive down standards in the region for accessing subscriber information (and unveiling a user’s identity). Second, it can potentially export globally a broader definition of what constitutes “subscriber information,” expanding the categories of communications data encompassed by a third-class protection standard. All in all, Article 7 contains serious flaws that should be fixed before it can serve as a robust rights-protective model to pursue and endorse.

Once the CoE finally adopts the draft Protocol, countries in Latin America that are already parties to the original 2001 Budapest Convention will be able to ratify or accede to the Second Protocol. To date, those countries are Argentina, Chile, Costa Rica, Colombia, the Dominican Republic, Panama, Paraguay, and Peru. Brazil and Mexico were invited to become parties and currently act as observers. The Budapest Convention, the first international treaty to address internet and computer crime by harmonizing national laws and increasing cooperation among nations, has been influential in the region, acting as a model for cybercrime regulation and the production of electronic evidence even for countries that are not parties to the Convention. Because many law enforcement authorities want access to potential electronic evidence across borders, Latin American countries will likely seek accession to the Protocol for its cooperation rules. But if the final text passes without our recommended amendments, the Protocol will encourage Parties to reinforce the weaker privacy standards already in place in some Latam countries, instead of fostering the growing trend in other nations in the region where domestic laws or court judgments have provided stronger human rights protections.

That’s because of another concerning mandate in Article 7: in countries with laws that prevent service providers from voluntarily responding to subscriber data requests without appropriate safeguards—such as a reasonable ground requirement and/or a court order—Article 7 requires these legal “impediments” be removed for cross-border requests. Those countries with higher standards are allowed to reserve the right not to abide by Article 7, but only at the time of the signature/ratification/approval, and not at a later stage. This means that in the future, Parties will be stuck with the inherent flaws in Article 7, and will be unable to designate Article 8—another, slightly more privacy-protective provision in the Protocol for getting data across borders—as the sole means of accessing some or all types of subscriber data, even if their legal systems, because of new laws or court decisions, eventually recognize additional safeguards for subscriber information.

Moreover, although the Protocol stipulates important data protection safeguards, its current text contains provisions that will allow State parties to bypass them (as we will further explain in the third post of this series).

Levelling Down Subscriber Information Protections

Countries in the region have adopted varying degrees of privacy safeguards in criminal investigations. Mexico's legal framework has good standards, at least on the books, requiring judicial authorization for disclosing stored communications data, including subscriber information, and calling for authorities to specify targets and time periods as well as justify the need for the information sought. In Brazil, authorities with express legal power to access internet users’ subscriber data (dados cadastrais, in Portuguese) aren’t required to obtain a warrant. Their direct requests to service providers must indicate the explicit legal basis for the request and must specify the individuals whose information is being sought (generic and non-specific collective requests are prohibited).

But Brazilian police agencies dispute that direct requests are authorized only for certain legally specified cases and push for a broader interpretation of their powers. The National Association of Mobile Service Providers (ACEL) went to Brazil's Supreme Court to assert users have constitutional privacy protections when the government is requesting communications data, including subscriber information. But with the case still pending in court, a proposal to reform the country's Criminal Procedure Code is looking to side with law enforcement by generally authorizing police and prosecutors to directly request subscriber data from service providers.

This push to allow law enforcement agents to access subscriber data without a prior court order reflects bad practices adopted in some Latin American countries like Panama, Paraguay, and Colombia. In Colombia, a simple administrative resolution sets out that telecommunications service providers must allow authorities to remotely connect with their systems to obtain user information. Other countries, like Argentina, do not have legal rules or case law specifically addressing law enforcement access to subscriber information.

The Protocol’s Article 7 rules for service providers' direct cooperation with law enforcement align with the region’s weaker privacy standards. They also hinder companies’ best-practice commitments to interpret local laws in the way that provides the most privacy protection for users. In collaboration with EFF, leading digital rights groups in Latin America and Spain have been pushing companies to make greater commitments on that front. Who Defends Your Data assessments, inspired by EFF's Who Has Your Back project, have encouraged companies to improve their privacy practices in recent years, demonstrating that local privacy laws should be the floor, and not the ceiling, for companies' efforts in supporting users’ fundamental rights.

For example, Chilean ISPs have adopted best practices to require a judicial order before handing over users’ information (see GTD's and Claro's law enforcement guidelines) and to only comply with individualized personal data requests (in addition to Claro, see Entel's guidelines). Chilean law does not explicitly create an artificial distinction among different types of communications data, but instead the country’s Criminal Procedure Code allows a more protective standard by requiring a prior warrant in all proceedings that affect, deprive, or restrict an accused or a third-party’s constitutional privacy rights. Since 2017, Derechos Digitales’ Who Defends Your Data reports have been calling on Chilean companies to commit to the most protective interpretation of legal standards concerning communications data disclosures including subscriber data.

In early 2020, Chile's Prosecutor’s Office sought to obtain all mobile phone numbers that had connected to antennas in Santiago’s subway stations, where fires marked the beginning of the country's 2019 social uprising. By obtaining the mobile phone numbers, it would be possible to identify their owners. Most of the ISPs did not comply with the prosecutor’s direct request without a judicial examination. This case is a clear demonstration of how subscriber information, which unveils a user’s identity linked to specific activities, can provide sensitive details of individuals’ daily lives.

In our submission, we recommend removing Article 7 since it erodes privacy standards even where appropriate protections already exist. This amendment would permit Article 8, mentioned above, to become the primary legal basis by which subscriber data is accessed in cross-border contexts. Article 8 authorizes the requesting authority to submit a production order to the receiving national authority so it can compel local service providers to produce stored subscriber data and “traffic data.” Even though Article 8 could also benefit from additional safeguards, such as setting a prior judicial authorization standard, it provides stronger protections than Article 7. Article 8 requires the involvement of the receiving Party’s national authorities, which can, applying standards contained in their own national laws, compel service providers located in their territory to produce subscriber data.

Broadening the Scope of Third-Class Protection for Subscriber Information

We wrote about the “second-class” protection still granted to metadata in the region. Latam domestic privacy laws often treat metadata as less worthy of protection compared to the contents of a communication. The Budapest Convention has always promoted the distinction between “traffic data” (equivalent to “metadata”) and “subscriber information,” and defines them separately. The Protocol uses this distinction to incorporate a lower level of protection for subscriber information in the context of cross-border requests. But as our 13 Principles on the application of human rights to communications surveillance state, the formalistic categories of “content,” “subscriber information,” and “metadata” are no longer appropriate for measuring how intrusive communications surveillance is for individuals’ private lives and associations. While it has long been agreed that communications content deserves significant protection in law because of its capability to reveal sensitive information, it is now clear that other information arising from communications, including subscriber data and metadata, may reveal deeply sensitive aspects about an individual, and thus deserves similarly robust protections.

Unfortunately, the Convention’s broad definition of subscriber information, which includes IP addresses, exacerbates the Protocol’s callous treatment of this category of information, giving it third-class treatment.

That definition goes beyond, for example, the Brazilian legal definition of subscriber data (dados cadastrais). In fact, IP addresses are considered part of connection and application logs, only disclosed by means of a prior judicial authorization—without the exception for direct requests, referred to above, that may apply to subscriber data. As the Protocol’s Explanatory Report underlines, IP address-related information and other access numbers may be treated as traffic data in some countries, which is why the Second Additional Protocol (Article 7, paragraph 9.b) allows Parties to reserve the right not to apply Article 7 to certain types of access numbers.

However, Article 7, paragraph 9.b’s reservation is only possible when disclosing those access numbers through direct cross-border cooperation “would be inconsistent with the fundamental principles of [the] domestic legal system.” But in many Latam legal systems, judicial control and/or reasonable-grounds requirements for access to communications data aren't clearly spelled out. They often rely on legislation that does not clearly distinguish types of information, case law explicitly addressing only telephone communications, or protective interpretations fostered by companies’ best practices. This situation could not only hamper the use of the reservation clause when countries eventually sign the Protocol, but may also function as a tool for spreading a general understanding of the scope of “subscriber information,” conveniently served with third-class protection standards.

Conclusion

In their landmark ruling affirming data protection as a fundamental right under the country’s Constitution, Brazilian Supreme Court justices pointed out how changes in our technological landscape demand more cautious treatment of subscriber information. Justice Rosa Weber recalled public telephone directories that contained people’s names, telephone numbers, and addresses, asserting that “what could be done from the publicization of such personal data [a few decades ago] is not comparable to what can be done at the current technological level, where powerful data processing, cross-referencing and filtering technologies allow the formation of extremely detailed individual profiles.” Also mentioning public telephone directories, Justice Cármen Lúcia went as far as to say “this world is over!”—referring to how personal information can now be gathered and analyzed to reveal details of our personal lives.

Article 7 of the Second Protocol is way out of step with the realities of how today’s technology can be used to threaten privacy, relying on an outdated and incorrect assumption, put forward in the Protocol’s Explanatory Report, that subscriber information “does not allow precise conclusions concerning the private lives and daily lives of individuals concerned.”

We hope that CoE’s Parliamentary Assembly removes Article 7 in its entirety from the text of the Protocol, allowing Article 8 to form the primary basis by which user information is disclosed in cross-border contexts. This would let cross-border cooperation in accessing people’s private information properly align with advancements in privacy protections being made in national law. That will help to avoid the drift towards third-class protection for user information that can unveil people’s identities and link them to specific online activities. Alternatively, if the Parliamentary Assembly retains Article 7, it must be amended to prevent foreign efforts to sidestep domestic safeguards when seeking access to user data.

The Assembly has the opportunity to ensure respect for human rights in cross-border police investigations. Improving the Protocol’s safeguards will carry weight with stakeholders at the national level and influence their decisions to champion, instead of discard, proper privacy safeguards. CoE’s international rules should serve to tip the scale in favor of protecting fundamental rights instead of embracing surveillance tactics sorely lacking in human rights protections.

Read more on this topic:

EFF to Council of Europe: Flawed Cross Border Police Surveillance Treaty Needs Fixing—Here Are Our Recommendations to Strengthen Privacy and Data Protections Across the World

Joint Civil Society Comment to the Parliamentary Assembly of the Council of Europe (PACE) on the Second Additional Protocol to the Cybercrime Convention (CETS 185) 

Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty

Global Law Enforcement Convention Weakens Privacy & Human Rights

Joint Civil Society letter for the 6th round of consultation on the Cybercrime Protocol on the first complete draft of the Protocol



Introducing “apkeep,” EFF Threat Lab’s new APK Downloader

Thu, 09/02/2021 - 8:01pm

To track state-sponsored malware and combat the stalkerware of abusive partners, you need tools. Safe, reliable, and fast tools. That’s why EFF’s Threat Lab is proud to announce our very own tool to download Android APK files, apkeep. It enables users to download an Android APK, or a number of APKs, directly from the command line—either from the Google Play Store (with Google credentials) or from a third party that mirrors the Play Store apps (no credentials needed).

Written in async Rust, this tool prioritizes simplicity of use, memory safety, reliability, and speed. It has also been compiled to a number of architectures and platforms, including Android’s armv7 and aarch64 platforms to download apps directly from an Android device using Termux. It is available right now for you to use.

In the future, we hope to expand apkeep’s functionality by adding support for the Amazon Appstore, allowing downloads of older app versions, and adding additional architectures.

We are proud to give back to the pool of tools that the application security community has created and that we use every day. We hope our own contribution will provide a useful addition to the toolbox.

Further Details and Examples

The simplest example is to download a single APK to the current directory:

apkeep -a com.instagram.android .

This downloads from the default source, APKPure, which does not require credentials. To download directly from the Google Play Store:

apkeep -a com.instagram.android -d GooglePlay -u 'someone@gmail.com' -p somepass .

Refer to USAGE to download multiple APKs in a single run.

Specify a CSV file or individual app ID

You can either specify a CSV file which lists the apps to download, or an individual app ID. If you specify a CSV file and the app ID is not specified by the first column, you’ll have to use the --field option as well. If you have a simple file with one app ID per line, you can just treat it as a CSV with a single field.
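For example, assuming a file named apps.csv whose second column holds the app IDs (the file name and field number here are only placeholders; consult apkeep’s USAGE documentation for the exact flags), a batch download to the current directory might look like this:

apkeep -c apps.csv --field 2 .

If the file instead has one app ID per line, the --field option can simply be omitted.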

New Texas Abortion Law Likely to Unleash a Torrent of Lawsuits Against Online Education, Advocacy and Other Speech

Thu, 09/02/2021 - 2:47pm

In addition to the drastic restrictions it places on a woman’s reproductive and medical care rights, the new Texas abortion law, SB8, will have devastating effects on online speech. 

The law creates a cadre of bounty hunters who can use the courts to punish and silence anyone whose online advocacy, education, and other speech about abortion draws their ire. It will undoubtedly lead to a torrent of private lawsuits against online speakers who publish information about abortion rights and access in Texas, with little regard for the merits of those lawsuits or the First Amendment protections accorded to the speech. Individuals and organizations providing basic educational resources, sharing information, identifying locations of clinics, arranging rides and escorts, fundraising to support reproductive rights, or simply encouraging women to consider all their options—now have to consider the risk that they might be sued for merely speaking. The result will be a chilling effect on speech and a litigation cudgel that will be used to silence those who seek to give women truthful information about their reproductive options. 


SB8, also known as the Texas Heartbeat Act, encourages private persons to file lawsuits against anyone who “knowingly engages in conduct that aids or abets the performance or inducement of an abortion.” It doesn’t matter whether that person “knew or should have known that the abortion would be performed or induced in violation of the law,” that is, the law’s new and broadly expansive definition of illegal abortion. And you can be liable even if you simply intend to help, regardless, apparently, of whether an illegal abortion actually resulted from your assistance.  

And although you may defend a lawsuit if you believed the doctor performing the abortion complied with the law, it is really hard to do so. You must prove that you conducted a “reasonable investigation,” and as a result “reasonably believed” that the doctor was following the law. That’s a lot to do before you simply post something to the internet, and of course you will probably have to hire a lawyer to help you do it.  

SB8 is a “bounty law”: it doesn’t just allow these lawsuits, it provides a significant financial incentive to file them. It guarantees that a person who files and wins such a lawsuit will receive at least $10,000 for each abortion that the speech “aided or abetted,” plus their costs and attorney’s fees. At the same time, SB8 may often shield these bounty hunters from having to pay the defendant’s legal costs should they lose. This removes a key financial disincentive they might have had against bringing meritless lawsuits. 

Moreover, lawsuits may be filed up to six years after the purported “aiding and abetting” occurred. And the law allows for retroactive liability: you can be liable even if your “aiding and abetting” conduct was legal when you did it, if a later court decision changes the rules. Together this creates a ticking time bomb for anyone who dares to say anything that educates the public about, or even discusses, abortion online.

Given this legal structure, and the law’s vast application, there is no doubt that we will quickly see the emergence of anti-choice trolls: lawyers and plaintiffs dedicated to using the courts to extort money from a wide variety of speakers supporting reproductive rights.

And unfortunately, it’s not clear when speech encouraging someone to commit a crime, or instructing them how to do so, rises to the level of “aiding and abetting” unprotected by the First Amendment. Under the leading case on the issue, it is a fact-intensive analysis, which means that defending the case on First Amendment grounds may be arduous and expensive.

The result of all of this is the classic chilling effect: many would-be speakers will choose not to speak at all for fear of having to defend even the meritless lawsuits that SB8 encourages. And many speakers will choose to take down their speech if merely threatened with a lawsuit, rather than risk the law’s penalties if they lose or take on the burdens of a fact-intensive case even if they were likely to win it. 

The law does include an empty clause providing that it may not be “construed to impose liability on any speech or conduct protected by the First Amendment of the United States Constitution, as made applicable to the states through the United States Supreme Court’s interpretation of the Fourteenth Amendment of the United States Constitution.” While that sounds nice, it offers no real protection—you can already raise the First Amendment in any case, and you don’t need the Texas legislature to give you permission. Rather, that clause is included to try to insulate the law from a facial First Amendment challenge—a challenge to the mere existence of the law rather than its use against a specific person. In other words, the drafters are hoping to ensure that, even if the law is unconstitutional—which it is—each individual plaintiff will have to raise the First Amendment issues on their own, and bear the exorbitant costs—both financial and otherwise—of having to defend the lawsuit in the first place.

One existing free speech bulwark—47 U.S.C. § 230 (“Section 230”)—will provide some protection here, at least for the online intermediaries upon which many speakers depend. Section 230 immunizes online intermediaries from state law liability arising from the speech of their users, so it provides a way for online platforms and other services to get early dismissals of lawsuits against them based on their hosting of user speech. So although a user will still have to fully defend a lawsuit arising, for example, from posting clinic hours online, the platform they used to share that information will not. That is important, because without that protection, many platforms would preemptively take down abortion-related speech for fear of having to defend these lawsuits themselves. As a result, even a strong-willed abortion advocate willing to risk the burdens of litigation in order to defend their right to speak will find their speech limited if weak-kneed platforms refuse to publish it. This is exactly the way Section 230 is designed to work: to reduce the likelihood that platforms will censor in order to protect themselves from legal liability, and to enable speakers to make their own decisions about what to say and what risks to bear with their speech. 

But a powerful and dangerous chilling effect remains for users. Texas’s anti-abortion law is an attack on many fundamental rights, including the First Amendment rights to advocate for abortion rights, to provide basic educational information, and to counsel those considering reproductive decisions. We will keep a close eye on the lawsuits the law spurs and the chilling effects that accompany them. If you experience such censorship, please contact info@eff.org.

Victory! Federal Trade Commission Bans Stalkerware Company from Conducting Business

Wed, 09/01/2021 - 7:50pm

In a major victory in our campaign to stop stalkerware, the Federal Trade Commission (FTC) today banned the Android app company Support King and its CEO Scott Zuckerman, developers of SpyFone, from the surveillance business. The stalkerware app secretly “harvested and shared data on people’s physical movements, phone use and online activities through a hidden device hack,” according to the FTC. The app sold real-time access to surveillance, allowing stalkers and domestic abusers to track potential targets of their violence.

EFF applauds this decision by the FTC and the message it sends to those who facilitate by technical means the behavior of stalkers and domestic abusers. For too long, this nascent industry has been allowed to thrive as an underbelly to the much larger and diverse app ecosystem. With the FTC now turning its focus to this industry, victims of stalkerware can begin to find solace in the fact that regulators are beginning to take their concerns seriously.

The FTC case against Support King is the first to outright ban a stalkerware company and comes two years after EFF and its Director of Cybersecurity Eva Galperin launched the Coalition Against Stalkerware to unite and mobilize security software companies and advocates for domestic abuse victims in actions to combat and shut down malicious stalkerware apps. 

Stalkerware, a type of commercially-available surveillance software, is installed on phones without device users’ knowledge or consent to secretly spy on them. The apps track victims’ locations and allow abusers to read their text messages, monitor phone calls, see photos, videos, and web browsing, and much more. It’s being used all over the world to intimidate, harass, and harm victims, and is a favorite tool for stalkers and abusive spouses or ex-partners.

By using security vulnerabilities that may not yet be known to the public (known as zero-day exploits), stalkerware developers subvert the normal security mechanisms built into the mobile operating system and are able to deeply embed their malicious code into the device.

In a proposed settlement, the FTC bans Support King and Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” and requires them “to delete any information illegally collected from their stalkerware apps.” The ban sets an important precedent for developers who would consider building apps that spy on and invade the privacy of their victims. The proposal will be subject to public comment for 30 days after publication in the Federal Register, after which the FTC will decide whether to make it final.

In 2019, EFF was one of the ten organizations that founded the Coalition Against Stalkerware, a group of security companies, non-profit organizations, and academic researchers that support survivors of domestic abuse by working together to address technology-enabled abuse and raise awareness about the threat posed by stalkerware. Among its early achievements are creating an industry-wide definition of stalkerware, encouraging research into the proliferation of stalkerware, and convincing anti-virus companies to detect and report the presence of stalkerware as malicious or unwanted programs.

Court Ruling Against Locast Gets the Law Wrong; Lets Giant Broadcast Networks Control Where and How People Watch Free TV

Wed, 09/01/2021 - 2:07pm

In a blow to millions of people who rely on local television broadcasts, a federal court ruled yesterday that the nonprofit TV-streaming service Locast is not protected by an exception to copyright created by Congress to ensure that every American has access to their local stations. Locast is evaluating the ruling and considering its next steps.

The ruling, by a judge in the U.S. District Court for the Southern District of New York, does the opposite of what Congress intended: it threatens people’s access to local news and vital information during a global pandemic and a season of unprecedented natural disasters. What’s more, it treats copyright law not as an engine of innovation benefiting the public but as a moat protecting the privileged position of the four giant broadcast networks: ABC, CBS, NBC, and Fox.

Locast, operated by Sports Fans Coalition NY, Inc. (SFCNY), enables TV viewers to receive local over-the-air programming—which broadcasters must by law make available for free—using set-top boxes, smartphones, or other devices of their choice. Over three million people use Locast to access local TV, including many who can’t afford cable and can’t pick up their local stations with an antenna. The broadcast networks sued SFCNY, and its founder and chairman David Goodfriend, arguing for the right to control where and how people can watch their free broadcasts.

EFF joined with attorneys at Orrick, Herrington & Sutcliffe to defend SFCNY. We told the court that Locast is protected by an exception to copyright law, put in place by Congress, that enables nonprofits to retransmit broadcast TV, so communities can access local stations that offer news, foreign-language programming, and local sports. Under that exception, there’s no infringement if nonprofits retransmit TV broadcasts without any commercial purpose, and without charge except to cover their costs. Locast viewers can voluntarily donate to SFCNY for this purpose.

Congress made the exemption so that Americans can access local broadcast stations—and expanding such access is exactly what Locast does. But the court accepted a bogus argument by the giant networks, and ruled that user contributions to Locast were “charges” and can’t be used to expand access so more Americans can receive their local channels via streaming. The ruling reads the law in an absurdly narrow way that defeats Congress’s intention to allow nonprofits to step in and provide communities access to broadcast TV, a vital source of local news and cultural programming for millions of people. This matters now more than ever, with communities across the country at risk because of COVID-19, devastating fires, and deadly hurricanes.

Make no mistake, this case demonstrates once again how giant entertainment companies use copyright to control when, where, and how people can receive their local TV broadcasts, and drive people to buy expensive pay-TV services to get their local news and sports. We are disappointed that the court is enabling this callous profiteering that tramples on Congress’s intent to ensure local communities have access to news that’s important to people regardless of their ability to pay. The court made a mistake, and Locast is considering its options.
