EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

VICTORY: You Can Now Make Your Venmo Friends List Private. Here’s How.

Mon, 06/07/2021 - 7:00pm

It took two and a half years and one national security incident, but Venmo did it, folks: users now have privacy settings to hide their friends lists.

EFF first pointed out the problem with Venmo friends lists in early 2019 with our "Fix It Already" campaign. While Venmo offered a setting to make your payments and transactions private, there was no option to hide your friends list. No matter how many settings you tinkered with, Venmo would show your full friends list to anyone else with a Venmo account. That meant an effectively public record of the people you exchange money with regularly, along with whoever the app might have automatically imported from your phone contact list or even your Facebook friends list. The only way to make a friends list “private” was to manually delete friends one at a time; turn off auto-syncing; and, when the app wouldn’t even let users do that, monitor for auto-populated friends and remove them one by one, too.

This public-no-matter-what friends list design was a privacy disaster waiting to happen, and it happened to the President of the United States. Using the app’s search tool and all those public friends lists, BuzzFeed News found President Biden’s account in less than 10 minutes, as well as those of members of the Biden family, senior staffers, and members of Congress. This appears to have been the last straw for Venmo: after more than two years of effectively ignoring calls from EFF, Mozilla, and others, the company has finally started to roll out privacy settings for friends lists.

As we’ve noted before, this is the bare minimum. Providing more privacy options so users can opt out of the publication of their friends list is a step in the right direction. But what Venmo—and any other payment app—must do next is make privacy the default for transactions and friends lists, not just an option buried in the settings.

In the meantime, follow these steps to lock down your Venmo account:

  1. Tap the three lines in the top right corner of your home screen and select Settings near the bottom. From the settings screen, select Privacy and then Friends List. (If the Friends List option does not appear, try updating your app, restarting it, or restarting your phone.)
       

  2. By default, your friends list is set to Public, meaning anyone on Venmo can see it.


  3. Change the privacy setting to Private. If you do not wish to appear in your friends’ own friends lists—after all, they may not set theirs to private—click the toggle off at the bottom.


  4. Back on the Privacy settings page, set your default privacy option for all future payments to Private.


  5. Now select Past Transactions.


  6. Select Change All to Private.

  7. Confirm the change and click Change to Private.


  8. Now go all the way back to the main settings page, and select Friends & social.


  9. From here, you may see options to unlink your Venmo account from your Facebook account, Facebook friends list, and phone contact list. (Venmo may not give you all of these options if, for example, you originally signed up for Venmo with your Facebook account.) Click all the toggles off if possible.

Obviously your specific privacy preferences are up to you, but following the steps above should protect you from the most egregious snafus that the company has caused over the years with its public-by-default—or entirely missing—privacy settings. Although it shouldn't take a national security risk to force a company to focus on privacy, we're glad that Venmo has finally provided friends list privacy options.

Ring Changed How Police Request Door Camera Footage: What it Means and Doesn’t Mean

Mon, 06/07/2021 - 6:55pm

Amazon Ring has announced that it will change the way police can request footage from millions of doorbell cameras in communities across the country. Rather than the current system, in which police can send automatic bulk email requests to individual Ring users in an area of interest of up to half a square mile, police will now publicly post their requests to Ring’s accompanying Neighbors app. Users of that app will see a “Request for Assistance” on their feed, unless they opt out of seeing such requests, and then Ring customers in the area of interest (still up to half a square mile) can respond by reviewing and providing their footage.

Because only a portion of Ring users are also Neighbors users, and some of them may opt out of receiving police requests, this new system may reduce the number of people who receive police requests, though we wonder whether Ring will now push more of its users to register for the app.

This new model also may increase transparency over how police officers use and abuse the Ring system, especially as to people of color, immigrants, and protesters. Previously, in order to learn about police requests to Ring users, investigative reporters and civil liberties groups had to file public records requests with police departments, which consumed significant time and often yielded little information from recalcitrant agencies. Through this labor-intensive process, EFF revealed that the Los Angeles Police Department targeted Black Lives Matter protests in May and June 2020 with bulk Ring requests for doorbell camera footage that likely included First Amendment protected activities. Now, users will be able to see every digital request a police department has made to residents for Ring footage by scrolling through a department’s public page on the app.

But making it easier to monitor historical requests can only do so much. It certainly does not address the larger problem with Ring and Neighbors: the network is predicated on perpetuating irrational fear of neighborhood crime, often yielding disproportionate scrutiny against people of color, all for the purposes of selling more cameras. Ring does so through police partnerships, which now encompass 1 in every 10 police departments in the United States. At their core, these partnerships facilitate bulk requests from police officers to Ring customers for their camera footage, built on a growing Ring surveillance network of millions of public-facing cameras. EFF adamantly opposes these Ring-police partnerships and advocates for their dissolution.

Nor does new transparency about bulk officer-to-resident requests through Ring erase the long history of secrecy about these shady partnerships. For example, Amazon has provided free Ring cameras to police, and limited what police were allowed to say about Ring, even including the existence of the partnership. 

Notably, Amazon has moved Ring functionality to its Neighbors app. Neighbors is a problematic technology. Like its peers Nextdoor and Citizen, it encourages its users to report supposedly suspicious people, often resulting in racially biased posts that endanger innocent residents and passersby.

Ring’s small reforms invite bigger questions: Why does a customer-focused technology company need to develop and maintain a feature for law enforcement in the first place? Why must Ring and other technology companies continue to offer police free features to facilitate surveillance and the transfer of information from users to the government?

Here’s some free advice for Ring: Want to make your product less harmful to vulnerable populations? Stop facilitating their surveillance and harassment at the hands of police. 

Maryland and Montana Pass the Nation’s First Laws Restricting Law Enforcement Access to Genetic Genealogy Databases

Mon, 06/07/2021 - 5:42pm

Last week, Maryland and Montana passed laws requiring judicial authorization to search consumer DNA databases in criminal investigations. These are welcome and important restrictions on forensic genetic genealogy searching (FGGS)—a law enforcement technique that has become increasingly common and impacts the genetic privacy of millions of Americans.

Consumer personal genetics companies like Ancestry, 23andMe, GEDMatch, and FamilyTreeDNA host the DNA data of millions of Americans. The data users share with consumer DNA databases is extensive and revealing. The genetic profiles stored in those databases are made up of more than half a million single nucleotide polymorphisms (“SNPs”) that span the entirety of the human genome. These profiles can not only reveal family members and distant ancestors, but can also divulge a person’s propensity for various diseases like breast cancer or Alzheimer’s, and can even predict addiction and drug response. Some researchers have even claimed that human behaviors such as aggression can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces; to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply. Companies can even create images of what they think a person looks like based just on their genetic data.

Law enforcement regularly accesses this intensely private and sensitive data too, using FGGS. Just like consumers, officers take advantage of the genetics companies’ powerful algorithms to try to identify familial relationships between an unknown forensic sample and existing site users. These familial relationships can then lead law enforcement to possible suspects. However, in using FGGS, officers are rifling through the genetic data of millions of Americans who are not suspects in the investigation and have no connection to the crime whatsoever. This is not how criminal investigations are supposed to work. As we have argued before, the language of the Fourth Amendment, which requires probable cause for every search and particularity for every warrant, precludes dragnet warrantless searches like these. A technique’s usefulness for law enforcement does not outweigh people’s privacy interests in their genetic data.

Up until now, nothing has prevented law enforcement from rifling through the genetic data of millions of unsuspecting and innocent Americans. The new laws in Maryland and Montana should change that.

Here’s What the New Laws Require

Maryland:

Maryland’s law is very broad and covers much more than FGGS. It requires judicial authorization for FGGS and places strict limits on when and under what conditions law enforcement officers may conduct FGGS. For example, FGGS may only be used in cases of rape, murder, felony sexual offenses, and criminal acts that present “a substantial and ongoing threat to public safety or national security.” Before officers can pursue FGGS, they must certify to the court that they have already tried searching existing, state-run criminal DNA databases like CODIS, that they have pursued other reasonable investigative leads, and that those searches have failed to identify anyone. And FGGS may only be used with consumer databases that have provided explicit notice to users about law enforcement searches and sought consent from those users. These meaningful restrictions ensure that FGGS does not become the default first search conducted by law enforcement and limit its use to crimes that society has already determined are the most serious.

The Maryland law regulates other important aspects of genetic investigations as well. For example, it places strict limits on and requires judicial oversight for the covert collection of DNA samples from both potential suspects and their genetic relatives, something we have challenged several times in the courts. This is a necessary protection because officers frequently and secretly collect and search DNA from free people in criminal investigations involving FGGS. We cannot avoid shedding carbon copies of our DNA, and we leave it behind on items in our trash, an envelope we lick to seal, or even the chairs we sit on, making it easy for law enforcement to collect our DNA without our knowledge. We have argued that the Fourth Amendment precludes covert collection, but until courts have a chance to address this issue, statutory protections are an important way to reinforce our constitutional rights.

The new Maryland law also mandates informed consent in writing before officers can collect DNA samples from third parties and precludes covert collection from someone who has refused to provide a sample. It requires destruction of DNA samples and data when an investigation ends. It also requires licensing for labs that conduct DNA sequencing used for FGGS and for individuals who perform genetic genealogy. It creates criminal penalties for violating the statute and a private right of action with liquidated damages so that people can enforce the law through the courts. It requires the governor’s office to report annually and publicly on law enforcement use of FGGS and covert collection. Finally, it states explicitly that criminal defendants may use the technique as well to support their defense (but places similar restrictions on use). All of these requirements will help to rein in the unregulated use of FGGS.

Montana:

In contrast to Maryland’s 16-page comprehensive statute, Montana’s is only two pages and less clearly drafted. However, it still offers important protections for people identified through FGGS.

Montana’s statute requires a warrant before government entities can use familial DNA or partial match search techniques on either consumer DNA databases or the state’s criminal DNA identification index.[1] The statute defines a “familial DNA search” broadly as a search that uses “specialized software to detect and statistically rank a list of potential candidates in the DNA database who may be a close biological relative to the unknown individual contributing the evidence DNA profile.” This is exactly what the software used by consumer genetic genealogy sites like GEDmatch and FamilyTreeDNA does. The statute also applies to companies like Ancestry and 23andMe that do their own genotyping in-house, because it covers “lineage testing,” which it defines as “[SNP] genotyping to generate results related to a person's ancestry and genetic predisposition to health-related topics.”

The statute also requires a warrant for other kinds of searches of consumer DNA databases, like when law enforcement is looking for a direct user of the consumer DNA database. Unfortunately, though, the statute includes a carve-out to this warrant requirement if “the consumer whose information is sought previously waived the consumer’s right to privacy,” but does not explain how an individual consumer may waive their privacy rights. There is no carve-out for familial searches.

By creating stronger protections for people who are identified through familial searches but who haven’t uploaded their own data, Montana’s statute recognizes an important point that we and others have been making for a few years—you cannot waive your privacy rights in your genetic information when someone else has control over whether your shared DNA ends up in a consumer database.

It is unfortunate, though, that this seems to come at the expense of existing users of consumer genetics services. Montana should have extended warrant protections to everyone whose DNA data ends up in a consumer DNA database. A bright-line rule would have been better for privacy and perhaps easier for law enforcement to implement, since it is unclear how law enforcement will determine whether someone waived their privacy rights in advance of a search.

We Need More Legal Restrictions on FGGS

We need more states—and the federal government—to pass restrictions on genetic genealogy searches. Some companies, like Ancestry and 23andMe, prevent direct access to their databases and have fought law enforcement demands for data. However, other companies like GEDmatch and FamilyTreeDNA have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country. By 2018, FGGS had already been used in at least 200 cases. Officers never sought a warrant or any legal process at all in any of those cases because there were no state or federal laws explicitly requiring them to do so.

While EFF has argued FGG searches are dragnets and should never be allowed, even with a warrant, Montana and Maryland’s laws are still a step in the right direction, especially where, as in Maryland, an outright ban previously failed. Our genetic data is too sensitive and important to be left to the whims of private companies to protect and the unbridled discretion of law enforcement to search.

[1] The restriction on warrantless familial and partial match searching of government-run criminal DNA databases is particularly welcome. Most states do not explicitly limit these searches (Maryland is an exception and explicitly bans this practice), even though many, including a federal government working group, have questioned their efficacy.

Security Tips for Online LGBTQ+ Dating

Mon, 06/07/2021 - 3:58pm

Dating is risky. Aside from the typical worries of possible rejection or lack of romantic chemistry, LGBTQIA people often have added safety considerations to keep in mind. Sometimes staying in the proverbial closet is a matter of personal security. Even if someone is open with their community about being LGBTQ+, they can be harmed by oppressive governments, bigoted law enforcement, and individuals with hateful beliefs. So here’s some advice for staying safe while online dating as an LGBTQIA+ person:

Step One: Threat Modeling

The first step is making a personal digital security plan. You should start by looking at your own security situation from a high level. This is often called threat modeling and risk assessment. Simply put, this is taking inventory of the things you want to protect and what adversaries or risks you might be facing. In the context of online dating, your protected assets might include details about your sexuality, gender identity, contacts of friends and family, HIV status, political affiliation, etc.

Let's say that you want to join a dating app, chat over the app, exchange pictures, meet someone safely, and avoid stalking and harassment. Threat modeling is how you assess what you want to protect and from whom. 

In this post, we touch on a few considerations for people in countries where homosexuality is criminalized, where risks may include targeted harassment by law enforcement. But this guide is by no means comprehensive. Refer to materials by LGBTQ+ organizations in those countries for specific tips on your threat model.

Securely Setting Up Dating Profiles

When making a new dating account, make sure to use a unique email address to register. Often you will need to verify the registration process through that email account, so it’s likely you’ll need to provide a legitimate address. Consider creating an email address strictly for that dating app. Oftentimes there are ways to discover if an email address is associated with an account on a given platform, so using a unique one can prevent others from potentially knowing you’re on that app. Alternatively, you might use a disposable temporary email address service. But if you do so, keep in mind that you won’t be able to access it in the future, such as if you need to recover a locked account. 

The same logic applies to using phone numbers when registering for a new dating account. Consider using a temporary or disposable phone number. While this can be more difficult than using your regular phone number, there are plenty of free and paid virtual telephone services available that offer secondary phone numbers. For example, Google Voice is a service that offers a secondary phone number attached to your normal one, registered through a Google account. If your higher security priority is to abstain from giving data to a privacy-invasive company like Google, a “burner” pay-as-you-go phone service like Mint Mobile is worth checking out. 

When choosing profile photos, be mindful of images that might accidentally give away your location or identity. Even the smallest clues in an image can expose its location. Some people use pictures with relatively empty backgrounds, or taken in places they don’t go to regularly.

Make sure to check out the privacy and security sections in your profile settings menu. You can usually configure how others can find you, whether you’re visible to others, whether location services are on (that is, when an app is able to track your location through your phone), and more. Turn off anything that gives away your location or other information, and later you can selectively decide which features to reactivate, if any. More mobile phone privacy information can be found in this Surveillance Self-Defense guide.

Communicating via Phone, Email, or In-App Messaging

Generally speaking, using an end-to-end encrypted messaging service is the best way to go for secure texting. For some options, like Signal or WhatsApp, you may be able to use a secondary phone number to keep your “real” phone number private.

For phone calls, you may want to use a virtual phone service that allows you to screen calls, use secondary phone numbers, block numbers, and more. These aren’t always free, but research can bring up “freemium” versions that give you free access to limited features.

Be wary of messaging features within apps that offer deletion options or disappearing messages, like Snapchat. Many images and messages sent through these apps are never truly deleted, and may still exist on the company’s servers. And even if you send someone a message that self-deletes or notifies you if they take a screenshot, that person can still take a picture of it with another device, bypassing any notifications. Also, Snapchat has a map feature that shows live public posts around the world as they go up. With diligence, someone could determine your location by tracing any public posts you make through this feature.

Sharing Photos

If the person you’re chatting with has earned a bit of your trust and you want to share pictures with them, consider not just what they can see about you in the image itself, as described above, but also what they can learn about you by examining data embedded in the file.

EXIF metadata lives inside an image file and describes the geolocation where it was taken, the device it was made with, the date, and more. Although some apps have gotten better at automatically withholding EXIF data from uploaded images, you should still manually remove it from any images you share with others, especially if you send them directly over phone messaging.

One quick way is to send the image to yourself on Signal messenger, which automatically strips EXIF data. When you search for your own name in contacts, a “Note to Self” option will come up, giving you a chat screen for sending things to yourself:

Screenshot of Signal's Note to Self feature

Before sharing your photo, you can verify the results by using a tool that displays EXIF data, checking the image file both before and after removal.
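If you prefer to strip the metadata yourself, the sketch below shows one way to do it in Python with the Pillow imaging library. This is only an illustration, not a feature of any dating app; the filenames are placeholders, and it assumes you have Pillow installed (pip install Pillow).

    # Minimal sketch: remove EXIF metadata by copying only the pixel data into a new image.
    # "original.jpg" and "clean.jpg" are placeholder filenames.
    from PIL import Image

    image = Image.open("original.jpg")
    pixels = list(image.getdata())

    # A brand-new image gets the pixels but none of the original file's metadata.
    clean = Image.new(image.mode, image.size)
    clean.putdata(pixels)
    clean.save("clean.jpg")

    # Quick check: this should print an empty mapping for the cleaned copy.
    print(dict(Image.open("clean.jpg").getexif()))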

For some people, it might be valuable to use a watermarking app to add your username or some kind of signature to images. This can verify who you are to others and prevent anyone from using your images to impersonate you. There are many free and mostly-free options in iPhone and Android app stores. Consider a lightweight version that allows you to easily place text on an image and lets you screenshot the result. Keep in mind that watermarking a picture is a quick way to identify yourself, which in itself is a trade-off.

watermark example overlaid on an image of the lgbtq+ pride flag
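If you would rather not install a separate watermarking app, the same Pillow library can overlay text on an image. The snippet below is just a sketch under that assumption; the handle, position, opacity, and filenames are placeholders you would adapt.

    # Minimal sketch: overlay a semi-transparent text watermark using Pillow.
    # "photo.jpg" and "@example_handle" are placeholders.
    from PIL import Image, ImageDraw

    image = Image.open("photo.jpg").convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))

    draw = ImageDraw.Draw(overlay)
    # Draw the handle near the bottom-left corner in semi-transparent white.
    draw.text((10, image.size[1] - 30), "@example_handle", fill=(255, 255, 255, 160))

    # Composite the overlay onto the photo and save a flattened JPEG copy.
    watermarked = Image.alpha_composite(image, overlay)
    watermarked.convert("RGB").save("photo_watermarked.jpg")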

Sexting Safely

Much of what we’ve already gone over will step up your security when it comes to sexting, but here are some extra precautions:

Seek clearly communicated consent between you and romantic partners about how intimate pictures can be shared or saved. This is great non-technical security at work. If anyone else is in an image you want to share, make sure you have their consent as well. Also, be thoughtful as to whether or not to include your face in any images you share.

As we mentioned above, your location can be determined from public posts you make through Snapchat’s map feature.

For video chatting with a partner, consider a service like Jitsi, which allows temporary rooms, requires no registration, and is designed with privacy in mind. Many other services require account registration and are not built with privacy in mind.

Meeting Someone AFK

Say you’ve taken all the above precautions, someone online has gained your trust, and you want to meet them away-from-keyboard and in real life. Always meet first somewhere public and occupied with other people. Even better, meet in an area more likely to be accepting of LGBTQIA+ people. Tell a friend beforehand all the details about where you’re going and who you are meeting, and agree on a time when you will check back in to let them know you’re OK.

If you’re living in one of the 69 countries where homosexuality is criminalized, make sure to check in with local advocacy groups about your area. Knowing your rights as a citizen will help keep you safe if you’re stopped by law enforcement.

Privacy and Security is a Group Effort

Although the world is often hostile to non-normative expressions of love and identity, your personal security, online and off, is much better supported when you include the help of others that you trust. Keeping each other safe, accountable, and cared for gets easier when you have more people involved. A network is always stronger when every node on it is fortified against potential threats. 

Happy Pride Month—keep each other safe.

Global Law Enforcement Convention Weakens Privacy & Human Rights

Mon, 06/07/2021 - 10:25am

The Council of Europe Cybercrime Committee's (T-CY) recent decision to approve new international rules for law enforcement access to user data without strong privacy protections is a blow for global human rights in the digital age. The final version of the draft Second Additional Protocol to the Council of Europe’s (CoE) widely adopted Budapest Cybercrime Convention, approved by the T-CY drafting committee on May 28th, places few limits on law enforcement data collection. As such, the Protocol can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections and weaken everyone's right to privacy and free expression across the globe. 

The Protocol now heads to the CoE’s Parliamentary Assembly (PACE) for its opinion. PACE’s Committee on Legal Affairs and Human Rights can recommend further amendments, and decide which ones will be adopted by the Standing Committee or the Plenary. Then, the Committee of Ministers will vote on whether to integrate PACE’s recommendations into the final text. The CoE’s plan is to finalize the Protocol’s adoption by November. If adopted, the Protocol will open for signature by any country that has signed the Budapest Convention, likely sometime before 2022.

The next step for countries comes at the signature stage, when they can ask to reserve the right not to abide by certain provisions in the Protocol, especially Article 7 on direct cooperation between law enforcement and companies holding user data.

If countries sign the Protocol as it stands and in its entirety, it will reshape how state police access digital data from Internet companies based in other countries by prioritizing law enforcement demands, sidestepping judicial oversight, and lowering the bar for privacy safeguards. 

CoE’s Historical Commitment to Transparency Conspicuously Absent

While transparency and a strong commitment to engaging with external stakeholders have been hallmarks of CoE treaty development, the new Protocol’s drafting process lacked robust engagement with civil society. The T-CY adopted internal rules that have fostered a largely opaque process, led by public safety and law enforcement officials. T-CY’s periodic consultations with external stakeholders and the public have lacked important details, offered short response timelines, and failed to meaningfully address criticisms.

In 2018, nearly 100 public interest groups called on the CoE to allow for expert civil society input on the Protocol’s development. In 2019, the European Data Protection Board (EDPB) similarly called on T-CY to ensure “early and more proactive involvement of data protection authorities” in the drafting process, a call it felt the need to reiterate earlier this year. And when presenting the Protocol’s draft text for final public comment, T-CY provided only 2.5 weeks, a timeframe that the EDPB noted “does not allow for a timely and in-depth analysis” from stakeholders. That version of the Protocol also failed to include the explanatory text for the data protection safeguards, which was only published later, in the final version of May 28, without public consultation. Even other branches of the CoE, such as its data protection committee, have found it difficult to provide meaningful input under these conditions. 

Last week, over 40 civil society organizations called on CoE to provide an additional opportunity to comment on the final text of the Protocol. The Protocol aims to set a new global standard across countries with widely varying commitments to privacy and human rights. Meaningful input from external stakeholders including digital rights organizations and privacy regulators is essential. Unfortunately, CoE refused and will likely vote to open the Protocol for state signatures starting in November.

With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.

Eroding Global Protection for Online Anonymity 

The new Protocol provides few safeguards for online anonymity, posing a threat to the safety of activists, dissidents, journalists, and the free expression rights of everyday people who go online to comment on and criticize politicians and governments. When Internet companies turn subscriber information over to law enforcement, the real-world consequences can be dire. Anonymity also plays an important role in facilitating opinion and expression online and is necessary for activists and protestors around the world. Yet the new Protocol fails to acknowledge the important privacy interests it places in jeopardy and, by ensuring most of its safeguards are optional, permits police access to sensitive personal data without systematic judicial supervision. 

As a starting point, the new Protocol’s explanatory text claims that “subscriber information … does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” deeming it less intrusive than other categories of data.

This characterization is directly at odds with growing recognition that police frequently use subscriber data access to identify deeply private anonymous communications and activity. Indeed, the Court of Justice of the European Union (CJEU) recently held that letting states associate subscriber data with anonymous digital activity can constitute a ‘serious’ interference with privacy. The Protocol’s attempt to paint identification capabilities as non-intrusive even conflicts with the CoE’s own European Court of Human Rights (ECtHR). By encoding the opposite conclusion in an international protocol, the new explanatory text can deter future courts from properly recognizing the importance of online anonymity. As the ECtHR has held, doing so would “deny the necessary protection to information which might reveal a good deal about the online activity of an individual, including sensitive details of his or her interests, beliefs and intimate lifestyle.”

Articles 7 and 8 of the Protocol in particular adopt intrusive police powers while requiring few safeguards. Under Article 7, states must clear all legal obstacles to “direct cooperation” between local companies and law enforcement. Any privacy laws that prevent Internet companies from voluntarily identifying customers to foreign police without a court order are incompatible with Article 7 and must be amended. “Direct cooperation” is intended to be the primary means of accessing subscriber data, but Article 8 provides a supplementary power to force disclosure from companies that refuse to cooperate. While Article 8 does not require judicial supervision of police, countries with strong privacy protections may continue relying on their own courts when forcing a local service provider to identify customers. Both Articles 7 and 8 also allow countries to screen and refuse any subscriber data demands that might threaten a state’s essential interests. But these screening mechanisms also remain optional, and refusals are to be “strictly limited,” with the need to protect private data invoked only in “exceptional cases.” 

By leaving most privacy and human rights protections to each state’s discretion, Articles 7 and 8 permit access to sensitive identification data under conditions that the ECtHR described as “offer[ing] virtually no protection from arbitrary interference … and no safeguards against abuse by State officials.”

The Protocol’s drafters have resisted calls from civil society and privacy regulators to require some form of judicial supervision in Articles 7 and 8. Some police agencies object to reliance on the courts, arguing that judicial supervision leads to slower results. But systemic involvement of the courts is a critical safeguard when access to sensitive personal data is at stake. The Office of the Privacy Commissioner of Canada put it cogently: “Independent judicial oversight may take time, but it’s indispensable in the specific context of law enforcement investigations.” Incorporating judicial supervision as a minimum threshold for cross-border access is also feasible. Indeed, a majority of states in T-CY’s own survey require prior judicial authorization for at least some forms of subscriber data in their respective national laws. 

At a minimum, the new Protocol text is flawed for its failure to recognize the deeply private nature of anonymous online activity and the serious threat posed to human rights when State officials are allowed open-ended access to identification data. Granting states this access makes the world less free and seriously threatens free expression. Article 7’s emphasis on non-judicial ‘cooperation’ between police and Internet companies poses a particularly insidious risk, and must not form part of the final adopted Convention.

Imposing Optional Privacy Standards

Article 14, which was recently publicized for the first time, is intended to provide detailed safeguards for personal information. Many of these protections are important, imposing limits on the treatment of sensitive data, the retention of personal data, and the use of personal data in automated decision-making, particularly in countries without data protection laws. The detailed protections are complex, and civil society groups continue to unpack their full legal impact. That being said, some shortcomings are immediately evident.

Some of Article 14’s protections actively undermine privacy—for example, paragraph 14.2.a prohibits signatories from imposing any additional “generic data protection conditions” when limiting the use of personal data. Paragraph 14.1.d also strictly limits when a country’s data protection laws can prevent law enforcement-driven personal data transfers to another country. 

More generally, and in stark contrast to the Protocol’s lawful access obligations, the detailed privacy safeguards encoded in Article 14 are not mandatory and can be ignored if countries have other arrangements in place (Article 14.1). States can rely on a wide variety of agreements to bypass the Article 14 protections. The OECD is currently negotiating an agreement that might systematically displace the Article 14 protections and, under the United States Clarifying Lawful Overseas Use of Data (CLOUD) Act, the U.S. executive branch can enter into “agreements” with other states to facilitate law enforcement transfers. Paragraph 14.1.c even contemplates informal agreements that are neither binding, nor even public, meaning that countries can secretly and systematically bypass the Article 14 safeguards. No real obligations are put in place to ensure these alternative arrangements provide an adequate or even sufficient level of privacy protection. States can therefore rely on the Protocol’s law enforcement powers while using side agreements to bypass its privacy protections, a particularly troubling development given the low data protection standards of many anticipated signatories. 

The Article 14 protections are also problematic because they appear to fall short of the minimum data protection that the CJEU has required. The full list of protections in Article 14, for example, resembles that inserted by the European Commission into its ‘Privacy Shield’ agreement. Internet companies relied upon the Privacy Shield to facilitate economic transfers of personal data from the European Union (EU) to the United States until the CJEU invalidated the agreement in 2020, finding its privacy protections and remedies insufficient. Similarly, clause 14.6 limits the use of personal data in purely automated decision-making systems that will have significant adverse effects on relevant individual interests. But the CJEU has also found that an international agreement for transferring air passenger data to Canada for public safety objectives was inconsistent with EU data protection guarantees despite the inclusion of a similar provision.

Conclusion

These and other substantive problems with the Protocol are concerning. Cross-border data access is rapidly becoming common in even routine criminal investigations, as every aspect of our lives continues its steady migration to the digital world. Instead of baking robust human rights and privacy protections into cross-border investigations, the Protocol discourages court oversight, renders most of its safeguards optional, and generally weakens privacy and freedom of expression.

Facebook's Policy Shift on Politicians Is a Welcome Step

Mon, 06/07/2021 - 9:13am

We are happy to see the news that Facebook is putting an end to a policy that has long privileged the speech of politicians over that of ordinary users. The policy change, which was reported on Friday by The Verge, is something that EFF has been pushing for since as early as 2019.

Back then, Facebook executive Nick Clegg, a former politician himself, famously pondered: "Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be." 

Perhaps Clegg had a point—we’ve long said that companies are ineffective arbiters of what the world says—but that hardly justifies holding politicians to a lower standard than the average person. International standards will consider the speaker, but only as one of many factors. For example, the United Nations’ Rabat Plan of Action outlines a six-part threshold test that takes into account “(1) the social and political context, (2) status of the speaker, (3) intent to incite the audience against a target group, (4) content and form of the speech, (5) extent of its dissemination and (6) likelihood of harm, including imminence.” Facebook’s Oversight Board recently endorsed the Plan as a framework for assessing the removal of posts that may incite hostility or violence.

Facebook has deviated very far from the Rabat standard thanks, in part, to the policy it is finally repudiating. For example, it has banned elected officials from parties disfavored by the U.S. government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government's list of designated terrorist organizations—despite not being legally obligated to do so. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obligated to do so after the leader was placed on a sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.

So this decision is a good step in the right direction. But Facebook has many steps to go, including finally—and publicly—endorsing and implementing the Santa Clara Principles.

But ultimately, the real problem is that Facebook’s policy choices have so much power in the first place. It’s worth noting that this move coincides with a massive effort to persuade the U.S. Congress to impose new regulations that are likely to entrench Facebook’s power over free expression in the U.S. and around the world. If users, activists and, yes, politicians want real progress in defending free expression, we must fight for a world where changes in Facebook’s community standards don’t merit headlines at all—because they just don’t matter that much.

 


