Civil Society Groups Seek More Time to Review, Comment on Rushed Global Treaty for Intrusive Cross-Border Police Powers
Electronic Frontier Foundation (EFF), European Digital Rights (EDRi), and 40 other civil society organizations urged the Council of Europe’s Parliamentary Assembly and Committee of Ministers to allow more time for much-needed analysis of, and feedback on, the flawed cross-border police surveillance treaty that the Council’s cybercrime committee rushed to approve without adequate privacy safeguards.
Digital and human rights groups were largely sidelined during the drafting of the Second Additional Protocol to the Budapest Convention, an international treaty that will establish global procedures for law enforcement in one country to access personal user data from technology companies in other countries. As work on the new Protocol began in 2017, the CoE Cybercrime Committee (T-CY), which oversees the Budapest Convention, adopted internal rules that narrowed the range of participants in the drafting process.
The process has been largely opaque and led by public safety and law enforcement officials. T-CY’s periodic consultations with civil society and the public have been criticized for their lack of detail, their short response timelines, and the absence of information about countries' deliberations on these issues. The T-CY rushed approval of the text on May 28th, signing off on provisions that place few limitations on, and provide little oversight of, police access to sensitive user data held by Internet companies around the world.
The Protocol now heads to the Council of Europe Parliamentary Assembly (PACE) Committee on Legal Affairs and Human Rights, which can recommend further amendments. We hope PACE will hear civil society’s privacy concerns and issue an opinion addressing the lack of adequate data protection safeguards.
In a letter, dated May 31st, to PACE President Rik Daems and Chair of the Committee of Ministers Péter Szijjártó, digital and human rights groups said the treaty will likely be used extensively, with far-reaching implications for the security and privacy of people everywhere. It is imperative that fundamental rights guaranteed in the European Convention on Human Rights and in other agreements not be sidestepped in favor of law enforcement access to user data without judicial oversight and strong privacy protections. The CoE’s plan is to finalize the Protocol's adoption by November and begin accepting signatures from countries sometime before 2022.
“We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement,” EFF and its allies said in the letter. “The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments.”
In 2018 EFF, along with 93 civil society organizations from across the globe, asked the T-CY to invite civil society as experts to the drafting plenary meetings, as is customary in other Council of Europe committee sessions. The goal was for the experts to listen to Member States' opinions and build on those discussions. But we could not work towards this goal since we were not invited to observe the drafting process. While EFF has participated in every public consultation of the T-CY process since our 2018 coalition letter, the level of participation allowed has failed to comply with meaningful multi-stakeholder principles of transparency, inclusion, and accountability. As Tamir Israel (CIPPIC) and Katitza Rodriguez (EFF) pointed out in their analysis of the Protocol:
With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.
The full text of the letter:
Re: Ensuring Meaningful Consultation in Cybercrime Negotiations
We, the undersigned individuals and organizations, write to ask for a meaningful opportunity to give the final draft text of the proposed second additional protocol to Convention 185, the Budapest Cybercrime Convention, the full and detailed consideration which it deserves. We specifically ask that you provide external stakeholders further opportunity to comment on the significant changes introduced to the text on the eve of the final consultation round ending on 6th May, 2021.
The Second Additional Protocol aims to standardise cross-border access by law enforcement authorities to electronic personal data. While competing initiatives are also underway at the United Nations and the OECD, the draft Protocol has the potential to become the global standard for such cross-border access, not least because of the large number of states which have already ratified the principal Convention. In these circumstances, it is imperative that the Protocol should lay down adequate standards for the protection of fundamental rights.
Furthermore, the initiative comes at a time when even routine criminal investigations increasingly include cross-border investigative elements and, in consequence, the protocol is likely to be used widely. The protocol therefore assumes great significance in setting international standards, and is likely to be used extensively, with far-reaching implications for privacy and human rights around the world. It is important that its terms are carefully considered and ensure a proportionate balance between the objective of securing or recovering data for the purposes of law enforcement and the protection of fundamental rights guaranteed in the European Convention on Human Rights and in other relevant national and international instruments.
In light of the importance of this initiative, many of us have been following this process closely and have participated actively, including at the Octopus Conference in Strasbourg in November, 2019 and the most recent and final consultation round which ended on 6th May, 2021.
Although many of us were able to engage meaningfully with the text as it stood in past consultation rounds, it is significant that these earlier iterations of the text were incomplete and lacked provisions to protect the privacy of personal data. In the event, the complete text of the draft Protocol was not publicly available before 12th April, 2021. The complete draft text introduces a number of significant alterations, most notably the inclusion of Article 14, which added for the first time proposed minimum standards for privacy and data protection. While external stakeholders were previously notified that these provisions were under active consideration and would be published in due course, the publication of the revised draft on 12th April offered the first opportunity to examine these provisions and consider other elements of the Protocol in the full light of these promised protections.
We were particularly pleased to see the addition of Article 14, and welcome its important underlying intent—to balance law enforcement objectives with fundamental rights. However, the manner in which this is done is, of necessity, complex and intricate, and, even on a cursory preliminary examination, it is apparent that there are elements of the article which require careful and thoughtful scrutiny, and which, in the light of that scrutiny, might be capable of improvement.
As a number of stakeholders have noted, the latest (and final) consultation window was too short. It is essential that adequate time is afforded to allow a meaningful analysis of this provision and that all interested parties be given a proper chance to comment. We believe that such continued engagement can serve only to improve the text.
The introduction of Article 14 is particularly detailed and transformative in its impact on the entirety of the draft Protocol. Keeping in mind the multiple national systems potentially impacted by the draft Protocol, providing meaningful feedback on this long anticipated set of safeguards within the comment window has proven extremely difficult for civil society groups, data protection authorities and a wide range of other concerned experts.
Complicating our analysis further are gaps in the Explanatory Report accompanying the draft Protocol. We acknowledge that the Explanatory Report might continue to evolve, even after the Protocol itself is finalised, but the absence of elaboration on a pivotal provision such as Article 14 poses challenges to our understanding of its implications and our resulting ability meaningfully to engage in this important treaty process.
We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement. The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments. Misalignments between Article 14 and existing legal frameworks on data protection such as Convention 108/108+ similarly demand careful scrutiny so that their implications are fully understood.
In these circumstances, we anticipate that the Council will wish to accord the highest priority to ensuring that fundamental rights are adequately safeguarded and that the consultation process is sufficiently robust to instill public confidence in the Protocol across the myriad jurisdictions which are to consider its adoption. The Council will, of course, appreciate that these objectives cannot be achieved without meaningful stakeholder input.
We are anxious to assist the Council in this process. In that regard, constructive stakeholder engagement requires a proper opportunity fully to assess the draft protocol in its entirety, including the many and extensive changes introduced in April 2021. We anticipate that the Council will share this concern, and to that end we respectfully suggest that the proposed text (inclusive of a completed explanatory report) be widely disseminated and that a minimum period of 45 days be set aside for interested stakeholders to submit comments.
We do realise that the T-CY Committee had hoped for an imminent conclusion to the drafting process. That said, adding a few months to a treaty process that has already spanned several years of internal drafting is both necessary and proportionate, particularly when the benefits of doing so will include improved public accountability and legitimacy, a more effective framework for balancing law enforcement objectives with fundamental rights, and a finalised text that reflects the considered input of civil society.
We very much look forward to continuing our engagement with the Council both on this and on future matters.
With best regards,
- Electronic Frontier Foundation (international)
- European Digital Rights (European Union)
- The Council of Bars and Law Societies of Europe (CCBE) (European Union)
- Access Now (International)
- ARTICLE19 (Global)
- ARTICLE19 Brazil and South America
- Association for Progressive Communications (APC)
- Association of Technology, Education, Development, Research and Communication - TEDIC (Paraguay)
- Asociación Colombiana de Usuarios de Internet (Colombia)
- Asociación por los Derechos Civiles (ADC) (Argentina)
- British Columbia Civil Liberties Association (Canada)
- Chaos Computer Club e.V. (Germany)
- Content Development & Intellectual Property (CODE-IP) Trust (Kenya)
- net (Sweden)
- Derechos Digitales (Latinoamérica)
- Digitale Gesellschaft (Germany)
- Digital Rights Ireland (Ireland)
- Danilo Doneda, Director of Cedis/IDP and member of the National Council for Data Protection and Privacy (Brazil)
- Electronic Frontier Finland (Finland)
- works (Austria)
- Fundación Acceso (Centroamérica)
- Fundacion Karisma (Colombia)
- Fundación Huaira (Ecuador)
- Fundación InternetBolivia.org (Bolivia)
- Hiperderecho (Peru)
- Homo Digitalis (Greece)
- Human Rights Watch (international)
- Instituto Panameño de Derecho y Nuevas Tecnologías - IPANDETEC (Central America)
- Instituto Beta: Internet e Democracia - IBIDEM (Brazil)
- Institute for Technology and Society - ITS Rio (Brazil)
- International Civil Liberties Monitoring Group (ICLMG)
- Iuridicium Remedium z.s. (Czech Republic)
- IT-Pol Denmark (Denmark)
- Douwe Korff, Emeritus Professor of International Law, London Metropolitan University
- Laboratório de Políticas Públicas e Internet - LAPIN (Brazil)
- Laura Schertel Mendes, Professor, Brasilia University and Director of Cedis/IDP (Brazil)
- Open Net Korea (Korea)
- OpenMedia (Canada)
- Privacy International (international)
- R3D: Red en Defensa de los Derechos Digitales (México)
- Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic - CIPPIC (Canada)
- Usuarios Digitales (Ecuador)
- org (Netherlands)
- Xnet (Spain)
1. See, for example, Access Now, comments on the draft 2nd Additional Protocol to the Budapest Convention on Cybercrime, available at: https://rm.coe.int/0900001680a25783; EDPB, contribution to the 6th round of consultations on the draft Second Additional Protocol to the Council of Europe Budapest Convention on Cybercrime, available at: https://edpb.europa.eu/system/files/2021-05/edpb_contribution052021_6throundconsultations_budapestconvention_en.pdf.
2. Alessandra Pierucci, Correspondence to Ms. Chloé Berthélémy, dated 17 May 2021; Consultative Committee of the Convention for the Protection of Individuals with Regard to Automated Processing of Personal Data, Directorate General Human Rights and Rule of Law, Opinion on Draft Second Additional Protocol, May 7, 2021, https://rm.coe.int/opinion-of-the-committee-of-convention-108-on-the-draft-second-additio/1680a26489; EDPB, see footnote 1; Joint Civil Society letter, 2 May 2021, available at: https://edri.org/wp-content/uploads/2021/05/20210420_LetterCoECyberCrimeProtocol_6thRound.pdf.
It took two and a half years and one national security incident, but Venmo did it, folks: users now have privacy settings to hide their friends lists.
EFF first pointed out the problem with Venmo friends lists in early 2019 with our "Fix It Already" campaign. While Venmo offered a setting to make your payments and transactions private, there was no option to hide your friends list. No matter how many settings you tinkered with, Venmo would show your full friends list to anyone else with a Venmo account. That meant an effectively public record of the people you exchange money with regularly, along with whoever the app might have automatically imported from your phone contact list or even your Facebook friends list. The only way to make a friends list “private” was to manually delete friends one at a time; turn off auto-syncing; and, when the app wouldn’t even let users do that, monitor for auto-populated friends and remove them one by one, too.
This public-no-matter-what friends list design was a privacy disaster waiting to happen, and it happened to the President of the United States. Using the app’s search tool and all those public friends lists, Buzzfeed News found President Biden’s account in less than 10 minutes, as well as those of members of the Biden family, senior staffers, and members of Congress. This appears to have been the last straw for Venmo: after more than two years of effectively ignoring calls from EFF, Mozilla, and others, the company has finally started to roll out privacy settings for friends lists.
As we’ve noted before, this is the bare minimum. Providing more privacy settings so users can opt out of the publication of their friends list is a step in the right direction. But what Venmo—and any other payment app—must do next is make privacy the default for transactions and friends lists, not just an option buried in the settings.
In the meantime, follow these steps to lock down your Venmo account:
- Tap the three lines in the top right corner of your home screen and select Settings near the bottom. From the settings screen, select Privacy and then Friends List. (If the Friends List option does not appear, try updating your app, restarting it, or restarting your phone.)
- By default, your friends list will be set to Public.
- Change the privacy setting to Private. If you do not wish to appear in your friends’ own friends lists—after all, they may not set theirs to private—click the toggle off at the bottom.
- Back on the Privacy settings page, set your default privacy option for all future payments to Private.
- Now select Past Transactions.
- Select Change All to Private.
- Confirm the change and click Change to Private.
- Now go all the way back to the main settings page, and select Friends & social.
- From here, you may see options to unlink your Venmo account from your Facebook account, Facebook friends list, and phone contact list. (Venmo may not give you all of these options if, for example, you originally signed up for Venmo with your Facebook account.) Click all the toggles off if possible.
Obviously your specific privacy preferences are up to you, but following the steps above should protect you from the most egregious snafus that the company has caused over the years with its public-by-default—or entirely missing—privacy settings. Although it shouldn't take a national security risk to force a company to focus on privacy, we're glad that Venmo has finally provided friends list privacy options.
Amazon Ring has announced that it will change the way police can request footage from millions of doorbell cameras in communities across the country. Rather than the current system, in which police can send automatic bulk email requests to individual Ring users in an area of interest up to half a square mile, police will now publicly post their requests to Ring’s accompanying Neighbors app. Users of that app will see a “Request for Assistance” on their feed, unless they opt out of seeing such requests, and then Ring customers in the area of interest (still up to half a square mile) can respond by reviewing and providing their footage.
Because only a portion of Ring users are also Neighbors users, and some of them may opt out of receiving police requests, this new system may reduce the number of people who receive police requests, though we wonder whether Ring will now push more of its users to register for the app.
This new model also may increase transparency over how police officers use and abuse the Ring system, especially as to people of color, immigrants, and protesters. Previously, in order to learn about police requests to Ring users, investigative reporters and civil liberties groups had to file public records requests with police departments, which consumed significant time and often yielded little information from recalcitrant agencies. Through this labor-intensive process, EFF revealed that the Los Angeles Police Department targeted Black Lives Matter protests in May and June 2020 with bulk Ring requests for doorbell camera footage that likely included First Amendment protected activities. Now, users will be able to see every digital request a police department has made to residents for Ring footage by scrolling through a department’s public page on the app.
But making it easier to monitor historical requests can only do so much. It certainly does not address the larger problem with Ring and Neighbors: the network is predicated on perpetuating irrational fear of neighborhood crime, often yielding disproportionate scrutiny against people of color, all for the purposes of selling more cameras. Ring does so through police partnerships, which now encompass 1 in every 10 police departments in the United States. At their core, these partnerships facilitate bulk requests from police officers to Ring customers for their camera footage, built on a growing Ring surveillance network of millions of public-facing cameras. EFF adamantly opposes these Ring-police partnerships and advocates for their dissolution.
Nor does new transparency about bulk officer-to-resident requests through Ring erase the long history of secrecy about these shady partnerships. For example, Amazon has provided free Ring cameras to police, and limited what police were allowed to say about Ring, even including the existence of the partnership.
Notably, Amazon has moved Ring functionality to its Neighbors app. Neighbors is a problematic technology. Like its peers Nextdoor and Citizen, it encourages its users to report supposedly suspicious people, often resulting in racially biased posts that endanger innocent residents and passersby.
Ring’s small reforms invite bigger questions: Why does a customer-focused technology company need to develop and maintain a feature for law enforcement in the first place? Why must Ring and other technology companies continue to offer police free features to facilitate surveillance and the transfer of information from users to the government?
Here’s some free advice for Ring: Want to make your product less harmful to vulnerable populations? Stop facilitating their surveillance and harassment at the hands of police.
Maryland and Montana Pass the Nation’s First Laws Restricting Law Enforcement Access to Genetic Genealogy Databases
Last week, Maryland and Montana passed laws requiring judicial authorization to search consumer DNA databases in criminal investigations. These are welcome and important restrictions on forensic genetic genealogy searching (FGGS)—a law enforcement technique that has become increasingly common and impacts the genetic privacy of millions of Americans.
Consumer personal genetics companies like Ancestry, 23andMe, GEDMatch, and FamilyTreeDNA host the DNA data of millions of Americans. The data users share with consumer DNA databases is extensive and revealing. The genetic profiles stored in those databases are made up of more than half a million single nucleotide polymorphisms (“SNPs”) that span the entirety of the human genome. These profiles can not only reveal family members and distant ancestors, they can divulge a person’s propensity for various diseases like breast cancer or Alzheimer’s and can even predict addiction and drug response. Some researchers have even claimed that human behaviors such as aggression can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces, to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply. Companies can even create images of what they think a person looks like based just on their genetic data.
Law enforcement regularly accesses this intensely private and sensitive data too, using FGGS. Just like consumers, officers take advantage of the genetics companies’ powerful algorithms to try to identify familial relationships between an unknown forensic sample and existing site users. These familial relationships can then lead law enforcement to possible suspects. However, in using FGGS, officers are rifling through the genetic data of millions of Americans who are not suspects in the investigation and have no connection to the crime whatsoever. This is not how criminal investigations are supposed to work. As we have argued before, the language of the Fourth Amendment, which requires probable cause for every search and particularity for every warrant, precludes dragnet warrantless searches like these. A technique’s usefulness for law enforcement does not outweigh people’s privacy interests in their genetic data.
Up until now, nothing has prevented law enforcement from rifling through the genetic data of millions of unsuspecting and innocent Americans. The new laws in Maryland and Montana should change that.

Here’s What the New Laws Require

Maryland:
Maryland’s law is very broad and covers much more than FGGS. It requires judicial authorization for FGGS and places strict limits on when and under what conditions law enforcement officers may conduct FGGS. For example, FGGS may only be used in cases of rape, murder, felony sexual offenses, and criminal acts that present “a substantial and ongoing threat to public safety or national security.” Before officers can pursue FGGS, they must certify to the court that they have already tried searching existing, state-run criminal DNA databases like CODIS, that they have pursued other reasonable investigative leads, and that those searches have failed to identify anyone. And FGGS may only be used with consumer databases that have provided explicit notice to users about law enforcement searches and sought consent from those users. These meaningful restrictions ensure that FGGS does not become the default first search conducted by law enforcement and limits its use to crimes that society has already determined are the most serious.
The Maryland law regulates other important aspects of genetic investigations as well. For example, it places strict limits on and requires judicial oversight for the covert collection of DNA samples from both potential suspects and their genetic relatives, something we have challenged several times in the courts. This is a necessary protection because officers frequently and secretly collect and search DNA from free people in criminal investigations involving FGGS. We cannot avoid shedding carbon copies of our DNA, and we leave it behind on items in our trash, an envelope we lick to seal, or even the chairs we sit on, making it easy for law enforcement to collect our DNA without our knowledge. We have argued that the Fourth Amendment precludes covert collection, but until courts have a chance to address this issue, statutory protections are an important way to reinforce our constitutional rights.
The new Maryland law also mandates informed consent in writing before officers can collect DNA samples from third parties and precludes covert collection from someone who has refused to provide a sample. It requires destruction of DNA samples and data when an investigation ends. It also requires licensing for labs that conduct DNA sequencing used for FGGS and for individuals who perform genetic genealogy. It creates criminal penalties for violating the statute and a private right of action with liquidated damages so that people can enforce the law through the courts. It requires the governor’s office to report annually and publicly on law enforcement use of FGGS and covert collection. Finally, it states explicitly that criminal defendants may use the technique as well to support their defense (but places similar restrictions on use). All of these requirements will help to rein in the unregulated use of FGGS.

Montana:
In contrast to Maryland’s 16-page comprehensive statute, Montana’s is only two pages and less clearly drafted. However, it still offers important protections for people identified through FGGS.
Montana’s statute requires a warrant before government entities can use familial DNA or partial match search techniques on either consumer DNA databases or the state’s criminal DNA identification index.[1] The statute defines a “familial DNA search” broadly as a search that uses “specialized software to detect and statistically rank a list of potential candidates in the DNA database who may be a close biological relative to the unknown individual contributing the evidence DNA profile.” This is exactly what the software used by consumer genetic genealogy sites like GEDmatch and FamilyTreeDNA does. The statute also applies to companies like Ancestry and 23andMe that do their own genotyping in-house, because it covers “lineage testing,” which it defines as “[SNP] genotyping to generate results related to a person's ancestry and genetic predisposition to health-related topics.”
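To make the statute's "statistically rank a list of potential candidates" language concrete, here is a deliberately simplified sketch of how such ranking works in principle. This is not any vendor's actual algorithm—real kinship estimation compares hundreds of thousands of SNPs and matches long shared segments—and all profiles, rsIDs, and genotypes below are invented for illustration.

```python
# Toy illustration of familial-search ranking: score each database
# profile by the fraction of SNP positions whose genotypes match the
# unknown forensic sample, then rank candidates highest first.
# All data here is hypothetical.

def match_score(profile_a, profile_b):
    """Fraction of shared SNP positions with identical genotypes."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return 0.0
    matches = sum(1 for rsid in shared if profile_a[rsid] == profile_b[rsid])
    return matches / len(shared)

def rank_candidates(unknown, database):
    """Rank database users by genotype similarity to the unknown sample."""
    scored = [(name, match_score(unknown, prof)) for name, prof in database.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical genotypes at a handful of SNPs (rsID -> allele pair).
unknown_sample = {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AT"}
database = {
    "user_close_relative": {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AA"},
    "user_unrelated":      {"rs1": "GG", "rs2": "CT", "rs3": "AA", "rs4": "AA"},
}

for name, score in rank_candidates(unknown_sample, database):
    print(f"{name}: {score:.2f}")
```

The point of the sketch is the privacy problem the statute targets: the search necessarily scores every person in the database, relative and stranger alike, even though almost none of them have any connection to the crime.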
The statute also requires a warrant for other kinds of searches of consumer DNA databases, like when law enforcement is looking for a direct user of the consumer DNA database. Unfortunately, though, the statute includes a carve-out to this warrant requirement if “the consumer whose information is sought previously waived the consumer’s right to privacy,” but does not explain how an individual consumer may waive their privacy rights. There is no carve-out for familial searches.
By creating stronger protections for people who are identified through familial searches but who haven’t uploaded their own data, Montana’s statute recognizes an important point that we and others have been making for a few years—you cannot waive your privacy rights in your genetic information when someone else has control over whether your shared DNA ends up in a consumer database.
It is unfortunate, though, that this seems to come at the expense of existing users of consumer genetics services. Montana should have extended warrant protections to everyone whose DNA data ends up in a consumer DNA database. A bright line rule would have been better for privacy and perhaps easier for law enforcement to implement since it is unclear how law enforcement will determine whether someone waived their privacy rights in advance of a search.
We need more states—and the federal government—to pass restrictions on genetic genealogy searches. Some companies, like Ancestry and 23andMe, prevent direct access to their databases and have fought law enforcement demands for data. However, other companies, like GEDmatch and FamilyTreeDNA, have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country. By 2018, FGGS had already been used in at least 200 cases. Officers never sought a warrant or any legal process at all in any of those cases because there were no state or federal laws explicitly requiring them to do so.
While EFF has argued that FGGS is a dragnet search technique that should never be allowed (even with a warrant), Montana’s and Maryland’s laws are still a step in the right direction, especially where, as in Maryland, an outright ban previously failed. Our genetic data is too sensitive and important to leave to the whims of private companies to protect, or to the unbridled discretion of law enforcement to search.
- [1] The restriction on warrantless familial and partial match searching of government-run criminal DNA databases is particularly welcome. Most states do not explicitly limit these searches (Maryland is an exception and explicitly bans this practice), even though many, including a federal government working group, have questioned their efficacy.
Dating is risky. Aside from the typical worries of possible rejection or lack of romantic chemistry, LGBTQIA+ people often have added safety considerations to keep in mind. Sometimes staying in the proverbial closet is a matter of personal security. Even if someone is open with their community about being LGBTQ+, they can be harmed by oppressive governments, bigoted law enforcement, and individuals with hateful beliefs. So here’s some advice for staying safe while online dating as an LGBTQIA+ person:

Step One: Threat Modeling
The first step is making a personal digital security plan. You should start by looking at your own security situation from a high level. This is often called threat modeling and risk assessment. Simply put, this means taking inventory of the things you want to protect and the adversaries or risks you might be facing. In the context of online dating, your protected assets might include details about your sexuality, gender identity, contacts of friends and family, HIV status, political affiliation, etc.
Let's say that you want to join a dating app, chat over the app, exchange pictures, meet someone safely, and avoid stalking and harassment. Threat modeling is how you assess what you want to protect and from whom.
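For readers who like to write things down, the inventory described above can be sketched as a simple structured list of assets, adversaries, and mitigations. Here is one illustrative way to organize it; every entry is a hypothetical example, not a recommendation for your situation:

```python
# An illustrative threat-model inventory; every entry below is a
# hypothetical example, not a recommendation for your situation.
threat_model = {
    "assets": ["sexuality", "gender identity", "HIV status", "contacts"],
    "adversaries": ["stalkers", "hostile governments", "data brokers"],
    "mitigations": {
        "profile photo reveals location": "use photos with plain backgrounds",
        "email address links accounts": "register with a unique address",
    },
}

# Walk the inventory: for each asset, ask which adversaries threaten it
# and which mitigation applies.
for asset in threat_model["assets"]:
    print(f"Protect: {asset}")
```

Writing the inventory down, in whatever form, makes it easier to revisit as your situation changes.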
In this post we touch on a few considerations for people in countries where homosexuality is criminalized, which may include targeted harassment by law enforcement. But this guide is by no means comprehensive; refer to materials by LGBTQ+ organizations in those countries for tips specific to your threat model.

Securely Setting Up Dating Profiles
When making a new dating account, make sure to use a unique email address to register. Often you will need to verify the registration process through that email account, so it’s likely you’ll need to provide a legitimate address. Consider creating an email address strictly for that dating app. Oftentimes there are ways to discover if an email address is associated with an account on a given platform, so using a unique one can prevent others from potentially knowing you’re on that app. Alternatively, you might use a disposable temporary email address service. But if you do so, keep in mind that you won’t be able to access it in the future, such as if you need to recover a locked account.
The same logic applies to using phone numbers when registering for a new dating account. Consider using a temporary or disposable phone number. While this can be more difficult than using your regular phone number, there are plenty of free and paid virtual telephone services available that offer secondary phone numbers. For example, Google Voice is a service that offers a secondary phone number attached to your normal one, registered through a Google account. If your higher security priority is to abstain from giving data to a privacy-invasive company like Google, a “burner” pay-as-you-go phone service like Mint Mobile is worth checking out.
When choosing profile photos, be mindful of images that might accidentally give away your location or identity. Even the smallest clues in an image can expose its location. Some people use pictures with relatively empty backgrounds, or taken in places they don’t go to regularly.
Make sure to check out the privacy and security sections in your profile settings menu. You can usually configure how others can find you, whether you’re visible to others, whether location services are on (that is, whether the app can track your location through your phone), and more. Turn off anything that gives away your location or other information; later you can selectively decide which features to reactivate, if any. More mobile phone privacy information can be found in this Surveillance Self-Defense guide.

Communicating via Phone, Email, or In-App Messaging
Generally speaking, using an end-to-end encrypted messaging service is the best way to go for secure texting. For some options like Signal or WhatsApp, you may be able to use a secondary phone number to keep your “real” phone number private.
For phone calls, you may want to use a virtual phone service that allows you to screen calls, use secondary phone numbers, block numbers, and more. These aren’t always free, but research can bring up “freemium” versions that give you free access to limited features.
Be wary of messaging features within apps that offer deletion options or disappearing messages, like Snapchat. Many images and messages sent through these apps are never truly deleted and may still exist on the company’s servers. And even if you send someone a message that self-deletes or notifies you if they take a screenshot, that person can still take a picture of it with another device, bypassing any notifications. Also, Snapchat has a map feature that shows live public posts around the world as they go up. With diligence, someone could determine your location by tracing any public posts you make through this feature.

Sharing Photos
If the person you’re chatting with has earned a bit of your trust and you want to share pictures with them, consider not just what they can see about you in the image itself, as described above, but also what they can learn about you by examining data embedded in the file.
EXIF metadata lives inside an image file and describes where the photo was taken, the device it was made with, the date, and more. Although some apps have gotten better at automatically withholding EXIF data from uploaded images, you still should manually remove it from any images you share with others, especially if you send them directly over phone messaging.
One quick way is to send the image to yourself on Signal messenger, which automatically strips EXIF data. When you search for your own name in contacts, a “Note to Self” option appears, giving you a chat screen for sending things to yourself.
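If you prefer to strip metadata on your own device, here is a minimal sketch in Python, assuming the third-party Pillow library is installed; the file names are hypothetical:

```python
# A minimal sketch of stripping EXIF metadata, assuming the third-party
# Pillow library is installed (file names below are hypothetical).
from PIL import Image

def strip_exif(src_path, dst_path):
    """Re-save only the pixel data; the metadata block is left behind."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

# Demo: build a tiny JPEG carrying an EXIF "Make" tag, then strip it.
demo = Image.new("RGB", (8, 8), "red")
exif = Image.Exif()
exif[271] = "ExampleCam"          # EXIF tag 271 = camera make
demo.save("demo.jpg", exif=exif.tobytes())

strip_exif("demo.jpg", "demo_clean.jpg")
print(dict(Image.open("demo_clean.jpg").getexif()))
```

Because re-saving copies only the pixels, the EXIF block never makes it into the new file; it is still worth double-checking the output with an EXIF viewer before sharing.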
For some people, it might be valuable to use a watermarking app to add your username or some kind of signature to images. This can verify who you are to others and prevent anyone from using your images to impersonate you. There are many free and mostly-free options in the iPhone and Android app stores. Consider a lightweight version that allows you to easily place text on an image and lets you screenshot the result. Keep in mind that watermarking a picture is a quick way to identify yourself, which in itself is a trade-off.

Sexting Safely
Much of what we’ve already gone over will step up your security when it comes to sexting, but here are some extra precautions:
Seek clearly communicated consent between you and romantic partners about how intimate pictures can be shared or saved. This is great non-technical security at work. If anyone else is in an image you want to share, make sure you have their consent as well. Also, be thoughtful as to whether or not to include your face in any images you share.
As we mentioned above, your location can be determined by public posts you make and Snapchat’s map application.
For video chatting with a partner, consider a service like Jitsi, which allows temporary rooms, requires no registration, and is designed with privacy in mind. Many other services, by contrast, require account registration and are not built with privacy in mind.

Meeting Someone AFK
Say you’ve taken all the above precautions, someone online has gained your trust, and you want to meet them away-from-keyboard, in real life. Always meet first somewhere public and occupied with other people. Even better, meet in an area likely to be accepting of LGBTQIA+ people. Tell a friend beforehand all the details about where you’re going and whom you are meeting, and agree on a time when you will check in to confirm that you’re OK.
If you’re living in one of the 69 countries where homosexuality is illegal and criminalized, make sure to check in with local advocacy groups about your area. Knowing your rights as a citizen will help keep you safe if you’re stopped by law enforcement.

Privacy and Security is a Group Effort
Although the world is often hostile to non-normative expressions of love and identity, your personal security, online and off, is much better supported when you include the help of others that you trust. Keeping each other safe, accountable, and cared for gets easier when you have more people involved. A network is always stronger when every node on it is fortified against potential threats.
Happy Pride Month—keep each other safe.
The Council of Europe Cybercrime Committee's (T-CY) recent decision to approve new international rules for law enforcement access to user data without strong privacy protections is a blow to global human rights in the digital age. The final version of the draft Second Additional Protocol to the Council of Europe’s (CoE) widely adopted Budapest Cybercrime Convention, approved by the T-CY drafting committee on May 28th, places few limits on law enforcement data collection. As such, the Protocol can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections and weaken everyone's right to privacy and free expression across the globe.
The Protocol now heads to the CoE's Parliamentary Assembly (PACE) for its opinion. PACE’s Committee on Legal Affairs and Human Rights can recommend further amendments and decide which ones will be adopted by the Standing Committee or the Plenary. Then the Committee of Ministers will vote on whether to integrate PACE's recommendations into the final text. The CoE’s plan is to finalize the Protocol's adoption by November. If adopted, the Protocol will be open for signature by any country that has signed the Budapest Convention sometime before 2022.
The next step for countries comes at the signature stage, when they may reserve the right not to abide by certain provisions in the Protocol, especially Article 7 on direct cooperation between law enforcement and companies holding user data.
If countries sign the Protocol as it stands and in its entirety, it will reshape how state police access digital data from Internet companies based in other countries by prioritizing law enforcement demands, sidestepping judicial oversight, and lowering the bar for privacy safeguards.

CoE’s Historical Commitment to Transparency Conspicuously Absent
While transparency and a strong commitment to engaging with external stakeholders have been hallmarks of CoE treaty development, the new Protocol’s drafting process lacked robust engagement with civil society. The T-CY adopted internal rules that have fostered a largely opaque process, led by public safety and law enforcement officials. T-CY’s periodic consultations with external stakeholders and the public have lacked important details, offered short response timelines, and failed to meaningfully address criticisms.
In 2018, nearly 100 public interest groups called on the CoE to allow for expert civil society input on the Protocol’s development. In 2019, the European Data Protection Board (EDPB) similarly called on T-CY to ensure “early and more proactive involvement of data protection authorities” in the drafting process, a call it felt the need to reiterate earlier this year. And when presenting the Protocol’s draft text for final public comment, T-CY provided only 2.5 weeks, a timeframe that the EDPB noted “does not allow for a timely and in-depth analysis” from stakeholders. That version of the Protocol also failed to include the explanatory text for the data protection safeguards, which was only published later, in the final version of May 28, without public consultation. Even other branches of the CoE, such as its data protection committee, have found it difficult to provide meaningful input under these conditions.
Last week, over 40 civil society organizations called on CoE to provide an additional opportunity to comment on the final text of the Protocol. The Protocol aims to set a new global standard across countries with widely varying commitments to privacy and human rights. Meaningful input from external stakeholders including digital rights organizations and privacy regulators is essential. Unfortunately, CoE refused and will likely vote to open the Protocol for state signatures starting in November.
With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.

Eroding Global Protection for Online Anonymity
The new Protocol provides few safeguards for online anonymity, posing a threat to the safety of activists, dissidents, journalists, and the free expression rights of everyday people who go online to comment on and criticize politicians and governments. When Internet companies turn subscriber information over to law enforcement, the real-world consequences can be dire. Anonymity also plays an important role in facilitating opinion and expression online and is necessary for activists and protestors around the world. Yet the new Protocol fails to acknowledge the important privacy interests it places in jeopardy and, by ensuring most of its safeguards are optional, permits police access to sensitive personal data without systematic judicial supervision.
As a starting point, the new Protocol’s explanatory text claims that “subscriber information … does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” deeming it less intrusive than other categories of data.
This characterization is directly at odds with growing recognition that police frequently use subscriber data access to identify deeply private anonymous communications and activity. Indeed, the Court of Justice of the European Union (CJEU) recently held that letting states associate subscriber data with anonymous digital activity can constitute a ‘serious’ interference with privacy. The Protocol’s attempt to paint identification capabilities as non-intrusive even conflicts with the case law of CoE’s own European Court of Human Rights (ECtHR), which warned that doing so would “deny the necessary protection to information which might reveal a good deal about the online activity of an individual, including sensitive details of his or her interests, beliefs and intimate lifestyle.” By encoding the opposite conclusion in an international protocol, the new explanatory text can deter future courts from properly recognizing the importance of online anonymity.
Articles 7 and 8 of the Protocol in particular adopt intrusive police powers while requiring few safeguards. Under Article 7, states must clear all legal obstacles to “direct cooperation” between local companies and law enforcement. Any privacy laws that prevent Internet companies from voluntarily identifying customers to foreign police without a court order are incompatible with Article 7 and must be amended. “Direct cooperation” is intended to be the primary means of accessing subscriber data, but Article 8 provides a supplementary power to force disclosure from companies that refuse to cooperate. While Article 8 does not require judicial supervision of police, countries with strong privacy protections may continue relying on their own courts when forcing a local service provider to identify customers. Both Articles 7 and 8 also allow countries to screen and refuse any subscriber data demands that might threaten a state’s essential interests. But these screening mechanisms also remain optional, and refusals are to be “strictly limited,” with the need to protect private data invoked only in “exceptional cases.”
By leaving most privacy and human rights protections to each state’s discretion, Articles 7 and 8 permit access to sensitive identification data under conditions that the ECtHR described as “offer[ing] virtually no protection from arbitrary interference … and no safeguards against abuse by State officials.”
The Protocol’s drafters have resisted calls from civil society and privacy regulators to require some form of judicial supervision in Articles 7 and 8. Some police agencies object to reliance on the courts, arguing that judicial supervision leads to slower results. But systemic involvement of the courts is a critical safeguard when access to sensitive personal data is at stake. The Office of the Privacy Commissioner of Canada put it cogently: “Independent judicial oversight may take time, but it’s indispensable in the specific context of law enforcement investigations.” Incorporating judicial supervision as a minimum threshold for cross-border access is also feasible. Indeed, a majority of states in T-CY’s own survey require prior judicial authorization for at least some forms of subscriber data in their respective national laws.
At a minimum, the new Protocol text is flawed for its failure to recognize the deeply private nature of anonymous online activity and the serious threat posed to human rights when state officials are allowed open-ended access to identification data. Granting states this access makes the world less free and seriously threatens free expression. Article 7’s emphasis on non-judicial ‘cooperation’ between police and Internet companies poses a particularly insidious risk and must not form part of the final adopted Protocol.

Imposing Optional Privacy Standards
Article 14, which was recently publicized for the first time, is intended to provide detailed safeguards for personal information. Many of these protections are important, imposing limits on the treatment of sensitive data, the retention of personal data, and the use of personal data in automated decision-making, particularly in countries without data protection laws. The detailed protections are complex, and civil society groups continue to unpack their full legal impact. That being said, some shortcomings are immediately evident.
Some of Article 14’s protections actively undermine privacy—for example, paragraph 14.2.a prohibits signatories from imposing any additional “generic data protection conditions” when limiting the use of personal data. Paragraph 14.1.d also strictly limits when a country’s data protection laws can prevent law enforcement-driven personal data transfers to another country.
More generally, and in stark contrast to the Protocol’s lawful access obligations, the detailed privacy safeguards encoded in Article 14 are not mandatory and can be ignored if countries have other arrangements in place (Article 14.1). States can rely on a wide variety of agreements to bypass the Article 14 protections. The OECD is currently negotiating an agreement that might systematically displace the Article 14 protections and, under the United States Clarifying Lawful Overseas Use of Data (CLOUD) Act, the U.S. executive branch can enter into “agreements” with other states to facilitate law enforcement transfers. Paragraph 14.1.c even contemplates informal agreements that are neither binding, nor even public, meaning that countries can secretly and systematically bypass the Article 14 safeguards. No real obligations are put in place to ensure these alternative arrangements provide an adequate or even sufficient level of privacy protection. States can therefore rely on the Protocol’s law enforcement powers while using side agreements to bypass its privacy protections, a particularly troubling development given the low data protection standards of many anticipated signatories.
The Article 14 protections are also problematic because they appear to fall short of the minimum data protection that the CJEU has required. The full list of protections in Article 14, for example, resembles that inserted by the European Commission into its ‘Privacy Shield’ agreement. Internet companies relied upon the Privacy Shield to facilitate economic transfers of personal data from the European Union (EU) to the United States until the CJEU invalidated the agreement in 2020, finding its privacy protections and remedies insufficient. Similarly, clause 14.6 limits the use of personal data in purely automated decision-making systems that would have significant adverse effects on relevant individual interests. But the CJEU has also found that an international agreement for transferring air passenger data to Canada for public safety objectives was inconsistent with EU data protection guarantees, despite the inclusion of a similar provision.

Conclusion
These and other substantive problems with the Protocol are concerning. Cross-border data access is rapidly becoming common in even routine criminal investigations, as every aspect of our lives continues its steady migration to the digital world. Instead of baking robust human rights and privacy protections into cross-border investigations, the Protocol discourages court oversight, renders most of its safeguards optional, and generally weakens privacy and freedom of expression.
We are happy to see the news that Facebook is putting an end to a policy that has long privileged the speech of politicians over that of ordinary users. The policy change, which was reported on Friday by The Verge, is something that EFF has been pushing for since as early as 2019.
Back then, Facebook executive Nick Clegg, a former politician himself, famously pondered: "Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be."
Perhaps Clegg had a point—we’ve long said that companies are ineffective arbiters of what the world says—but that hardly justifies holding politicians to a lower standard than the average person. International standards will consider the speaker, but only as one of many factors. For example, the United Nations’ Rabat Plan of Action outlines a six-part threshold test that takes into account “(1) the social and political context, (2) status of the speaker, (3) intent to incite the audience against a target group, (4) content and form of the speech, (5) extent of its dissemination and (6) likelihood of harm, including imminence.” Facebook’s Oversight Board recently endorsed the Plan as a framework for assessing the removal of posts that may incite hostility or violence.
Facebook has deviated very far from the Rabat standard thanks, in part, to the policy it is finally repudiating. For example, it has banned elected officials from parties disfavored by the U.S. government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government's list of designated terrorist organizations—despite not being legally obligated to do so. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obligated to do so after the leader was placed on a sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.
So this decision is a good step in the right direction. But Facebook has many steps to go, including finally—and publicly—endorsing and implementing the Santa Clara Principles.
But ultimately, the real problem is that Facebook’s policy choices have so much power in the first place. It’s worth noting that this move coincides with a massive effort to persuade the U.S. Congress to impose new regulations that are likely to entrench Facebook power over free expression in the U.S. and around the world. If users, activists and, yes, politicians want real progress in defending free expression, we must fight for a world where changes in Facebook’s community standards don’t merit headlines at all—because they just don’t matter that much.
Our friends at Access Now are once again hosting RightsCon online next week, June 7-11th. The summit provides an opportunity for human rights experts, technologists, government representatives, and activists to discuss pressing human rights challenges and their potential solutions. This year several EFF staffers will be attending and leading sessions throughout the five-day conference.
We hope you have an opportunity to connect with us at the following:

Monday, June 7th
7:45-8:45 am PDT – Taking stock of the Facebook Oversight Board’s first year
Director of International Freedom of Expression, Jillian York
This panel will take stock of the work of the Oversight Board since the announcement of its first members in May 2020. Panelists will critically reflect on the development of the Board over its first year and consider its future evolution.
9:00-10:00 am PDT – Upload filters for copyright enforcement take over the world: connecting struggles in EU, US, and Latin America
Associate Director of Policy and Activism, Katharine Trendacosta
The EU copyright directive risks making upload filters mandatory for large and small platforms to prevent copyright infringements by their users. Its adoption has led to large street protests over the risk that legal expression will be curtailed in the process. Although the new EU rules will only be applied from the summer, similar proposals have already emerged in the US and Latin America, citing the EU copyright directive as a role model.
11:30-12:30 pm PDT – What's past is prologue: safeguarding rights in international efforts to fight cybercrime
Note: This strategy session is limited to 25 participants.
Policy Director for Global Privacy, Katitza Rodriguez
Some efforts by governments to fight cybercrime can fail to respect international human rights law and standards, undermining people’s fundamental rights. At the UN, negotiations on a new cybercrime treaty are set to begin later this year. This would be the first global treaty on cybercrime and has the potential to significantly shape government cooperation on cybercrime and respect for rights.

Wednesday, June 9th
8:30-9:30 am PDT – Contemplating content moderation in Africa: disinformation and hate speech in focus
Director of International Freedom of Expression, Jillian York.
Note: This community lab is limited to 60 participants.
While social media platforms have been applauded for being a place of self-expression, they are often inattentive to local contexts in many ethnically and linguistically diverse African countries. This panel brings together academics and civil society members to highlight and discuss the obstacles facing Africa when it comes to content moderation.

Thursday, June 10th
5:30-6:30 am PDT – Designing for language accessibility: making usable technologies for non-left-to-right languages
Designer and Education Manager, Shirin Mori.
Note: This community lab is limited to 60 participants.
The internet was built from the ground up for ASCII characters, leaving billions of speakers of languages that do not use the Latin alphabet underserved. We’d like to introduce a number of distinct problems for non-Left-to-Right (LTR) language users that are prevalent in existing security tools and workflows.
8:00-9:00 am PDT – “But we're on the same side!”: how tools to fight extremism can harm counterspeech
Director of International Freedom of Expression, Jillian York.
Bad actors coordinate across platforms to spread content that is linked to offline violence but not deemed TVEC by platforms; centralized responses have proved error-prone, and their errors propagate too easily across the Internet. Can actual dangerous speech be addressed without encouraging such dangerous centralization?
12:30-1:30 pm PDT – As AR/VR becomes a reality, it needs a human rights framework
Policy Director for Global Privacy, Katitza Rodriguez; Grassroots Advocacy Organizer, Rory Mir; Deputy Executive Director and General Counsel, Kurt Opsahl.
Note: This community lab is limited to 60 participants.
Virtual reality and augmented reality technologies (VR/AR) are rapidly becoming more prevalent to a wider audience. This technology holds the promise to entertain and educate, to connect and enhance our lives, and even to help advocate for our rights. But it also raises the risk of eroding those rights online.

Friday, June 11th
9:15-10:15 am PDT – Must-carry? The pros and cons of requiring online services to carry all user speech: litigation strategies and outcomes
Civil Liberties Director, David Greene.
Note: This community lab is limited to 60 participants.
The legal issue of whether online services must carry user speech is a complicated one, with free speech values on both sides, and different results arising from different national legal systems. Digital rights litigators from around the world will meet to map out the legal issue across various national legal systems and discuss ongoing efforts as well as how the issue may be decided doctrinally under our various legal regimes.
In addition to these events, EFF staff will be attending many other events at RightsCon, and we look forward to meeting you there. You can view the full programming, as well as many other useful resources, on the RightsCon site.
We all know that, in the 21st century, it is difficult to lead a life without a cell phone. It is also difficult to change your number—the one that all your friends, family, doctors, and children’s schools have for you. It’s especially difficult to do these things if you are trying to leave an abusive situation in which your abuser controls your family plan and therefore has access to your phone records. Thankfully, Congress is considering a bill that would change that.
In August 2020, EFF joined with the Clinic to End Tech Abuse and other groups dedicated to protecting survivors of domestic violence to send a letter to Congress, calling on it to pass a federal law creating the right for survivors to leave a family mobile phone plan they share with their abuser.
This January, Senators Brian Schatz, Deb Fischer, Richard Blumenthal, Rick Scott, and Jacky Rosen responded to the letter by introducing The Safe Connections Act (S. 120), which would make it easier for survivors to separate their phone line from a family plan while keeping their own phone number. It would also require the FCC to create rules to protect the privacy of the people seeking this protection. EFF is supportive of this bill.
The bill received bipartisan support and passed unanimously out of the U.S. Senate Committee on Commerce, Science, & Transportation on April 28, 2021. While there is still a long way to go, EFF is pleased to see this bill get past the first critical step. There is little reason that telecommunications carriers, which are already required to make numbers portable when users want to change carriers, cannot offer an equally seamless process when a paying customer wants to move an account within the same carrier.
Our cell phones contain a vast amount of information about our lives, including the calls and texts we make and receive. The account holder of a family plan has access to all of that data, including if someone in the plan is calling a domestic violence hotline. Giving survivors more tools to protect their privacy, leave abusive situations, and get on with their lives are worthy endeavors. The Safe Connections Act provides a framework to serve these ends.
We would prefer a bill that did not require survivors to provide paperwork to “prove” their abuse—for many survivors, providing paperwork about their abuse from a third party is burdensome and traumatic, especially when it is required at the very moment when they are trying to free themselves from their abusers. The bill also requires the FCC to create new regulations to protect the privacy of people seeking help to leave abusive situations, though it still needs stronger safeguards and remedies to ensure these protections are effective. EFF will continue to advocate for these improvements as the legislation moves forward.
Van Buren is a Victory Against Overbroad Interpretations of the CFAA, and Protects Security Researchers
The Supreme Court’s Van Buren decision today overturned a dangerous precedent and clarified the notoriously ambiguous meaning of “exceeding authorized access” in the Computer Fraud and Abuse Act, the federal computer crime law that’s been misused to prosecute beneficial and important online activity.
The decision is a victory for all Internet users, as it affirmed that online services cannot use the CFAA’s criminal provisions to enforce limitations on how or why you use their service, including for purposes such as collecting evidence of discrimination or identifying security vulnerabilities. It also rejected the use of troubling physical-world analogies and legal theories to interpret the law, which in the past have resulted in some of its most dangerous abuses.
The Van Buren decision is especially good news for security researchers, whose work discovering security vulnerabilities is vital to the public interest but often requires accessing computers in ways that contravene terms of service. Under the Department of Justice’s reading of the law, the CFAA allowed criminal charges against individuals for any website terms of service violation. But a majority of the Supreme Court rejected the DOJ’s interpretation. And although the high court did not narrow the CFAA as much as EFF would have liked, leaving open the question of whether the law requires circumvention of a technological access barrier, it provided good language that should help protect researchers, investigative journalists, and others.
The CFAA makes it a crime to “intentionally access a computer without authorization or exceed authorized access, and thereby obtain . . . information from any protected computer,” but does not define what authorization means for purposes of exceeding authorized access. In Van Buren, a former Georgia police officer was accused of taking money in exchange for looking up a license plate in a law enforcement database. This was a database he was otherwise entitled to access, and Van Buren was charged with exceeding authorized access under the CFAA. The Eleventh Circuit analysis had turned on the computer owner’s unilateral policies regarding use of its networks, allowing private parties to make EULA, TOS, or other use policies criminally enforceable.
The Supreme Court rightly overturned the Eleventh Circuit, and held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Rather, the statute’s prohibition is limited to someone who “accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” The Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. If you need to break through a digital gate to get in, entry is a crime, but if you are allowed through an open gateway, it’s not a crime to be inside.
This means that private parties’ terms-of-service limitations on how you can use information, or for what purposes you can access it, are not criminally enforceable through the CFAA. For example, if you can look at housing ads as a user, it is not a hacking crime to pull them for your bias-in-housing research project, even if the terms of service forbid it. Van Buren is also good news for port scanning: so long as a computer is open to the public, you don’t have to worry that its conditions of use make scanning its ports a crime.
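To make the port-scanning example concrete, here is a minimal sketch of the basic building block of a scan: asking the operating system to open a TCP connection and seeing whether it succeeds. (The host and port you point it at are your choice; even after Van Buren, scanning other people’s systems can still run afoul of terms of service or other laws.)

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This is the core of a simple port scan: attempt a connection
    and report whether the port accepted it.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return sock.connect_ex((host, port)) == 0
```

A full scanner just loops this check over a range of ports; tools like nmap add speed and stealth, but the underlying question to the remote machine is the same.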
While the decision was centered around the interpretation of the statute’s text, the Court bolstered its conclusion with the policy concerns raised by the amici, including a brief EFF filed on behalf of computer security researchers and organizations that employ and support them. The Court’s explanation is worth quoting in depth:
If the “exceeds authorized access” clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals. Take the workplace. Employers commonly state that computers and electronic devices can be used only for business purposes. So on the Government’s reading of the statute, an employee who sends a personal e-mail or reads the news using her work computer has violated the CFAA. Or consider the Internet. Many websites, services, and databases … authorize a user’s access only upon his agreement to follow specified terms of service. If the “exceeds authorized access” clause encompasses violations of circumstance-based access restrictions on employers’ computers, it is difficult to see why it would not also encompass violations of such restrictions on website providers’ computers. And indeed, numerous amici explain why the Government’s reading [would] criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook.
This analysis shows the Court recognized the tremendous danger of an overly broad CFAA, and explicitly rejected the Government’s arguments for retaining wide powers, tempered only by their prosecutorial discretion.
Left Unresolved: Whether CFAA Violations Require Technical Access Limitations
The Court’s decision was limited in one important respect. In a footnote, the Court left open whether an enforceable access restriction means only “technological (or ‘code-based’) limitations on access, or instead also looks to limits contained in contracts or policies”; the opinion neither adopted nor rejected either path. EFF has argued in courts and legislative reform efforts for many years that it’s not a computer hacking crime without hacking through a technological defense.
This footnote is a bit odd, as the bulk of the majority opinion seems to point toward the law requiring someone to defeat technological limitations on access, and throws shade at criminalizing TOS violations. In most cases, the scope of your access once on a computer is defined by technology, such as an access control list or a requirement to reenter a password. Professor Orin Kerr suggested that this may have been a necessary limitation to build the six-justice majority.
Later in the Van Buren opinion, the Court rejected a Government argument that a rule against “using a confidential database for a non-law-enforcement purpose” should be treated as a criminally enforceable access restriction, different from “using information from the database for a non-law-enforcement purpose” (emphasis original). This makes sense under the “gates-up-or-down” approach adopted by the Court. Together with the policy issues the Court acknowledged regarding enforcing terms of service quoted above, this helps us understand the limitation footnote, suggesting cleverly writing a TOS will not easily turn a conditional rule on why you can access, or what you can do with information later, into a criminally enforceable access restriction.
Nevertheless, leaving the question open means that we will have to litigate whether and under what circumstances a contract or written policy can amount to an access restriction in the years to come. For example, in Facebook v. Power Ventures, the Ninth Circuit found that a cease and desist letter removing authorization was sufficient to create a CFAA violation for later access, even though a violation of the Facebook terms alone was not. Service providers will likely argue that this is the sort of non-technical access restriction that was left unresolved by Van Buren.
Court’s Narrow CFAA Interpretation Should Help Security Researchers
Even though the majority opinion left this important CFAA question unresolved, the decision still offers plenty of language that will be helpful in later cases on the scope of the statute. That’s because the Van Buren majority’s focus on the CFAA’s technical definitions, and the types of computer access that the law restricts, should provide lower courts with guidance that narrows the law’s reach.
This is a win because broad CFAA interpretations have in the past often deterred or chilled important security research and investigative journalism. The CFAA put these activities in legal jeopardy, in part, because courts have often strained to interpret the statute using non-digital legal concepts and physical analogies. Indeed, one of the principal disagreements between the Van Buren majority and dissent is whether the CFAA should be interpreted based on physical property law doctrines, such as trespass and theft.
The majority opinion ruled that, in principle, computer access is different from the physical world precisely because the CFAA contains so many technical terms and definitions. “When interpreting statutes, courts take note of terms that carry ‘technical meaning[s],’” the majority wrote.
The rule is particularly true for the CFAA because it focuses on malicious computer use and intrusions, the majority wrote. For example, the term “access” in the context of computer use has its own specific, well established meaning: “In the computing context, ‘access’ references the act of entering a computer ‘system itself’ or a particular ‘part of a computer system,’ such as files, folders, or databases.” Based on that definition, the CFAA’s “exceeding authorized access” restriction should be limited to prohibiting “the act of entering a part of the system to which a computer user lacks access privileges.”
The majority also recognized that the portions of the CFAA that define damage and loss are premised on harm to computer files and data, rather than general non-digital harm such as trespassing on another person’s property: “The statutory definitions of ‘damage’ and ‘loss’ thus focus on technological harms—such as the corruption of files—of the type unauthorized users cause to computer systems and data,” the Court wrote. This is important because loss and damage are prerequisites to civil CFAA claims, and the ability of private entities to enforce the CFAA has been a threat that deters security research when companies might rather their vulnerabilities remain unknown to the public.
Because the CFAA’s definitions of loss and damages focus on harm to computer files, systems, or data, the majority wrote that they “are ill fitted, however, to remediating ‘misuse’ of sensitive information that employees may permissibly access using their computers.”
The Supreme Court’s Van Buren decision rightly limits the CFAA’s prohibition on “exceeding authorized access” to prohibiting someone from accessing particular computer files, services, or other parts of the computer that are otherwise off-limits to them. And the Court’s overturning of the Eleventh Circuit decision that permitted CFAA liability based on someone violating a website’s terms of service or an employer’s computer use restrictions ensures that lots of important, legitimate computer use is not a crime.
But there is still more work to be done to ensure that computer crime laws are not misused against researchers, journalists, activists, and everyday internet users. As longtime advocates against overbroad interpretations of the CFAA, EFF will continue to lead efforts to push courts and lawmakers to further narrow the CFAA and similar state computer crime laws so they can no longer be misused.
Related Cases: Van Buren v. United States
It’s hard to believe that, when Governor Newsom identifies a total of $7 billion for California’s legislature to spend on broadband access, drawn from a mix of state surplus dollars and federal rescue money to invest in broadband infrastructure, the legislature would do nothing.
Tell Your Lawmakers to Support the Governor's Broadband Plan
It is hard to believe that, when handed an amount that would finance a fiber connection to the Internet for every single Californian over the next five years, allow the state to address an urgent broadband crisis worsened by the pandemic, and give us a way to start ending the digital divide now, the legislature would rather waste time we can’t afford and think it over.
But that is exactly what California’s legislature has proposed this week. Can you believe it?
Tucked away on page 12 of this 153-page budget document from the legislature this week is the following plan for Governor Newsom’s proposal to help connect every Californian to 21st-century access:
Broadband. Appropriates $7 billion over a multi-year period for broadband infrastructure and improved access to broadband services throughout the state. Details will continue to be worked out through three party negotiations. Administrative flexibilities will enable the appropriated funds to be accelerated to ensure they are available as needed to fund the expansion and improvements.
What this says is that the legislature wants to approve $7 billion for broadband infrastructure but does not want to authorize the governor to carry out his proposal any time soon.
There’s no excuse for this. Lawmakers have been given a lot of detail on this proposal, and ask anyone in the public and they would say we need action right now. This cannot be what passes in Sacramento next week as part of establishing California’s budget. At the very least, the legislature needs to give the Governor the clear authority to begin the planning process of deploying public fiber infrastructure to all Californians. This is a long process, which requires feasibility studies, environmental assessments, contracting with construction crews, and setting up purchases of materials. All of this takes months of time to process before any construction can even start and delaying even this first basic step pushes back the date we end the digital divide in California.Wasting Time Risks Federal Money and Will Perpetuate the Digital Divide
Federal rescue dollars must be spent quickly, or they will be returned to the federal government. Those are explicit rules from Congress and the Biden Administration attached to the rescue funds issued these last few months. Right now, there is a global rush for fiber broadband deployment that is putting a lot of pressure on the manufacturers and workforce that build fiber-optic cable. In other words, more and more of the world is catching on to what EFF stated years ago: 21st-century broadband access is built on fiber optics. Each day California sits out deploying this infrastructure puts us further back in the queue and further delays actual construction.
Therefore, if Sacramento does not immediately authorize at least the planning phase of building out a statewide middle mile open-access fiber network—along with empowering local governments, non-profits, and cooperatives to draft their own fiber plans to deploy last mile connectivity—then we risk losing that valuable federal money. The state has a real opportunity, but only if it acts now, not months from now. California even has a chance to jump the line ahead of the rest of the country as Congress continues to debate about its own broadband infrastructure plan.
For the state that has made famous the little girls doing homework in fast-food parking lots because they lacked affordable robust internet access at home, it is irresponsible to look at $7 billion and not start the process to solve the problem. That’s exactly what will happen if the California legislature doesn’t hear from you.
Call your Assemblymember and Senator now to demand they approve Governor Newsom’s broadband plan next week to fix the digital divide now. This is the time to act on ending the digital divide, not continue talking about it.
TELL YOUR LAWMAKERS TO SUPPORT THE GOVERNOR'S BROADBAND PLAN
This blog post is part of a series, looking at the public interest internet—the parts of the internet that don’t garner the headlines of Facebook or Google, but quietly provide public goods and useful services without requiring the scale or the business practices of the tech giants. Read our first two parts or our introduction.
Last time, we saw how much of the early internet’s content was created by its users—and subsequently purchased by tech companies. By capturing and monopolizing this early data, these companies were able to monetize and scale this work faster than the network of volunteers that first created it for use by everybody. It’s a pattern that has happened many times in the network’s history: call it the enclosure of the digital commons. Despite this familiar story, the older public interest internet has continued to survive side-by-side with the tech giants it spawned: unlikely and unwilling to pull in the big investment dollars that could lead to accelerated growth, but also tough enough to persist in its own ecosystem. Some of these projects you’ve heard of—Wikipedia, or the GNU free software project, for instance. Some, because they fill smaller niches and aren’t visible to the average Internet user, are less well-known. The public interest internet fills the spaces between tech giants like dark matter; invisibly holding the whole digital universe together.
Sometimes, the story of a project’s switch to the commercial model is better known than its continuing existence in the public interest space. The notorious example in our third post was the commercialization of the publicly-built CD Database (CDDB): when a commercial offshoot of this free, user-built database, Gracenote, locked down access, forks like freedb and gnudb continued to offer the service free to its audience of participating CD users.
Gracenote’s co-founder, Steve Scherf, claimed that without commercial investment, CDDB’s free alternatives were doomed to “stagnation”. While alternatives like gnudb have survived, it’s hard to argue that either freedb or gnudb have innovated beyond their original goal of providing and collecting CD track listings. Then again, that’s exactly what they set out to do, and they’ve done it admirably for decades since.
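The service that freedb and gnudb carried forward is built around the classic CDDB disc ID: a 32-bit fingerprint computed from a disc’s table of contents, which clients use to look up track listings. A minimal sketch of that well-known calculation follows (the frame offsets in the example are made up for illustration):

```python
FRAMES_PER_SECOND = 75  # CD audio addresses positions in 75 frames per second

def _digit_sum(n: int) -> int:
    """Sum of the decimal digits of n (the checksum step of the CDDB ID)."""
    return sum(int(d) for d in str(n))

def cddb_disc_id(track_offsets: list[int], leadout_offset: int) -> str:
    """Compute the classic 8-hex-digit CDDB disc ID.

    track_offsets: start of each track in frames (including the standard
    2-second / 150-frame lead-in); leadout_offset: end of the disc in frames.
    """
    # Checksum over the digit sums of each track's start time in seconds
    checksum = sum(_digit_sum(off // FRAMES_PER_SECOND) for off in track_offsets)
    # Total playing time in seconds, first track to lead-out
    total_seconds = (leadout_offset // FRAMES_PER_SECOND
                     - track_offsets[0] // FRAMES_PER_SECOND)
    disc_id = ((checksum % 255) << 24) | (total_seconds << 8) | len(track_offsets)
    return f"{disc_id:08x}"

# Hypothetical 3-track disc:
print(cddb_disc_id([150, 15000, 30000], 45000))  # → 08025603
```

Because the ID packs only a coarse checksum, total length, and track count into 32 bits, collisions between different discs are possible, which is one reason MusicBrainz later adopted a stronger identifier.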
But can innovation and growth take place within the public interest internet? CDDB’s commercialization parlayed its initial market into a variety of other music-based offerings. Their development of these products led to them being purchased, at various points, by AV manufacturer Escient, Sony, Tribune Media, and most recently, Nielsen. Each sale made money for its investors. Can a free alternative likewise build on its beginnings, instead of just preserving them for its original users?
MusicBrainz, a Community-Driven Alternative to Gracenote
Among the CDDB users thrown by its switch to a closed system in the 1990s was Robert Kaye. Kaye was a music lover and, at the time, a coder working on one of the earliest MP3 encoders and players at Xing. Now he and a small staff work full-time on MusicBrainz, a community-driven alternative to Gracenote. (Disclosure: EFF special advisor Cory Doctorow is on the board of MetaBrainz, the non-profit that oversees MusicBrainz).
“We were using CDDB in our service,” he told me from his home in Barcelona. “Then one day, we received a notice that said you guys need to show our [Escient, CDDB’s first commercial owner] logo when a CD is looked up. This immediately screwed over blind users who were using a text interface of another open source CD player that couldn’t comply with the requirement. And it pissed me off because I’d typed in a hundred or so CDs into that database… so that was my impetus to start the CD index, which was the precursor to MusicBrainz.”
MusicBrainz has continued ever since to offer a CDDB-compatible CD metadata database, free for anyone to use. The bulk of its user-contributed data has been put into the public domain, and supplementary data—such as extra tags added by volunteers—is provided under a non-commercial, attribution license.
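As a sketch of how a client might use that free service today, here is how one could build a disc-ID lookup against MusicBrainz’s JSON web service and pull release titles out of the response. The disc ID and the abbreviated sample response below are hypothetical; the real response schema is documented in the MusicBrainz API reference.

```python
import json
from urllib.parse import quote

API_ROOT = "https://musicbrainz.org/ws/2"  # MusicBrainz web service root

def discid_lookup_url(disc_id: str) -> str:
    """Build a disc-ID lookup URL, asking for JSON output."""
    return f"{API_ROOT}/discid/{quote(disc_id)}?fmt=json"

def release_titles(response_text: str) -> list[str]:
    """Extract release titles from a disc-ID lookup response body."""
    data = json.loads(response_text)
    return [release["title"] for release in data.get("releases", [])]

# Hypothetical, heavily abbreviated response a lookup might return:
sample = '{"releases": [{"title": "Example Album", "date": "1999"}]}'
print(discid_lookup_url("xp5tz6rE4OHrBafj0bLfDRMGK48-"))
print(release_titles(sample))  # → ['Example Album']
```

In practice a client would fetch the URL with an HTTP library and a descriptive User-Agent (which MusicBrainz asks clients to set), but the open, no-registration shape of the request is the point: anyone can slot this into a CD player or tagger.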
Over time, MusicBrainz has expanded by creating other publicly available, free-to-use databases of music data, often as a fallback for when other projects commercialize and lock down. For instance, Audioscrobbler was an independent system that collected information on what music you’ve listened to (no matter on what platform you heard it), to learn and provide recommendations based on its users’ contributions, but under your control. It was merged into Last.fm, an early Spotify-like streaming service, which was then sold to CBS. When CBS seemed to be neglecting the “scrobbling” community, MusicBrainz created ListenBrainz, which re-implemented features that had been lost over time. The plan, says Kaye, is to create a similarly independent recommendation system.
While the new giants of Internet music—Spotify, Apple Music, Amazon—have been building closed machine-learning models to data-mine their users, and their musical interests, MusicBrainz has been working in the open with Barcelona's Pompeu Fabra University to derive new metadata from the MusicBrainz communities’ contributions. Automatic deductions of genre, mood, beats-per-minute and other information are added to the AcousticBrainz database for everyone to use. These algorithms learn from their contributors’ corrections, and the fixes they provide are added to the commonwealth of public data for everyone to benefit from.
MusicBrainz’ aspirations sound in synchrony with the early hopes of the Internet, and after twenty years, they appear to have proven the Internet can support and expand a long-term public good, as opposed to a proprietary, venture capital-driven growth model. But what’s to stop the organization from going the same way as those other projects with their lofty goals? Kaye works full-time on MusicBrainz along with eight other employees: what’s to say that they’re not exclusively profiteering from the wider unpaid community in the same way as larger companies like Google benefit from their users’ contributions?
MusicBrainz has some good old-fashioned pre-Internet institutional protections. It is managed as a 501(c) non-profit, the MetaBrainz Foundation, which places some theoretical constraints on how it might be bought out. Another old Internet value is radical transparency, and the organization has that in spades. All of its financial transactions, from profit and loss sheets to employment costs, to its server outlay and board meeting notes are published online.
Another factor, says Kaye, is keeping a clear delineation between the work done by MusicBrainz’s paid staff and the work of the MusicBrainz volunteer community. “My team should work on the things that aren’t fun to work on. The volunteers work on the fun things,” he says. When you're running a large web service built on the contributions of a community, there’s no end of volunteers for interesting projects, but, as Kaye notes, “there's an awful lot of things that are simply not fun, right? Our team is focused on doing these things.” It helps that MetaBrainz, the foundation, hires almost exclusively from long-term MusicBrainz community members.
Perhaps MusicBrainz’s biggest defense against its own decline is the software (and data) licenses it uses for its databases and services. In the event of the organization’s separation from the desires of its community, all its composition and output—its digital assets, the institutional history—are laid out so that the community can clone its structure, and create another, near-identical, institution closer to its needs. The code is open source; the data is free to use; the radical transparency of the financial structures means that the organization itself can be reconstructed from scratch if need be.
Such forks are painful. Anyone who has recently watched the volunteer staff and community of Freenode, the distributed Internet Relay Chat (IRC) network, part ways with the network’s owner and start again at Libera.chat, will have seen this. Forks can be divisive in a community, and can be reputationally devastating to those who are abandoned by the community they claimed to lead and represent. MusicBrainz staff’s livelihood depends on its users in a way that even the most commercially sensitive corporation does not.
It’s unlikely that a company would place its future viability so directly in the hands of its users. But it’s this self-imposed sword of Damocles hanging over Rob Kaye and his staff’s heads that fuels the communities’ trust in their intentions.
Where Does the Money Come From?
Open licenses, however, can also make it harder for projects to gather funding to persist. Where does MusicBrainz' money come from? If anyone can use their database for free, why don’t all their potential revenue sources do just that, free-riding off the community without ever paying back? Why doesn’t a commercial company reproduce what MusicBrainz does, using the same resources that a community would use to fork the project?
MusicBrainz’s open finances show that, despite those generous licenses, they’re doing fine. The project’s transparency lets us see that it brought in around $400K in revenue in 2020, and had $400K in costs (it experienced a slight loss, but other years have been profitable enough to make this a minor blip). The revenue comes as a combination of small donors and larger sponsors, including giants like Google, who use MusicBrainz’ data and pay for a support contract.
Given that those sponsors could free-ride, how does Kaye get them to pay? He has some unorthodox strategies (most famously, sending a cake to Amazon to get them to honor a three-year-old invoice), but the most common reason seems to be that an open database maintainer that is responsive to a wider community is also easier for commercial concerns to interface with, both technically and contractually. Technologists building out a music tool or service turn to MusicBrainz for the same reason as they might pick an open source project: it’s just easier to slot it into their system without having to jump through authentication hoops or begin negotiations with a sales team. Then, when a company forms around that initial hack, its executives eventually realize that they now have a real dependency on a project with whom they have no contractual or financial relationship. A support contract means that they have someone to call up if it goes down; a financial relationship means that it’s less likely to disappear tomorrow.
Again, commercial alternatives may make the same offer, but while a public interest non-profit like MusicBrainz might vanish if it fails its community, or simply runs out of money, those other private companies may well have other reasons to exit their commitments with their customers. When Sony bought Gracenote, it was presumably partly so that they could support their products that used Gracenote’s databases. After Sony sold Gracenote, they ended up terminating their own use of the databases. Sony announced to their valued customers in 2019 that Sony Blu-Ray and Home Theater products would no longer have CD and DVD recognition features. The same thing happened to Sony’s mobile Music app in 2020, which stopped being able to recognize CDs when it was cut off from Gracenote’s service. We can have no insight into these closed, commercial deals, but we can presume that Sony and Gracenote’s new owner could not come to an amicable agreement.
By contrast, if Sony had used MusicBrainz’ data, they would have been able to carry on regardless. They’d be assured that no competitor would buy out MusicBrainz from under them, or lock their products out of an advertised feature. And even if MusicBrainz the non-profit died, there would be a much better chance that an API-compatible alternative would spring up from the ashes. If it was that important, Sony could have supported the community directly. As it is, Sony paid $260 million for Gracenote. For their CD services, at least, they could have had a more stable service deal with MusicBrainz for $1500 a month.
Over two decades after the user rebellion that created it, MusicBrainz continues to tick along. Its staff is drawn from music fans around the world, and meets up every year with a conference paid for by the MusicBrainz Foundation. Its contributors know that they can always depend on its data staying free; its paying customers know that they can always depend on its data being usable in their products. MusicBrainz staff can be assured that they won’t be bought up by big tech, and they can see the budget that they have to work with.
It’s not perfect. A transparent non-profit that aspires to internet values can be as flawed as any other. MusicBrainz suffered a reputational hit last year when personal data leaked from its website, for instance. But by continuing to exist, even with such mistakes, and despite multiple economic downturns, it demonstrates that a non-profit, dedicated to the public interest, can thrive without stagnating, or selling its users out.
But, but, but. While it’s good to know public interest services are successful in niche territories like music recognition, what about the parts of the digital world that really seem to need a more democratic, decentralized alternative—and yet notoriously lack them? Sites like Facebook, Twitter, and Google have not only built their empires from others’ data, they have locked their customers in, apparently with no escape. Could an alternative, public interest social network be possible? And what would that look like?
We'll cover these in a later part of our series. (For a sneak preview, check out the recorded discussions at “Reimagining the Internet”, from our friends at the Knight First Amendment Institute at Columbia University and the Initiative on Digital Public Infrastructure at the University of Massachusetts, Amherst, which explore in-depth many of the topics we’ve discussed here.)
This post was co-written by EFF Legal Intern Lara Ellenberg
In going after internet service providers (ISPs) for the actions of just a few of their users, Sony Music, other major record labels, and music publishing companies have found a way to cut people off of the internet based on mere accusations of copyright infringement. When these music companies sued Cox Communications, an ISP, the court got the law wrong. It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital internet access as ISPs start to cut off more and more customers to avoid massive damages.
EFF, together with the Center for Democracy & Technology, the American Library Association, the Association of College and Research Libraries, the Association of Research Libraries, and Public Knowledge filed an amicus brief this week urging the U.S. Court of Appeals for the Fourth Circuit to protect internet subscribers’ access to essential internet services by overturning the district court’s decision.
The district court agreed with Sony that Cox is responsible when its subscribers—home and business internet users—infringe the copyright in music recordings by sharing them on peer-to-peer networks. It effectively found that Cox didn’t terminate accounts of supposedly infringing subscribers aggressively enough. An earlier lawsuit found that Cox wasn’t protected by the Digital Millennium Copyright Act’s (DMCA) safe harbor provisions that protect certain internet intermediaries, including ISPs, if they comply with the DMCA’s requirements. One of those requirements is implementing a policy of terminating “subscribers and account holders … who are repeat infringers” in “appropriate circumstances.” The court ruled in that earlier case that Cox didn’t terminate enough customers who had been accused of infringement by the music companies.
In this case, the same court found that Cox was on the hook for the copyright infringement of its customers and upheld the jury verdict of $1 billion in damages—by far the largest amount ever awarded in a copyright case.
The District Court Got the Law Wrong
When an ISP isn’t protected by the DMCA’s safe harbor provision, it can sometimes be held responsible for copyright infringement by its users under “secondary liability” doctrines. The district court found Cox liable under both varieties of secondary liability—contributory infringement and vicarious liability—but misapplied both of them, with potentially disastrous consequences.
An ISP can be contributorily liable if it knew that a customer infringed on someone else’s copyright but didn’t take “simple measures” available to it to stop further infringement. Judge O’Grady’s jury instructions wrongly implied that because Cox didn’t terminate infringing users’ accounts, it failed to take “simple measures.” But the law doesn’t require ISPs to terminate accounts to avoid liability. The district court improperly imported a termination requirement from the DMCA’s safe harbor provision (which was already knocked out earlier in the case). In fact, the steps Cox took short of termination actually stopped most copyright infringement—a fact the district court simply ignored.
The district court also got it wrong on vicarious liability. Vicarious liability comes from the common law of agency. It holds that people who are a step removed from copyright infringement (the “principal,” for example, a flea market operator) can be held liable for the copyright infringement of its “agent” (for example, someone who sells bootleg DVDs at that flea market), when the principal had the “right and ability to supervise” the agent. In this case, the court decided that because Cox could terminate accounts accused of copyright infringement, it had the ability to supervise those accounts. But that’s not how other courts have ruled. For example, the Ninth Circuit decided in 2019 that Zillow was not responsible when some of its users uploaded copyrighted photos to real estate listings, even though Zillow could have terminated those users’ accounts. In reality, ISPs don’t supervise the Internet activity of their users. That would require a level of surveillance and control that users won’t tolerate, and that EFF fights against every day.
The consequence of getting the law wrong on secondary liability here, combined with the $1 billion damage award, is that ISPs will terminate accounts more frequently to avoid massive damages, and cut many more people off from the internet than is necessary to actually address copyright infringement.

The District Court’s Decision Violates Due Process and Harms All Internet Users
Not only did the decision get the law on secondary liability wrong, it also offends basic ideas of due process. In a different context, the Supreme Court decided that civil damages can violate the Constitution’s due process requirement when the amount is excessive, especially when it fails to consider the public interests at stake. In the case against Cox, the district court ignored both the fact that a $1 billion damages award is excessive, and that its decision will cause ISPs to terminate accounts more readily and, in the process, cut off many more people from the internet than necessary.
Having robust internet access is an important public interest, but when ISPs start over-enforcing to avoid having to pay billion-dollar damages awards, that access is threatened. Millions of internet users rely on shared accounts, for example at home, in libraries, or at work. If ISPs begin to terminate accounts more aggressively, the impact will be felt disproportionately by the many users who have done nothing wrong but only happen to be using the same internet connection as someone who was flagged for copyright infringement.
More than a year after the start of the COVID-19 pandemic, it's more obvious than ever that internet access is essential for work, education, social activities, healthcare, and much more. If the district court’s decision isn’t overturned, many more people will lose access in a time when no one can afford not to use the internet. That harm will be especially felt by people of color, poorer people, women, and those living in rural areas—all of whom rely disproportionately on shared or public internet accounts. And since millions of Americans have access to just a single broadband provider, losing access to a (shared) internet account essentially means losing internet access altogether. This loss of broadband access because of stepped-up termination will also worsen the racial and economic digital divide. This is not just unfair to internet users who have done nothing wrong, but also overly harsh in the case of most copyright infringers. Being effectively cut off from society when an ISP terminates your account is excessive, given the actual costs of non-commercial copyright infringement to large corporations like Sony Music.
It's clear that Judge O’Grady misunderstood the impact of losing Internet access. In a hearing on Cox’s earlier infringement case in 2015, he called concerns about losing access “completely hysterical,” and compared them to “my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework.” Of course, this wasn’t a valid comparison in 2015 and it rightly sounds absurd today. That’s why, as the case comes before the Fourth Circuit, we’re asking the court to get the law right and center the importance of preserving internet access in its decision.
Supreme Court Overturns Overbroad Interpretation of CFAA, Protecting Security Researchers and Everyday Users
EFF has long fought to reform vague, dangerous computer crime laws like the CFAA. We're gratified that the Supreme Court today acknowledged that overbroad application of the CFAA risks turning nearly any user of the Internet into a criminal based on arbitrary terms of service. We remember the tragic and unjust results of the CFAA's misuse, such as the death of Aaron Swartz, and we will continue to fight to ensure that computer crime laws no longer chill security research, journalism, and other novel and interoperable uses of technology that ultimately benefit all of us.
EFF filed briefs both encouraging the Court to take today's case and urging it to make clear that violating terms of service is not a crime under the CFAA. In the first, filed alongside the Center for Democracy and Technology and New America’s Open Technology Institute, we argued that Congress intended to outlaw computer break-ins that disrupted or destroyed computer functionality, not anything that the service provider simply didn’t want to have happen. In the second, filed on behalf of computer security researchers and organizations that employ and support them, we explained that the broad interpretation of the CFAA puts computer security researchers at legal risk for engaging in socially beneficial security testing through standard security research practices, such as accessing publicly available data in a manner beneficial to the public yet prohibited by the owner of the data.
Today's win is an important victory for users everywhere. The Court rightly held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Thus, “an individual ‘exceeds authorized access’ when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” Rejecting the Government’s reading allowing CFAA charges for any website terms of service violation, the Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. This means that private parties’ terms of service limitations on how you can use information, or for what purposes you can access it, are not criminally enforced by the CFAA.

Related Cases: Van Buren v. United States
The EU copyright directive has caused more controversy than any other proposal in recent EU history - and for good reason. In abandoning traditional legal mechanisms to tackle copyright infringement online, Article 17 (formerly Article 13) of the directive introduced a new liability regime for online platforms, supposedly in order to support creative industries, that will have disastrous consequences for users. In a nutshell: To avoid being held responsible for illegal content on their services, online platforms must act as copyright cops, bending over backwards to ensure infringing content is not available on their platforms. As a practical matter (as EFF and other user rights advocates have repeatedly explained) this means Article 17 is a filtering mandate.
But all was not lost - the EU Commission had an opportunity to stand up for users and independent creators by mitigating Article 17's threat to free expression. Unfortunately, it has chosen instead to stand up for a small but powerful group of copyright maximalists.
EU "Directives" are not automatically applicable laws. Once a directive is passed, EU member states must “transpose” them into national law. These transpositions are now the center of the fight against copyright upload filters. In several meetings of an EU Commission's Stakeholder Dialogue and through consultations developing guidelines for the application of Article 17 (which must be implemented in national laws by June 7, 2021) EFF and other civil society groups stressed that users' rights to free speech are not negotiable and must apply when they upload content, not during a later complaint stage.
The first draft of the guidance document seemed to recognize those concerns and prioritize user rights. But the final result, issued today, is disappointing. On the plus side, the EU Commission stresses that Article 17 does not mandate the use of specific technology to demonstrate "best efforts" to ensure users don't improperly upload copyright-protected content on platforms. However, the guidance document failed to state clearly that mandated upload filters undermine the fundamental rights protection of users. The EU Commission differentiates "manifestly" infringing uploads from other user uploads, but stresses the importance of rightsholders' blocking instructions, and the need to ensure they do not suffer "economic harm." And rather than focusing on how to ensure legitimate uses such as quotations or parodies, the Commission advises that platforms must give heightened attention to "earmarked" content. As a practical matter, that "heightened attention" is likely to require using filters to prevent users from uploading such content.
We appreciate that digital rights organizations had a seat at the stakeholder dialogue table, even though they were outnumbered by rightsholders from the music and film industries and representatives of big tech companies. And the guidance document contains a number of EFF suggestions for implementation, such as clarifying that specific technological solutions are not mandated, ensuring that smaller platforms are held to a lower standard of "best efforts," and respecting data protection law when interpreting Article 17. However, on the most crucial element - the risk of over-blocking of legitimate user content - the Commission simply describes "current market practices," including the use of content recognition technologies that inevitably over-block. Once again, user rights and exceptions take a backseat.
This battle to protect freedom of expression is far from over. Guidance documents are non-binding, and the EU Court of Justice will have the last say on whether Article 17 will lead to censorship and limit freedom of expression rights. Until then, national governments do not have discretion to transpose the requirements under Article 17 as they see fit, but an obligation to use the legislative leeway available to implement them in line with fundamental rights.
Larry Brandt, a long-time supporter of internet freedom, used his nearly 20-year-old PayPal account to put his money where his mouth is. His primary use of the payment system was to fund servers to run Tor nodes, routing internet traffic in order to safeguard privacy and avoid country-level censorship. Now Brandt’s PayPal account has been shut down, leaving many questions unanswered and showing how financial censorship can hurt the cause of internet freedom around the world.
Brandt first discovered his PayPal account was restricted in March of 2021. Brandt reported to EFF: “I tried to make a payment to the hosting company for my server lease in Finland. My account wouldn't work. I went to my PayPal info page which displayed a large vertical banner announcing my permanent ban. They didn't attempt to inform me via email or phone—just the banner.”
Brandt was unable to get the issue resolved directly through PayPal, and so he then reached out to EFF.
For years, EFF has been documenting instances of financial censorship, in which payment intermediaries and financial institutions shutter accounts and refuse to process payments for people and organizations that haven’t been charged with any crime. Brandt shared months of PayPal transactions with the EFF legal team, and we reviewed his transactions in depth. We found no evidence of wrongdoing that would warrant shutting down his account, and we communicated our concerns to PayPal. Given that the overwhelming majority of transactions on Brandt’s account were payments for servers running Tor nodes, EFF is deeply concerned that Brandt’s account was targeted for shut down specifically as a result of his activities supporting Tor.
We reached out to PayPal for clarification, to urge them to reinstate Brandt’s account, and to educate them about Tor and its value in promoting freedom and privacy globally. PayPal denied that the shutdown was related to the concerns about Tor, claiming only that “the situation has been determined appropriately” and refusing to offer a specific explanation. After several weeks, PayPal has still refused to reinstate Brandt’s account.
The Tor Project echoed our concerns, saying in an email: “This is the first time we have heard about financial persecution for defending internet freedom in the Tor community. We're very concerned about PayPal’s lack of transparency, and we urge them to reinstate this user’s account. Running relays for the Tor network is a daily activity for thousands of volunteers and relay associations around the world. Without them, there is no Tor—and without Tor, millions of users would not have access to the uncensored internet.”
One of the particularly concerning elements of Brandt’s situation is how automated his account shutdown was. After his PayPal account was shuttered, Brandt attempted to reach out to PayPal directly. As he explained to EFF: “I tried to contact them many times by email and phone. PayPal never responded to either. They have an online 'Resolution Center' but I never had a dialog with anyone there either.” The PayPal terms reference the Resolution Center as an option, but assert that PayPal has no obligation to disclose details to its users.
Many online service providers make it difficult or impossible for users to reach a human to resolve a problem with their services. That’s because employing people to resolve these issues often costs more than the small amounts they save by reinstating wrongfully banned accounts. Internet companies just aren’t incentivized to care about customer service. But while it may serve companies’ bottom lines to automate account shut downs and avoid human interaction, the experience for individual users is deeply frustrating.
EFF, along with the ACLU of Northern California, New America’s Open Technology Institute, and the Center for Democracy and Technology have endorsed the Santa Clara principles, which attempt to guide companies in centering human rights in their decisions to ban users or take down content. In particular, the third principle is that “Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.” Our advocacy has already pressured companies like Facebook, Twitter, and YouTube to endorse the Santa Clara principles—but so far, PayPal has not. Brandt’s account was shut down without notice, he was given no opportunity to appeal, and he was given no clarity on what actions resulted in his account being shut down, nor whether this was in relation to a violation of PayPal’s terms – and, if so, which part of those terms.
We are concerned about situations such as Brandt’s not only because of the harm and inconvenience caused to one user, but because of the societal harms from patterns of account closures. When a handful of online payment services can dictate who has access to financial services, they can also determine which people and which services get to exist in our increasingly digital world. While tech giants like Google and Facebook have come under fire for their content moderation practices and wrongfully banning accounts, financial services haven’t gotten the same level of scrutiny.
But if anything, financial intermediaries should be getting the most scrutiny. Access to financial services directly impacts one’s ability to survive and thrive in modern society, and is the only way that most websites can process payments. We’ve seen the havoc that financial censorship can wreak on online booksellers, music sharing sites, and the whistleblower website Wikileaks. PayPal has already made newsworthy mistakes, like automatically freezing accounts that have transactions that mention words such as “Syria.” In that case, PayPal temporarily froze the account of News Media Canada over an article about Syrian refugees that was entered into their annual awards competition.
EFF is calling on PayPal to do better by its customers, and that starts by embracing the Santa Clara principles. Specifically, we are calling on them to:
- Publish a transparency report. A transparency report will indicate how many accounts PayPal is shutting down in response to government requests, and we’d urge them to additionally indicate how many accounts they shut down for other reasons, including terms of service violations, as well as how many Suspicious Activity Reports they file. Other online financial services, including most recently Coinbase, have already begun publishing transparency reports, and there’s no reason PayPal can’t do the same.
- Provide meaningful notice to users. If PayPal is choosing to shut down someone’s account, they should provide detailed guidance about what aspect of PayPal’s terms were violated or why the account was shut down, unless forbidden from doing so by a legal prohibition or in cases of suspected account takeover. This is a powerful mechanism for holding companies back from over-reliance on automated account suspensions.
- Adopt a meaningful appeal process. If a user’s PayPal account is shut down, they should have an opportunity to appeal to a person that was not involved in the initial decision to shut down the account.
Brandt agreed that part of the problem boils down to PayPal failing to prioritize the experience of users: “Good customer service and common sense would have suggested that they call me and discuss my PayPal activities or at least send me an email to tell me to stop. Then the company would be better equipped to make an informed decision about banning. But I think customer service is not so much in their best interests.”
Increased transparency into the patterns of financial censorship will help human rights advocates analyze patterns of abuse among financial intermediaries, and scrutiny from civil society can act as a balancing force against companies which are otherwise not incentivized to keep accounts on. For every example such as Brandt’s, in which a financial account was summarily shuttered without any opportunity to appeal, there are likely countless others that EFF doesn’t hear about or have an opportunity to document.
For now, Brandt is not backing down. While he can’t use PayPal anymore, he’s still committed to supporting the Tor network by continuing to pay for servers around the world using alternative means, and he urges other people to think about what they can do to help support Tor in the future: “Tor is of critical importance for anyone requiring anonymity of location or person….I'm talking about millions of people in China, Iran, Syria, Belarus, etc. that wish to communicate outside their country but have prohibitions against such activities. We need more incentives to add to the Tor project, not fewer.” For answers to many common questions about relay operation and the law, see the EFF Tor Legal FAQ.
End-to-end encryption is under attack in India. The Indian government’s new and dangerous online intermediary rules force messaging applications to track—and be able to identify—the originator of any message, a requirement that is fundamentally incompatible with the privacy and security protections of strong encryption. Three petitions have been filed (Facebook; WhatsApp; Arimbrathodiyil) asking the Indian High Courts (in Delhi and Kerala) to strike down these rules.
The traceability provision—Rule 4(2) in the “Intermediary Guidelines and Digital Media Ethics Code” rules (English version starts at page 19)—was adopted by the Ministry of Electronics and Information Technology earlier this year. The rules require that any large social media intermediary providing messaging services “shall enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.)
The minister has claimed that the rules will “[not] impact the normal functioning of WhatsApp” and said that “the entire debate on whether encryption would be maintained or not is misplaced” because technology companies can still decide to use encryption—so long as they accept the “responsibility to find a technical solution, whether through encryption or otherwise” that permits traceability. WhatsApp strongly disagrees, writing that "traceability breaks end-to-end encryption and would severely undermine the privacy of billions of people who communicate digitally."
The Indian government's assertion is bizarre because the rules compel intermediaries to know information about the content of users’ messages that they currently don’t know and that is currently protected by encryption. This legal mandate would force WhatsApp to change its security model and technology, and the government appears to assume that such a change shouldn’t matter to users and shouldn’t trouble the companies.
That’s wrong. WhatsApp uses a privacy-by-design implementation that protects users’ secure communications by making forwarded messages indistinguishable, from the server’s perspective, from any other message. When a WhatsApp user forwards a message using the arrow, the forwarding is marked on the client side, but the fact that the message has been forwarded is not visible to the WhatsApp server. The traceability mandate would force WhatsApp to change the application so that this information, previously invisible to the server, becomes visible.
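This design can be illustrated with a minimal sketch (hypothetical structures and a toy stand-in cipher, not WhatsApp's actual code or wire format): the forwarded marker travels inside the encrypted payload, so the server sees only routing metadata and opaque ciphertext.

```python
import json
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in for real end-to-end encryption (e.g. the Signal protocol);
    # a simple XOR keystream, NOT secure, for illustration only.
    return bytes(d ^ k for d, k in zip(data, key))

def build_message(key: bytes, text: str, forwarded: bool) -> dict:
    # The "forwarded" marker is part of the *encrypted* payload, so only
    # the sender and recipient can ever read it.
    payload = json.dumps({"text": text, "forwarded": forwarded}).encode()
    return {
        "to": "+00-0000000000",          # routing metadata the server must see
        "ciphertext": toy_encrypt(key, payload),  # opaque to the server
    }

key = secrets.token_bytes(256)
original = build_message(key, "hello", forwarded=False)
forward = build_message(key, "hello", forwarded=True)

# What the server can observe: only metadata, never the forwarded flag.
server_view = lambda m: {k: v for k, v in m.items() if k != "ciphertext"}
assert server_view(original) == server_view(forward)
```

A traceability mandate would force the forwarded/originator information out of the encrypted payload and into the part of the message the server can read.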
The Indian government also defended the rules by noting that legal safeguards restrict the process of gaining access to the identity of a person who originated a message, that such orders can only be issued for national security and serious crime investigations, and on the basis that “it is not any individual who can trace the first originator of the information.” However, messaging services do not know ahead of time which messages will or will not be subject to such orders; as WhatsApp has noted,
there is no way to predict which message a government would want to investigate in the future. In doing so, a government that chooses to mandate traceability is effectively mandating a new form of mass surveillance. To comply, messaging services would have to keep giant databases of every message you send, or add a permanent identity stamp—like a fingerprint—to private messages with friends, family, colleagues, doctors, and businesses. Companies would be collecting more information about their users at a time when people want companies to have less information about them.
India's legal safeguards will not solve the core problem:
The rules represent a technical mandate for companies to re-engineer or re-design their systems for every user, not just for criminal suspects.
The overall design of messaging services must change to comply with the government's demand to identify the originator of a message. Such changes move companies away from privacy-focused engineering and data minimization principles that should characterize secure private messaging apps.
This provision is one of many features of the new rules that pose a threat to expression and privacy online, but it’s drawn particular attention because of the way it comes into collision with end-to-end encryption. WhatsApp previously wrote:
“Traceability” is intended to do the opposite by requiring private messaging services like WhatsApp to keep track of who-said-what and who-shared-what for billions of messages sent every day. Traceability requires messaging services to store information that can be used to ascertain the content of people’s messages, thereby breaking the very guarantees that end-to-end encryption provides. In order to trace even one message, services would have to trace every message.
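WhatsApp's point, that tracing any one message requires fingerprinting every message, can be sketched as follows (a hypothetical toy service, not any real system): since the service cannot know in advance which message a court will ask about, it must record a fingerprint and first sender for all of them at send time.

```python
import hashlib

class TraceableService:
    """Toy messaging service complying with a traceability mandate.
    To trace *any* message later, it must fingerprint *every* message now."""

    def __init__(self):
        self.first_sender = {}  # fingerprint -> originator; grows with every message

    @staticmethod
    def fingerprint(content: str) -> str:
        return hashlib.sha256(content.encode()).hexdigest()

    def send(self, sender: str, content: str) -> None:
        # The service cannot predict which message a government will want to
        # investigate, so it must store a record for all of them.
        self.first_sender.setdefault(self.fingerprint(content), sender)

    def trace_originator(self, content: str) -> str:
        return self.first_sender[self.fingerprint(content)]

svc = TraceableService()
svc.send("alice", "meet at noon")
svc.send("bob", "meet at noon")      # a forward: alice remains the "originator"
svc.send("carol", "unrelated note")

assert svc.trace_originator("meet at noon") == "alice"
assert len(svc.first_sender) == 2    # every distinct message got fingerprinted
```

The mandated database grows with every message ever sent, which is exactly the mass data retention WhatsApp warns about.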
Rule 4(2) applies to WhatsApp, Telegram, Signal, iMessage, or any “significant social media intermediaries” with more than 5 million registered users in India. It can also apply to federated social networks such as Mastodon or Matrix if the government decides these pose a “material risk of harm” to national security (rule 6). Free and open-source software developers are also afraid that they’ll be targeted next by this rule (and other parts of the intermediary rules), including for developing or operating more decentralized services. So Facebook and WhatsApp aren’t the only ones seeking to have the rules struck down; a free software developer named Praveen Arimbrathodiyil, who helps run community social networking services in India, has also sued, citing the burdens and risks of the rules for free and open-source software and not-for-profit communications tools and platforms.
This fight is playing out across the world. EFF has long said that end-to-end encryption, where intermediaries do not know the content of users’ messages, is a vitally important feature for private communications, and has criticized tech companies that don’t offer it or offer it in a watered-down or confusing way. Its end-to-end messaging encryption features are something WhatsApp is doing right—following industry best practices on how to protect users—and the government should not try to take this away.
Virtual worlds are increasingly providing sophisticated, realistic, and often immersive experiences that are the stuff of fantasy. You can enter them by generating an avatar - a representation of the user that could take the form of an animal, a superhero, a historic figure, or some version of yourself or the image you’d like to project. You can often choose to express yourself by selecting how to customize your character. For many, avatar customization is key to a satisfying and immersive game or online experience. Avatars used to be relatively crude, even cartoonish representations, but they are becoming increasingly lifelike, with nuanced facial expressions backed by a wealth of available emotes and actions. Most games and online spaces now offer at least a few options for choosing your avatar, with some providing in-depth tools to modify every aspect of your digital representation.
There is a broad array of personal and business applications for these avatars as well, from digital influencers, celebrities, and customer service representatives to your digital persona in the virtual workplace. Virtual reality and augmented reality promise to take avatars to the next level, allowing the avatar’s movement to mirror the user’s gestures, expressions, and physicality.
The ability to customize how you want to be perceived in a virtual world can be incredibly empowering. It enables embodying rich personas to fit the environment and the circumstances, or adopting a mask to shield your privacy and personal self from what you wish to make public. You might use one persona for gaming, another for a professional setting, and a third for a private space with your friends.
An avatar can help someone remove constraints imposed on them by wider societal biases. For example, trans and gender non-conforming individuals can more accurately reflect their true selves, relieving the effects of gender dysphoria and transphobia, which has shown therapeutic benefits. For people with disabilities, avatars can allow participants to pursue unique activities through which they can meet and interact with others. In some cases, avatars can help avoid harassment. For example, researchers found some women choose male avatars to avoid misogyny in World of Warcraft.
Facebook, which owns Oculus VR and is investing heavily in AR, has highlighted its technical progress in a Facebook Research project called Codec Avatars. The Codec Avatars research project focuses on ultra-realistic avatars, potentially modeled directly on users’ bodies, and on modeling the user’s voice, movements, and likeness, looking to power the ‘future of connection’ with avatars that enable what Facebook calls ‘social presence’ in its VR platform.
Social presence combines the telepresence aspect of a VR experience and the social element of being able to share the experience with other people. In order to deliver what Facebook envisions for an “authentic social connection” in virtual reality, you have to pass the mother test: ‘your mother has to love your avatar before the two of you feel comfortable interacting as you would in real life’, as Yaser Sheikh, Director of Research at Facebook Reality Labs, put it.
While we’d hope your mother would love whatever avatar you make, Facebook seems to mean the Codec Avatars are striving to be indistinguishable from their human counterparts–a “picture-perfect representation of the sender’s likeness,” that has the “unique qualities that make you instantly recognizable,” as captured by a full-body scan, and animated by ego-centric surveillance. While some may prefer exact replicas like these, the project is not yet embracing a future that allows people the freedom to be whoever they want to be online.
By contrast, Epic Games has introduced MetaHumans, which also allows lifelike animation techniques via its Unreal Engine and motion capture, but does not require a copy of the user. Instead, it allows the user the choice to create and control how they appear in virtual worlds.
Facebook’s plan for Codec Avatars is to verify users “through a combination of user authentication, device authentication, and hardware encryption,” and it is “exploring the idea of securing future avatars through an authentic account.” This obsession with authenticated perfect replicas mirrors Facebook’s controversial history of insisting on “real names,” later loosened somewhat to allow “authentic names,” without resolving the inherent problems. Indeed, Facebook’s insistence on tying your Oculus account to your Facebook account (and its authentic name) already brings these policies together, for the worse. If Facebook insists on indistinguishable avatars, tied to a Facebook “authentic” account, in its future of social presence, this will put the names policy on steroids.
Facebook should respect the will of individuals not to disclose their real identities online.
Until the end of next year, Oculus still allows existing users to keep a separate unique VR profile to log in, which does not need to be your Facebook name. With Facebook login, users can still set their name as visible to ‘Only Me’ in Oculus settings, so that at least people on Oculus won’t be able to find you by your Facebook name. But this is a far cry from designing an online identity system that celebrates the power of avatars to enable people to be who they want to be.

Lifelike Avatars and Profiles of Faces, Bodies, and Behavior on the Horizon
A key part of realistic avatars is mimicking the body and facial expressions of the user, derived from collecting your non-verbal and body cues (the way you frown, tilt your head, or move your eyes) and your body structure and motion. Facebook’s Modular Codec Avatar system seeks to make “inferences about what a face should look like” to construct authentic simulations, compared to the original Codec Avatar system, which relied more on direct comparison with a person.
While still a long way from the hyper-realistic Codec Avatar project, Facebook has recently announced a substantial step down that path, rolling out avatars with real-time animated gestures and expressions for Facebook’s virtual reality world, Horizon. These avatars will later be available for other Quest app developers to integrate into their own work.
Facebook's Codec Avatar research suggests that it will eventually require a lot of sensitive information about its users’ faces and body language: both their detailed physical appearance and structure (to recognize you for authentication applications, and to produce photorealistic avatars, including full-body avatars), and their moment-to-moment emotions and behaviors, captured in order to replicate them realistically in real-time in a virtual or augmented social setting.
While this technology is still in development, the inferences coming from these egocentric data collection practices require even stronger human rights protections. Algorithmic tools can leverage the platform’s intimate knowledge of their users, assembled from thousands of seemingly unrelated data points and making inferences drawn from both individual and collective behavior.
Research using even unsophisticated cartoon avatars suggests that avatars can reveal some personality traits of the user. Animating hyper-realistic avatars of natural persons, such as Facebook's Codec Avatars, will require collecting far more personal data. Think of it like walking around strapped to a dubious lie detector that measures your temperature, body responses, and heart rate as you go about your day.
Inferences based on egocentric collection of data about users' emotions, attention, likes, or dislikes give platforms the power to control what your virtual vision sees, what your virtual body looks like, and how your avatar can behave. While wearing your headset, you will see the 3D world through a lens made by those who control the infrastructure.

Realistic Avatars Require Robust Security
Hyper-realistic avatars also raise concerns about “deep fakes”. Right now, deep fakes involving a synthetic video or audio “recording” may be mistaken for a real recording of the people it depicts. The unauthorized use of an avatar could also be confused with the real person it depicts. While any avatar, realistic or not, may be driven by a third party, hyper-realistic avatars, with human-like expressions and gestures, can more easily build trust. Worse, in a dystopian future, realistic avatars of people you know could be animated automatically, for advertising or influencing opinion. For example, imagine an uncannily convincing ad where hyper-realistic avatars of your friends swoon over a product, or where an avatar of your crush tells you how good you’ll look in a new line of clothes. More nefariously, hyper-realistic avatars of familiar people could be used for social engineering, or to draw people down the rabbit hole of conspiracy theories and radicalization.
‘Deep fake’ issues, in which a third party independently makes a realistic fake depiction of a real person, are well covered by existing law. The personal data captured to make ultra-realistic avatars, which is not otherwise readily available to the public, should not be used to act out expressions or interactions that people did not actually consent to present. To protect against this and put users in charge of their experience, they must have strong security measures around the use of their accounts, what data is collected, and how that data is used.
A secure system for authentication does not require a verified match to one's offline self. For some, of course, verification linked to an offline identity may be valuable, but for others, the true value may lie in a way to connect without revealing their identity. Even if a user presents differently from their IRL body, they may still want to develop a consistent reputation and goodwill with their avatar persona, especially if it is used across a range of experiences. This important security and authentication can be provided without requiring a link to an authentic-name account, or verification that the avatar presented matches the offline body.
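One way such consistent-but-pseudonymous authentication could work can be sketched in a few lines. This is a toy illustration, not any vendor's actual system: the exact-match hashing stands in for real on-device fuzzy biometric matching, and all names are invented.

```python
import hashlib
import hmac
import os

class AvatarAccount:
    """Toy sketch: verify that the same person is driving an avatar across
    sessions, without storing a raw biometric or any real-world name.
    Exact-match hashing is a simplification of real biometric matching."""

    def __init__(self, enrollment_template: bytes):
        # Keep only a salted one-way digest of the enrollment template.
        self._salt = os.urandom(16)
        self._digest = hashlib.sha256(self._salt + enrollment_template).digest()

    def verify_driver(self, candidate_template: bytes) -> bool:
        # Constant-time comparison avoids timing side channels.
        candidate = hashlib.sha256(self._salt + candidate_template).digest()
        return hmac.compare_digest(self._digest, candidate)

account = AvatarAccount(b"enrollment-scan")
print(account.verify_driver(b"enrollment-scan"))  # same driver
print(account.verify_driver(b"someone-else"))     # different driver
```

The point of the design is that the service can confirm "same driver as before" while the stored digest, on its own, reveals nothing about who that driver is offline.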
For example, the service could verify that the driver of an avatar is the same person who created it, without simultaneously revealing who the driver is offline. With appropriate privacy controls and data use limitations, a VR/AR device is well positioned to verify the account holder biometrically, and thereby verify a consistent driver, even if that driver is not matched to an offline identity.

Transparency and User Control Are Vital for the Avatars of the Virtual World
In the era of life-like avatars, it is even more important for companies to give users transparency into the algorithms that determine why their avatars behave in specific ways, and to provide strong user controls over the use of inferences.
Likewise, the second principle (“Provide controls that matter”) does not necessarily ensure that you as a user will have controls over everything you think matters. One might debate what falls into the category of things that “matter” enough to warrant controls: the biometric data collected, the inferences generated by the service, or the look of one’s avatar. This is particularly important when so much data can be collected for a life-like avatar, and it raises critical questions about how that data could be used, even while the tech is in its infancy. For example, if the experience requires an avatar designed to reflect your identity, what is at stake inside the experience is your sense of self. The platform won't just control the quality of an experience you observe (like watching a movie); it will control an experience that has your identity and sense of self at its core. This is an unprecedented ability to produce highly tailored forms of psychological manipulation keyed to your behavior in real time.
Without strong user controls, social VR platforms or third-party developers may be tempted to use this data for other purposes, including psychological profiling of users’ emotions, interests, and attitudes, such as detecting nuances of how people feel about particular situations, topics, or other people. It could be used to make emotionally manipulative content that subtly mirrors the appearance or mannerisms of people close to us, perhaps in ways we can’t quite put our fingers on.
Data protection laws, like the GDPR, require that personal data collected for a specific purpose (like making your avatar more emotionally realistic in a VR experience) should not be used for other purposes (like calibrating ads to optimize your emotional reactions to them or mimicking your mannerisms in ads shown to your friends).
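The purpose-limitation principle described above can be pictured as a simple allow-list check at the point of processing. This is a minimal, hypothetical sketch; the field and purpose names are invented, and a real compliance system would be far more involved.

```python
# Toy illustration of GDPR-style purpose limitation: data collected for one
# declared purpose may not be reused for another. All names are hypothetical.
CONSENTED_PURPOSES = {
    "facial_expression_data": {"avatar_animation"},
    "email_address": {"account_recovery"},
}

def may_process(field: str, purpose: str) -> bool:
    """Allow processing only for purposes the user actually consented to."""
    return purpose in CONSENTED_PURPOSES.get(field, set())

print(may_process("facial_expression_data", "avatar_animation"))  # permitted
print(may_process("facial_expression_data", "ad_targeting"))      # not permitted
```

The key property is that the default answer is "no": a purpose never declared to the user is simply absent from the allow-list.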
Facebook’s VR/AR policies for third-party developers rightly prevent them from using Oculus user data for marketing or advertising, among other things, including performing or facilitating surveillance for law enforcement purposes (without a valid court order), attempting to identify a natural person, and combining user data with data from a third party. But the company has not committed to these restrictions, or to allowing strong user controls, for its own uses of data.
Facebook should clarify and expand upon its principles, and confirm that transparency and controls that “matter” cover not only the form and shape of the avatar but also the use or disclosure of the inferences the platform will make about users (their behavior, emotions, personality, etc.), including the processing of personal data running in the background.
We urge Facebook to give users control and put people in charge of their experience. The notion that people must replicate their physical forms online to achieve the “power of connection” fails to recognize that many people wish to connect in a variety of ways, including through different avatars that let them express themselves. For some, their avatar may indeed be a perfect replica of their real-world body, and it is critical for inclusion to allow avatar design options that reflect the diversity of users. But for others, their authentic self is what they’ve designed in their minds or know in their hearts, and what they are finally able to reflect in glorious high resolution in a virtual world.
To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.
To celebrate 30 years of defending online freedom, EFF was proud to welcome NSA whistleblower Edward Snowden for a chat about surveillance, privacy, and the concrete ways we can improve our digital world, as part of our EFF30 Fireside Chat series. EFF Executive Director Cindy Cohn, EFF Director of Engineering for Certbot Alexis Hancock, and EFF Policy Analyst Matthew Guariglia weighed in on the way the internet (and surveillance) actually function, the impact that has on modern culture and activism, and how we’re grappling with the cracks this pandemic has revealed—and widened—in our digital world.
On June 3, we’ll be holding our fourth EFF30 Fireside Chat, on how to free the internet, with net neutrality pioneer Gigi Sohn. EFF co-founder John Perry Barlow once wrote, "We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before." This year marked the 25th anniversary of this audacious essay denouncing centralized authority on the blossoming internet. But modern tech has strayed far from the utopia of individual freedom that 90s netizens envisioned. We'll be discussing corporatization, activism, and the fate of the internet, framed by Barlow's "Declaration of the Independence of Cyberspace," with Gigi, along with EFF Senior Legislative Counsel Ernesto Falcon and EFF Associate Director of Policy and Activism Katharine Trendacosta.
Snowden opened the discussion by explaining the reality that all of our internet usage is made up of a giant mesh of companies and providers. The internet is not magic—it’s other people’s computers: “All of our communications—structurally—are intermediated by other people’s computers and infrastructure…[in the past] all of these lines that you were riding across—the people who ran them were taking notes.” We’ve come a long way from that time when our communications were largely unencrypted, and everything you typed into the Google search box “was visible to everybody else who was on that Starbucks network with you, and your Internet Service Provider, who knew this person who paid for this account searched for this thing on Google….anybody who was between your communications could take notes.”
How Can Tech Protect Us from Surveillance?
In 2013, Snowden came forward with details about the PRISM program, through which the NSA and FBI worked directly with large companies to see what was in individuals' internet communications and activity, making much more public the notion that our digital lives were not safe from spying. This has led to a change in people’s awareness of this exploitation, Snowden said, and myriad solutions have come about to solve parts of what is essentially an ecosystem problem: some technical, some legal, some political, some individual. “Maybe you install a different app. Maybe you stop using Facebook. Maybe you don’t take your phone with you, or start using an encrypted messenger like Signal instead of something like SMS.”
When it comes to legal cases, like EFF’s case against the NSA, the courts are finally starting to respond. Technical solutions, like the expansion of encryption in everyday online usage, are also playing a part, Alexis Hancock, EFF’s Director of Engineering for Certbot, explained. “Just yesterday, I checked on a benchmark that said that 95% of web traffic is encrypted—leaps and bounds since 2013.” In 2015, web browsers started displaying “this site is not secure” messages on unencrypted sites, and that’s where EFF’s Certbot tool steps in. Certbot is “free, open source software that we work on to automatically supply free SSL, or secure, certificates for traffic in transit, automating it for websites everywhere.” This keeps data private in transit, adding a layer of protection over what travels between your request and a website’s server. These are among the things that don’t get talked about much, partly because they’re pieces you don’t see and shouldn’t have to see, but they give people security. “Nobody sells you a car without brakes—nobody should sell you a browser without security.”
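For site operators, putting Certbot to work typically takes two commands. This is an illustrative CLI fragment: the domain names are placeholders, and the exact plugin flag depends on your web server.

```shell
# Obtain and install a free Let's Encrypt certificate for an nginx site.
# "example.com" is a placeholder; run on the web server itself.
sudo certbot --nginx -d example.com -d www.example.com

# Certificates are short-lived by design, so confirm that automatic
# renewal will work before relying on it.
sudo certbot renew --dry-run
```

Certbot also installs a renewal timer, so once the dry run succeeds, certificates generally renew without further operator intervention.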
Balancing the Needs of the Pandemic and the Dangers of Surveillance
We’ve moved the privacy needle forward in many ways since 2013, but in 2020, a global catastrophe could have set us back: the COVID-19 pandemic. As Hancock described it, EFF’s focus for protecting privacy during the pandemic was to track “where technology can and can’t help, and when is technology being presented as a silver bullet for certain issues around the pandemic when people are the center for being able to bring us out of this.”
Our fear was primarily scope creep, she explained: from contact tracing to digital credentials, many of these systems already exist, but we must ask, “what are we actually trying to solve here? Are we actually creating more barriers to healthcare?” Contact tracing, for example, must put privacy first and foremost—because making it trustworthy is key to making it effective.
The Melting Borders Between Corporate, Government, Local, and Federal Surveillance
But the pandemic, unfortunately, isn’t the only nascent danger to our privacy. EFF’s Matthew Guariglia described the merging of both government and corporate surveillance, and federal and local surveillance, that's happening around the country today: “Police make very effective marketers, and a lot of the manufacturers of technology are counting on it….If you are living in the United States today you are likely walking past or carrying around street level surveillance everywhere you go, and this goes double if you live in a concentrated urban setting or you live in an overpoliced community.”
From automated license plate readers to private and public security cameras to ShotSpotter devices that listen for gunshots (but also record cars backfiring and fireworks), this matters now more than ever, as the country reckons with a history of dangerous and inequitable overpolicing: “If a ShotSpotter misfires, and sends armed police to the site of what they think is a shooting, there is likely to be a higher chance of a more violent encounter with police who think they’re going to a shooting.” This is equally true for a variety of these technologies, from automated license plate readers to facial recognition, which police claim are used for leads, but which are too often accepted as fact.
“Should we compile records that are so comprehensive?” asked Snowden about the way these records aren’t only collected, but queried, allowing government and companies to ask for the firehose of data. “We don’t even care what it is, we interrelate it with something else. We saw this license plate show up outside our store at a strip mall and we want to know how much money they have.” This is why the need for legal protections is so important, added Executive Director Cindy Cohn: “The technical tools are not going to get to the place where the phone company doesn’t know where your phone is. But the legal protections can make sure that the company is very limited in what they can do with that information—especially when the government comes knocking.”
After All This, Is Privacy Dead?
All these privacy-invasive regimes may lead some to wonder whether privacy and anonymity are, to put it bluntly, dying. That’s exactly what one audience member asked during the question-and-answer section of the chat. “I don’t think it’s inevitable,” said Guariglia. “There is a looming backlash of people who have had quite enough.” Hancock added that optimism is both realistic and required: “No technology makes you a ghost online—none of it, even the most secure, anonymous-driven tools out there. And I don’t think that it comes down to your own personal burden...There is actually a more collective unit now that are noticing that this burden is not yours to bear...It’s going to take firing on all cylinders, with activism, technology, and legislation. But there are people fighting for you out there. Once you start looking, you’ll find them.”
“So many people care,” Snowden said. “But they feel like they can’t do anything….Does it have to be that way?...Governments live in a permissionless world, but we don’t. Does it have to be that way?” If you’re looking for a lever to pull—look at the presumptions these mass data collection systems make, and what happens if they fail: “They do it because mass surveillance is cheap...could we make these systems unlawful for corporations, and costly [for others]? I think in all cases, the answer is yes.”
Democracy, social movements, our relationships, and our own well-being all require private space to thrive. If you missed this chat, please take an hour to watch it—whether you’re a privacy activist or an ordinary person, it’s critical for the safety of our society that we push back on all forms of surveillance, and protect our ability to communicate, congregate, and coordinate without fear of reprisal. We deeply appreciate Edward Snowden joining us for this EFF30 Fireside Chat and discussing how we can fight back against surveillance, as difficult as it may seem. As Hancock said (yes, quoting the anime The Last Airbender): “If you look for darkness, that’s all you’ll ever see. But if you look for lightness, you will find it.”
Check out additional recaps of EFF's 30th anniversary conversation series, and don't miss our next program where we'll tackle digital access and the open web with Gigi Sohn on June 3, 2021—EFF30 Fireside Chat: Free the Internet.
A year ago today, President Trump issued an Executive Order that deputized federal agencies to retaliate against online social media services on his behalf, a disturbing and unconstitutional attack on internet free expression.
To mark this ignoble anniversary, EFF and the Center for Democracy & Technology are making records from their Freedom of Information Act lawsuit over the Executive Order public. The records show how Trump planned to leverage more than $117 million worth of government online advertising to stop platforms from fact-checking or otherwise moderating his speech.
Although the documents released thus far do not disclose whether government officials cut federal advertising as the Executive Order directed, they do show that the agencies’ massive online advertising budgets could easily be manipulated to coerce private platforms into adopting the president or the government’s preferred political views.
President Trump’s Executive Order was as unconstitutional as it was far-reaching. It directed independent agencies like the FCC to start a rulemaking to undermine legal protections for users’ speech online. It also ordered the Department of Justice to review online advertising spending by all federal agencies to consider whether certain platforms receiving that money were “problematic vehicles for government speech.”
President Biden rescinded the order earlier this month and directed federal agencies to stop working on it. President Biden’s action came after several groups challenging the Executive Order in court called on the president to withdraw the order. (Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in one of those challenges, Rock The Vote v. Biden.) EFF applauds President Biden for revoking President Trump’s illegal order, and we hope to be able to say more soon about what impact the rescission will have on Rock The Vote, which is pending before the U.S. Court of Appeals for the Ninth Circuit.

Trump sought to punish online platforms
Despite President Biden’s rescission, the order remains an unprecedented effort by a sitting president to use the federal government’s spending powers to punish private services for countering President Trump’s lies or expressing different views. One section of the order directed the Office of Management and Budget to collect reports from all federal agencies documenting their online advertising spending. The DOJ, according to the order, was then to review that spending and consider each platform’s “viewpoint-based speech restrictions,” and implicitly, recommend cutting advertising on platforms it determined to be problematic.
EFF and CDT filed their FOIA lawsuit against OMB and the DOJ in September of last year so that the public could understand whether Trump followed through on his threats to cut federal advertising on services he did not like. Here’s what we’ve learned so far.
Documents released by OMB show that federal agencies spent $117,363,000 to advertise on online services during the 2019 fiscal year. The vast majority of the government’s spending went to two companies: Google received $55,364,000 and Facebook received $46,827,000.
In contrast, federal agencies spent $7,745,000 on Twitter in 2019, despite the service being the target of Trump’s ire for appending fact-checking information to his tweets spreading lies about mail-in voting.
The documents also show which agencies reported the most online advertising spending. The Department of Defense spent $36,814,000 in 2019, with the Departments of Health and Human Services and Homeland Security spending $16,649,000 and $12,359,000, respectively. The Peace Corps reported spending $449,000 during the same period.
The documents also show that federal agencies paid online services for a variety of purposes. The FBI spent $199,000 on LinkedIn, likely on recruiting and hiring, with the federal government spending a total of $4,840,000 on the platform. And the Department of Education spent $534,000 on advertising in 2019 as part of campaigns around federal student aid and loan forgiveness.
Stepping back, it’s important to remember that the millions of dollars in federal advertising spent at these platforms gives the government a lot of power. The government has wide leeway to decide where it wants to advertise. But when officials cut, or merely threaten to cut, advertising with a service based on its supposed political views, that is coercive, retaliatory, and unconstitutional.
The government’s potential misuse of its spending powers was one reason EFF joined the Rock The Vote legal team. Federal officials may try to recycle Trump’s tactics in the future to push platforms to relax or change their content moderation policies, so it’s important that courts rule that such ploys are illegal.
It is also why EFF and CDT are pushing for the release of more records from OMB and the DOJ, so the public can fully understand President Trump’s unconstitutional actions. The DOJ, which was charged with reviewing the advertising spending and recommending changes, has only released a few dozen pages of emails thus far. And those communications provide little insight into how far the DOJ went in implementing the order, on top of redacting information the agency cannot withhold under FOIA.
We look forward to publishing more documents in the future so that the public can understand the full extent of Trump’s unconstitutional effort to retaliate against online services.

Related Cases: Rock the Vote v. Trump