When it comes to online services, there are a few very large companies whose gravitational effects can alter the entire tech universe. Their size, power, and diverse levers of control mean that there is no single solution that will put right that which they’ve thrown out of balance. One thing is clear—having such large companies with control over so much of our data is not working for users, not working for privacy or freedom of expression, and it’s blocking the normal flow of competition. These giants need to be prevented from using their tremendous power to just buy up competitors, so that they have to actually compete, and so that new competitors are not incentivized to just be acquired. Above all, these giants need to be pushed to make it easy for users to leave, or to use other tools to interact with their data without leaving entirely.
In recognition of this reality, the House Judiciary Committee has released a number of proposed laws which would rein in the largest players in the tech space in order to make a healthier, more competitive internet ecosystem. We’ll have more in-depth analysis of all of them in the coming weeks, but our initial thoughts focus on the proposal which would make using a service on your own terms, or moving between services, much easier: the ACCESS Act.
The “Augmenting Compatibility and Competition by Enabling Service Switching Act”—or ACCESS Act—helps accomplish a goal we’ve long promoted as central to breaking the hold large tech companies have on our data and our business: interoperability.
Today too many tech companies are “roach motels” where our data enters but can never leave or be brought back under our control. They run services where we only get the features that serve their shareholders’ interests, not our needs. This stymies other innovators, especially those who could move beyond today's surveillance business models. The ACCESS Act creates a solid framework for change.

Privacy and Agency: Making Interoperability Work for Users
These services have vast troves of information about our lives. The ACCESS Act checks abuse of that data by enforcing transparency and consent. The bill mandates that platforms of a certain size and type make it possible for a user to leave that service and go to a new one, taking some or even all their data with them, while still maintaining the ability to socialize with the friends, customers, colleagues and communities who are still using the service. Under the bill, a user can request the data for themselves or, with affirmative consent, have it moved for them.
Interoperability means more data sharing, which can create new risks: we don't want more companies competing to exploit our data. But as we’ve written, careful safeguards on new data flows can ensure that users have the first and final word on what happens to their information. The guiding principle should be knowing and clear consent.
First, sensitive data should only be moved at the direction of the users it pertains to, and companies shouldn’t be able to use interoperability to expand their nonconsensual surveillance. That’s why the bill includes a requirement for affirmative consent before a user’s data can be ported. It also forbids any secondary use or sharing of the data that does get shared—a crucial corollary that will ensure data can’t be collected for one purpose, then sold or used for something else.
Furthermore, the bill requires covered platforms to not make changes to their interoperability interfaces without approval from the Federal Trade Commission (FTC), except in emergencies. That’s designed to prevent Facebook or other large platforms from making sudden changes that pull the rug out from under competitors. But there are times that the FTC cannot act quickly enough to approve changes. In the event of a security vulnerability or similar privacy or security emergency, the ACCESS Act would allow platforms to address the problem without prior FTC approval.
The bill is not perfect. It lacks some clarity about how much control users will have over ongoing data flows between platforms and their competitors, and it should make it 100% clear that “interoperability” can’t be construed to mean “surveillance advertising.” It also depends on an FTC that has enough staff to promote, rather than stymie, innovation in interoperable interfaces. To make sure the bill’s text turns into action, it should also have a private right of action. Private rights of action allow users themselves to sue a company that fails to abide by the law. This means that users themselves can hold companies accountable in the courts, instead of relying on the often overstretched, under-resourced FTC. It’s not that the FTC should not have oversight power, but that the bill would be strengthened by adding another form of oversight.
Put simply: the ACCESS Act needs a private right of action so that those of us stuck inside dominant platforms, or pounding on the door to innovate alongside or in competition with them, are empowered to protect ourselves.
The bill introduced today is a huge step in bringing much-needed competition to online services. While we believe there are things missing, we are glad to see so many problems being addressed.
The California legislature has been handed what might be their easiest job this year, and they are refusing to do it.
Californians far and wide have spent the pandemic either tethered to their high-speed broadband connections (if they’re lucky), or desperately trying to find ways to make their internet ends meet. School children are using the wifi in parking lots, shared from fast food restaurants. Mobile broadband isn’t cutting it, as anyone who’s been outside of a major city and tried to make a video call on their phone can tell you. Experts everywhere insist we need a bold plan that gives communities, organizations, and nonprofits the ability and the funds to build fiber infrastructure that will serve those individuals who aren’t on the radar of the big telecommunications companies.
Take 60 Seconds to Call Your Representatives Today
Luckily, the California legislature has, sitting on their desks, $7 billion to spend on this public broadband infrastructure. This includes $4 billion to construct a statewide, open-access middle-mile network using California’s highway and utility rights of way. It's a plan that would give California—the world’s fifth largest economy, which is heavily dependent on high-speed internet—one of the largest public broadband fiber networks in the country.
This plan needs only a simple majority to pass. But while Californians are mostly captive to the big telecom and cable companies for whatever high-speed investment they’ve decided will be most profitable, the legislature is captive in a different way: Comcast, AT&T, and other telcos are traditionally some of the biggest lobbyists in the country, and their influence is particularly strong in California. We must convince the legislature to pass Governor Newsom’s plan for a long-term, future-proof investment in our communities. One thousand Californians have already reached out to their representatives to demand that they take action. We need everyone—you, your friends, your family, and anyone else you know in California—to double that number. Speak up today before the legislature decides to sit this one out. Inaction could force California to lose federal dollars for the project. Every day we don’t move forward is another day lost. The state should be breaking ground as soon as possible for what will undoubtedly be a years-long infrastructure project.
If you're unable to call, please send an email. If you can, do both — the future of California's high-speed internet depends on it.
In Privacy Without Monopoly: Data Protection and Interoperability, we took a thorough look at the privacy implications of various kinds of interoperability. We examined the potential privacy risks of interoperability mandates, such as those contemplated by 2020’s ACCESS Act (USA), the Digital Services Act and Digital Markets Act (EU), and the recommendations presented in the Competition and Markets Authority report on online markets and digital advertising (UK).
We also looked at the privacy implications of “competitive compatibility” (comcom, AKA adversarial interoperability), where new services are able to interoperate with existing incumbents without their permission, by using reverse-engineering, bots, scraping, and other improvised techniques common to unsanctioned innovation.
Our analysis concluded that while interoperability created new privacy risks (for example, that a new firm might misappropriate user data under cover of helping users move from a dominant service to a new rival), these risks can largely be mitigated with thoughtful regulation and strong enforcement. More importantly, interoperability also had new privacy benefits, both because it made it easier to leave a service with unsuitable privacy policies, and because this created real costs for dominant firms that did not respect their users’ privacy: namely, an easy way for those users to make their displeasure known by leaving the service.
Critics of interoperability (including the dominant firms targeted by interoperability proposals) emphasize the fact that weakening a tech platform’s ability to control its users weakens its power to defend its users.
They’re not wrong, but they’re not complete either. It’s fine for companies to defend their users’ privacy—we should accept nothing less—but the standards for defending user-privacy shouldn’t be set by corporate fiat in a remote boardroom, they should come from democratically accountable law and regulation.
The United States lags in this regard: Americans whose privacy is violated have to rely on patchy (and often absent) state privacy laws. The country needs—and deserves—a strong federal privacy law with a private right of action.
That’s something Europeans actually have. The General Data Protection Regulation (GDPR), a powerful, far-reaching, and comprehensive (if flawed and sometimes frustrating) privacy law came into effect in 2018.
The European Commission’s pending Digital Services Act (DSA) and Digital Markets Act (DMA) both contemplate some degree of interoperability, prompting two questions:
- Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy? And
- Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?
We think the answers are “no” and “no,” respectively. Below, we explain why.

Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy?
Increased interoperability can help to address user lock-in and ultimately create opportunities for services to offer better data protection.
The European Data Protection Supervisor has weighed in on the relationship between the GDPR and the Digital Markets Act (DMA), affirming that interoperability can advance the GDPR’s goals.
Note that the GDPR doesn’t directly mandate interoperability, but rather “data portability,” the ability to take your data from one online service to another. In this regard, the GDPR represents the first two steps of a three-step process for full technological self-determination:
- The right to access your data, and
- The right to take your data somewhere else.
The GDPR’s data portability framework is an important start! Lawmakers correctly identified the potential of data portability to help promote competition of platform services and to reduce the risk of user lock-in by reducing switching costs for users.
The law is clear: platforms have a duty to provide data in a structured, commonly used, and machine-readable format, and users have the right to transmit that data without hindrance from one data controller to another. Where technically feasible, users also have the right to ask the data controller to transmit the data directly to another controller.
Recital 68 of the GDPR explains that data controllers should be encouraged to develop interoperable formats that enable data portability. The WP29, a former official European data protection advisory body, explained that this could be implemented by making application programming interfaces (APIs) available.
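To make the idea of an API-based, machine-readable export concrete, here is a minimal sketch in Python. Everything in it is invented for illustration — the `export_user_data` function, the field names, and the data are not any real platform's portability API; the point is only the shape of a "structured, commonly used and machine-readable" export.

```python
import json

# Hypothetical sketch only: the function and field names below are
# invented, not any real platform's portability API. It illustrates
# the kind of structured, machine-readable export that GDPR Article 20
# and Recital 68 contemplate.

def export_user_data(user_id):
    """Assemble one user's portable data as a JSON-serializable dict."""
    return {
        "format_version": "1.0",
        "user_id": user_id,
        "profile": {"display_name": "Alice", "joined": "2019-04-02"},
        "posts": [
            {"id": 101, "created": "2021-05-01T12:00:00Z", "body": "Hello, world"},
        ],
        "contacts": ["bob@example.com"],
    }

# A receiving service can parse this without any knowledge of the
# exporter's internal storage — which is the point of portability.
payload = json.dumps(export_user_data("alice"), indent=2)
print(payload)
```

Because the format is self-describing JSON rather than a proprietary dump, any rival service can import it with off-the-shelf tools.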
However, the GDPR’s data portability limits and interoperability shortcomings have become more obvious since it came into effect. These shortcomings are exacerbated by lax enforcement. Data portability rights are insufficient to get Europeans the technological self-determination the GDPR seeks to achieve.
The limits the GDPR places on which data you have the right to export, and when you can demand that export, have not served their purpose. They have left users with a right to data portability, but few options about where to port that data to.
Missing from the GDPR is step three:
3. The right to interoperate with the service you just left.
The DMA proposal is a legislative way of filling in that missing third step, creating a “real time data portability” obligation, which is a step toward real interop, of the sort that will allow you to leave a service, but remain in contact with the users who stayed behind. An interop mandate breathes life into the moribund idea of data portability.

Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?
The GDPR is very far-reaching, and European officials are still coming to grips with its implications. It’s conceivable that the Commission could propose a regulation that cannot be reconciled with EU data protection rules. We learned that in 2019, when the EU Parliament adopted the Copyright Directive without removing the controversial and ill-conceived Article 13 (now Article 17). Article 17’s proponents confidently asserted that it would result in mandatory copyright filters for all major online platforms, not realizing that those filters cannot be reconciled with the GDPR.
But we don’t think that’s what’s going on here. Interoperability—both the narrow interop contemplated in the DMA, and more ambitious forms of interop beyond the conservative approach the Commission is taking—is fully compatible with European data protection, both in terms of what Europeans legitimately expect and what the GDPR guarantees.
Indeed, the existence of the GDPR solves the thorniest problem involved in interop and privacy. By establishing the rules for how providers must treat different types of data and when and how consent must be obtained and from whom during the construction and operation of an interoperable service, the GDPR moves hard calls out of the corporate boardroom and into a democratic and accountable realm.
Facebook often asserts that its duty to other users means that it has to block you from bringing some of “your” data with you if you want to leave for a rival service. There is definitely some material on Facebook that is not yours, like private conversations between two or more other people. Even if you could figure out how to access those conversations, we want Facebook to take steps to block your access and prevent you from taking that data elsewhere.
But what about when Facebook asserts that its privacy duties mean it can’t let you bring the replies to your private messages, the comments on your public posts, or the entries in your address book with you to a rival service? These are less clear-cut than the case of other peoples’ private conversations, but blocking you from accessing this data also helps Facebook lock you onto its platform, which is also one of the most surveilled environments in the history of data-collection.
There’s something genuinely perverse about deferring these decisions to the reigning world champions of digital surveillance, especially because an unfavorable ruling about which data you can legitimately take with you when you leave Facebook might leave you stuck on Facebook, without a ready means to address any privacy concerns you have about Facebook’s policies.
This is where the GDPR comes in. Rather than asking whether Facebook thinks you have the right to take certain data with you or to continue accessing that data from a rival platform, the GDPR lets us ask the law which kinds of data connections are legitimate, and when consent from other implicated users is warranted. Regulation can make good, accountable decisions about whether a survey app deserves access to all of the “likes” by all of its users’ friends (Facebook decided it did, and the data ended up in the hands of Cambridge Analytica), or whether a user should be able to download a portable list of their friends to help switch to another service (which Facebook continues to prevent).
The point of an interoperability mandate—either the modest version in the DMA or a more robust version that allows full interop—is to allow alternatives to high-surveillance environments like Facebook to thrive by reducing switching costs. There’s a hard collective action problem of getting all your friends to leave Facebook at the same time as you. If people can leave Facebook but stay in touch with their Facebook friends, they don’t need to wait for everyone else in their social circle to feel the same way. They can leave today.
In a world where platforms—giants, startups, co-ops, nonprofits, tinkerers’ hobbies—all treat the GDPR as the baseline for data-processing, services can differentiate themselves by going beyond the GDPR, sparking a race to the top for user privacy.

Consent, Minimization and Security
We can divide all the data that can be passed from a dominant platform to a new, interoperable rival into several categories. There is data that should not be passed. For example, a private conversation between two or more parties who do not want to leave the service and who have no connection to the new service. There is data that should be passed after a simple request from the user. For example, your own photos that you uploaded, with your own annotations; your own private and public messages, etc. Then there is data generated by others about you, such as ratings. Finally, there is someone else’s personal information contained in a reply to a message you posted.
The last category is tricky, and it turns on the GDPR’s very fulcrum: consent. The GDPR’s rules on data portability clarify that exporting data needs to respect the rights and freedoms of others. Thus, although there is no ban on porting data that does not belong to the requesting user, data from other users shouldn’t be passed on without their explicit consent, or under another GDPR legal basis, and without further safeguards.
That poses a unique challenge for allowing users to take their data with them to other platforms, when that data implicates other users—but it also promises a unique benefit to those other users.
If the data you take with you to another platform implicates other users, the GDPR requires that they consent to it. The GDPR’s rules for this are complex, but also flexible.
For example, say, in the future, that Facebook obtains consent from users to allow their friends to take the comments, annotations, and messages they send to those friends with them to new services. If you quit Facebook and take your data (including your friends’ contributions to it) to a new service, the service doesn’t have to bother all your friends to get their consent again—under the WP Guidelines, so long as the new service uses the data in a way that is consistent with the uses Facebook obtained consent for in the first place, that consent carries over.
But even though the new service doesn’t have to obtain consent from your friends, it does have to notify them within 30 days, so your friends will always know where their data ended up.
And the new platform has all the same GDPR obligations that Facebook has: they must only process data when they have a “lawful basis” to do so; they must practice data minimization; they must maintain the confidentiality and security of the data; and they must be accountable for its use.
None of that prevents a new service from asking your friends for consent when you bring their data along with you from Facebook. A new service might decide to do this just to be sure that they are satisfying the “lawfulness” obligations under the GDPR.
One way to obtain that consent is to incorporate it into Facebook’s own consent “onboarding”—the consent Facebook obtains when each user creates their account. To comply with the GDPR, Facebook already has to obtain consent for a broad range of data-processing activities. If Facebook were legally required to permit interoperability, it could amend its onboarding process to include consent for the additional uses involved in interop.
Of course, the GDPR does not permit far-reaching, speculative consent. There will be cases where no amount of onboarding consent can satisfy either the GDPR or the legitimate privacy expectations of users. In these cases, Facebook can serve as a “consent conduit,” through which users can grant, or decline to grant, consent for their friends to take data with mixed ownership claims to a rival platform.
Such a system would mean that some people who leave Facebook would have to abandon some of the data they’d hoped to take with them—their friends’ contact details, say, or the replies to a thread they started—and it would also mean that users who stayed behind would face a certain amount of administrative burden when their friends tried to leave the service. Facebook might dislike this on the grounds that it “degraded the user experience,” but on the other hand, a flurry of notices from friends and family who are leaving Facebook behind might spur the users who stayed to reconsider that decision and leave as well.
For users pondering whether to allow their friends to take their blended data with them onto a new platform, the GDPR presents a vital assurance: because the GDPR does not permit companies to seek speculative, blanket consent for future activities for new purposes that you haven’t already consented to, and because the companies your friends take your data to have no way of contacting you, they generally cannot lawfully make any further use of that data (except under one of the other narrow bases permitted by the GDPR, for example, to fulfil a “legitimate interest”). Your friends can still access it, but neither they, nor the services they’ve fled to, can process your data beyond the scope of the initial consent to move it to the new context. Once the data and you are separated, there is no way for third parties to obtain the consent they’d need to lawfully repurpose it for new products or services.
Beyond consent, the GDPR binds online services to two other vital obligations: “data minimization” and “data security.” These two requirements act as a further backstop to users whose data travels with their friends to a new platform.
Data minimization means that any user data that lands on a new platform has to be strictly necessary for its users’ purposes (whether or not there might be some commercial reason to retain it). That means that if a Facebook rival imports your comments to its new user’s posts, any irrelevant data that Facebook transmits along with that data (say, your location when you left the comment, or which link brought you to the post), must be discarded. This provides a second layer of protection for users whose friends migrate to new services: not only is their consent required before their blended data travels to the new service, but that service must not retain or process any extraneous information that seeps in along the way.
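A small sketch makes the minimization step concrete. The field names here are invented for illustration (this is not any real import pipeline): the importing service keeps only the fields strictly necessary for its purpose — displaying a comment — and discards extras such as location or referrer.

```python
# Hypothetical sketch of data minimization on import. Field names are
# invented for illustration; the rule is what matters: keep only what
# is strictly necessary for the stated purpose, discard the rest.

NECESSARY_FIELDS = {"author", "created", "body"}

def minimize(record):
    """Drop any fields not strictly necessary for displaying a comment."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

imported = {
    "author": "alice",
    "created": "2021-05-01T12:00:00Z",
    "body": "Nice photo!",
    "location": "52.52,13.40",                  # extraneous: must be discarded
    "referrer_url": "https://example.com/feed",  # extraneous: must be discarded
}

stored = minimize(imported)
print(sorted(stored))  # ['author', 'body', 'created']
```

The discard happens at ingestion, before anything is written to the new service's storage, so the extraneous data never becomes available for later processing.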
The GDPR’s security guarantee, meanwhile, guards against improper handling of the data you consent to let your friends take with them to new services. That means that the data in transit has to be encrypted, and likewise the data at rest, on the rival service’s servers. And no matter that the new service is a startup, it has a regulated, affirmative duty to practice good security across the board, with real liability if it commits a material omission that leads to a breach.
Without interoperability, the monopolistic high-surveillance platforms are likely to enjoy long term, sturdy dominance. The collective action problem represented by getting all the people on Facebook whose company you enjoy to leave at the same time you do means that anyone who leaves Facebook incurs a high switching cost.
Interoperability allows users to depart Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they impact fewer users.
Without interoperability, privacy’s best hope is to gentle Facebook, rendering it biddable and forcing it to abandon its deeply held beliefs in enrichment through nonconsensual surveillance—and to do all of this without the threat of an effective competitor that Facebook users can flee to no matter how badly it treats them.
Interoperability without privacy safeguards is a potential disaster, provoking a competition to see who can extract the most data from users while offering the least benefit in return. Every legislative and regulatory interoperability proposal in the US, the UK, and the EU contains some kind of privacy consideration, but the EU alone has a region-wide, strong privacy regulation that creates a consistent standard for data-protection no matter what measure is being contemplated. Having both components, an interoperability requirement and a comprehensive privacy regulation, is the best way to ensure interoperability leads to competition in desirable activities, not privacy invasions.
The Dartmouth Geisel School of Medicine has ended its months-long dragnet investigation into supposed student cheating, dropping all charges against students and clearing all transcripts of any violations. This affirms what EFF, The Foundation for Individual Rights in Education (FIRE), students, and many others have been saying all along: when educators actively seek out technical evidence of students cheating, whether those are through logs, proctoring apps, or other automated or computer-generated techniques, they must also seek out technical expertise, follow due process, and offer concrete routes of appeal.
The investigation at Dartmouth began when the administration conducted a flawed review of an entire year’s worth of student log data from Canvas, the online learning platform that contains class lectures and other substantive information. After a technical review, EFF determined that the logs easily could have been generated by the automated syncing of course material to devices logged into Canvas but not being used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. In this case, many of the logs related to Canvas content that wasn’t even relevant to the tests being taken, raising serious questions about Dartmouth’s allegations.
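A toy sketch illustrates why such logs can't prove intent. The fields below are invented and are not the actual Canvas log format; the point is that a server-side log records that a resource was fetched, not why the client fetched it.

```python
# Illustrative sketch only: invented fields, not the real Canvas log
# format. A background auto-sync and a deliberate click can leave
# indistinguishable access records.

def log_access(user, resource, reason):
    # The server sees the same HTTP request either way; the client's
    # "reason" for fetching never reaches the log.
    return {"user": user, "resource": resource}

human = log_access("student1", "/files/lecture7.pdf", reason="clicked link")
robot = log_access("student1", "/files/lecture7.pdf", reason="auto-sync")

print(human == robot)  # True: the log alone cannot tell them apart
```

Any inference of intent has to come from evidence outside the log, which is exactly what the Dartmouth investigation lacked.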
It’s unclear how many other schools have combed through Canvas logs for evidence of cheating, but the Dartmouth debacle provides clear evidence that its logging system is not meant to be used—and should not be used—as evidence in such investigations.
Along with FIRE, EFF sent a letter to Dartmouth in March laying out our concerns, including the fact that Canvas' own documentation explicitly states that the data in these logs is not intended to be used "in isolation for auditing or other high-stakes analysis involving examining single users or small samples." According to the latest email sent to the student body from the Dean of the School of Medicine, the allegations have been dropped “upon further review and based on new information received from our learning management system provider.” While Instructure, the company behind Canvas, has not responded to numerous requests we’ve sent asking them to comment on Dartmouth’s use of these logs, we are heartened to hear that it is taking misuses of its system seriously. We urge the company to take a more public stand against these sorts of investigations.

Fighting Disciplinary Technologies
Schools are not the only places where technology is being (mis)used to surveil, punish, or falsely accuse those without recourse. “Disciplinary technologies” are showing up more and more in the areas of our lives where power imbalances are common—in workplaces, in relationships, and in homes. Dartmouth is an example of the way these technologies can exacerbate already existing power dynamics, giving those in power an excuse not to take due process seriously. Students were not only falsely accused, but they were given little to no recourse to defend themselves from what the school saw as incontrovertible evidence against them. It was only after multiple experts demanded the school take a closer look at the evidence that they began to backtrack. What’s worse, only those students who had technical experts available to them had their charges quickly dropped, while those who lacked resources or connections were left with their futures in the balance, raising questions of inequity and preferential treatment.
While we’re pleased that these allegations have been dropped for all students—and pleased that, according to the dean, the school will be reviewing a proposal for open-book exams, which would eliminate the harms caused by online proctoring software—the distress this caused cannot be overstated. Several students expected their careers would be destroyed for cheating when they had not done so; others were told to admit guilt simply because it would be easier on them. Many students complained, some anonymously for fear of reprisal, of the toll these allegations were taking on their mental health. In the midst of the investigation, the school released a dangerous update to its social media policy that silenced students who were speaking out, which appears to still be an active policy. All of this could have been avoided.
We’re working at EFF to craft solutions to the problems created by disciplinary technologies and other tools that put machines in power over ordinary people, and to protect the free speech of those speaking out against their use. It will take technologists, consumers, activists, and changes in the law to course correct—but we believe the fight can be won, and today’s decision at Dartmouth gets us one step closer.
This blog post was written by Kenny Gutierrez, EFF Bridge Fellow.
Recently proposed modifications to the federal Health Insurance Portability and Accountability Act (HIPAA) would invade your most personal and intimate health data. The Office of Civil Rights (OCR), which is part of the U.S. Department of Health and Human Services (HHS), proposes loosening our health privacy protections to address misunderstandings by health professionals about currently permissible disclosures.
EFF recently filed objections to the proposed modifications. The most troubling change would expand the sharing of your health data without your permission by enlarging the definition of “health care operations” to include “case management” and “care coordination,” which is particularly troubling since these broad terms are not defined. Additionally, the modifications seek to lower the standard of disclosure for emergencies. They would also require covered entities to disclose personal health information (PHI), upon patient request, to mobile health applications that HIPAA does not cover. Individually, the changes are troublesome enough. When combined, the impact on the release of PHI, with and without consent, is a threat to patient health and privacy.
Trust in Healthcare is Crucial
The proposed modifications would undermine the trust patients need before they will disclose sensitive and intimate medical information to health professionals. If patients no longer believe their doctors will protect their PHI, they will not disclose it, or may not seek treatment at all. For example, since there is pervasive prejudice and stigma surrounding addiction, an opiate-dependent patient will probably be less likely to seek treatment, or to fully disclose the severity of their condition, if they fear their diagnosis could be shared without their consent. Consequently, the HHS proposal would hinder care coordination and case management. That would increase the cost of healthcare: less preventative care in the short term leads to more treatment, which is significantly more expensive, in the long term. Untreated mental illness costs the nation more than $100 billion annually. Currently, only 2.5 million of the 21.2 million people suffering from mental illness seek treatment.
The current HIPAA privacy rule is flexible enough, counter to the misguided assertions of some health care professionals. It protects patient privacy while allowing disclosure, without patient consent, in critical instances such as for treatment, in an emergency, and when a patient is a threat to themselves or public safety.
So, why does HHS seek to modify an already flexible rule? Two congressional hearings, in 2013 and 2015, revealed significant misunderstanding of HIPAA and its permissive disclosures among medical professionals. As a result, HIPAA is misperceived as rigidly anti-disclosure and mistakenly framed as a “regulatory barrier” or “burden.” Many of the proposed modifications double down on this misunderstanding with privacy deregulation, rather than directly addressing some professionals’ confusion with improved training, education, and guidance.
The HHS Proposals Would Reduce Our Health Privacy
Modifications to HIPAA will cause more problems than solutions. Here is a brief overview of the most troubling modifications:
- The proposed rule would massively expand a covered entity’s (CE) use and disclosure of personal health information (PHI) without patient consent. Specifically, it allows unconsented use and disclosure for “care coordination” and “case management,” without adequately defining these vague and overbroad terms. This expanded exception would swallow the consent requirement for many uses and disclosure decisions. Consequently, Big Data (such as corporate data brokers) would obtain and sell this PHI. That could lead to discrimination in insurance policies, housing, employment, and other critical areas because of pre-existing medical conditions, such as substance abuse, mental health illness, or severe disabilities that carry a stigma.
- HHS seeks to lower the standard of unconsented disclosure from “professional judgment” to “good faith belief.” This would undermine patient trust. Currently, a covered entity may disclose some PHI based on their “professional judgment” that it is in the individual’s best interest. The modification would lower this standard to a “good faith belief,” and apparently shift the burden to the injured individual to prove their doctor’s lack of good faith. Professional judgment is properly narrower: it is objective and grounded in expert standards. “Good faith” is both broader and subjective.
- Currently, PHI may be disclosed in an emergency only to prevent “imminent” harm, a standard that requires real certainty that harm is impending. HHS proposes instead to require only “reasonably foreseeable” harm, which is too broad and permissive. This could lead to a doctor disclosing your PHI because you have a sugar-filled diet, you’re a smoker, or you have unprotected sex. Harm in such cases would not be “imminent,” but it could be “reasonably foreseeable.”
Weaker HIPAA Rules for Phone Health Apps Would Hand Our Data to Brokers
The proposed modifications will likely result in more intimate, sensitive, and highly valuable information being sent to entities not covered by HIPAA, including data brokers.
Most Americans have personal health applications on their phones for health goals such as weight management, stress management, and smoking cessation. However, these apps are not covered by HIPAA privacy protections.
A 2014 Federal Trade Commission study revealed that 12 personal health apps and devices transmitted information to 76 different third parties, and some of the data could be linked back to specific users. In addition, 18 third parties received device-specific identifiers, and 22 received other key health information.
Worse, depending on where the PHI is stored, other apps may grant themselves access to your PHI through their own separate permissions. Such permissions have serious consequences because many apps can access data on one’s device that is unrelated to what the app is supposed to do. In a study of 99 apps, researchers found that free apps included more unnecessary permissions than paid apps.
During the pandemic, we have learned once again the importance of trust in the health care system. Ignoring CDC guidelines, many people have not worn masks or practiced social distancing, which has fueled the spread of the virus. These are symptoms of public distrust of health care professionals. Trust is critical in prevention, diagnosis, and treatment.
The proposed HHS changes to HIPAA’s health privacy rules would undoubtedly lead to increased disclosures of PHI without patient consent, undermining the necessary trust the health care system requires. That’s why EFF opposes these changes and will keep fighting for your health privacy.
Imagine this: a limited liability company (LLC) is formed, for the sole purpose of acquiring patents, including what are likely to be low-quality patents of suspect validity. Patents in hand, the LLC starts approaching high-tech companies and demanding licensing fees. If they don’t get paid, the company will use contingency-fee lawyers and a litigation finance firm to make sure the licensing campaign doesn’t have much in the way of up-front costs. This helps give them leverage to extract settlements from companies that don’t want to pay to defend the matter in court, even if a court might ultimately invalidate the patent if it reached the issue.
That sounds an awful lot like a patent troll: the kind of entity EFF criticizes for using flimsy patents to squeeze money from operating companies rather than making its own products. Unfortunately, this description also applies to a company that has just been formed by a consortium of 15 large research universities.
This patent commercialization company has been secretly under discussion since 2018. In September 2020, it quietly went public, when the University of California Regents authorized making UC Berkeley and UCLA two of its founding members. In January, the DOJ said it wouldn’t challenge the program on antitrust grounds.
It’s good news when universities share technology with the private sector, and when startup companies get formed based on university research. That’s part of why so much university research is publicly funded. But there’s not much evidence that university patenting helps technology reach the public, and there’s a growing body of evidence that patents hinder it. Patents in this context are legal tools that allow someone to monopolize publicly-funded research and capture its promise for a private end.
While larger tech companies can absorb the cost of either litigating or paying off the patent assertion entity, smaller innovators will face a proportionately much larger burden. That means that the existence of this licensing entity could harm innovation and competition. When taxpayers fund research, the fruits of the research should be available for all.
With 15 universities now forming a consortium to license electronics and software patents, it’s going to be a mess for innovators and lead to worse, more expensive products.

Low-Quality Patents By The Bundle
Despite the explosion in university patenting and the growth of technology transfer offices (essentially university patent offices), the great majority of universities lose money on their patents. A 2013 Brookings Institution study showed that 84% of universities didn’t make enough money from their patents to cover the related legal costs and the staffing of their tech transfer office. Just a tiny slice of universities earn the majority of patent-licensing revenue, often from a few blockbuster pharmaceutical or biotech inventions. As many as 95% of university patents do not get licensed at all.
This new university patent licensing company won’t be getting any of the small number of impressive revenue-producing patents. The proposal sent to the UC Board of Regents explains that the LLC’s goal will be to get payment for patents that “have not been successfully licensed via a bilateral ‘one patent, one license’ transaction.” The universities’ proposal is to start by licensing in three areas: autonomous vehicles, “Internet of Things,” and Big Data.
In other words, they’ll be demanding licensing fees over lots and lots of software patents. By and large, software patents are the lowest quality patents, and their rise has coincided with the rise of large-scale patent trolling.
The university LLC won’t engage in the type of patent licensing that most actual university spinoffs want: an exclusive license that gives the startup a product or service no one else has. Rather, “the LLC will focus on non-exclusive sublicenses.” In other words, it will use the threat of litigation to try to get every competitor in a particular industry to pay for the same patents.
This is the same model pursued by the notorious Intellectual Ventures, a large patent troll company that convinced 61 different universities to contribute at least 470 different patents to its patent pool in an attempt to earn money from patents.

What about the Public Interest?
The lawyers and bureaucrats promoting the UC patent licensing scheme know how bad this looks. Their plan is to use patents as weapons, not tools for innovation—exactly the method used by patent trolls. In the “Pros and Cons” section of the memo sent to the UC Regents, the biggest “Con” is that the University of California “may incur negative publicity, e.g., allegations may arise that the LLC’s activities are tantamount to a patent troll.” That’s why the memo seeks to reassure the Regents that “it is... the expectation that no enforcement action will be undertaken against startups or small business firms.” This apparently nonbinding “expectation” is small comfort.
The goal of the patent-based LLC doesn’t seem to be to share knowledge. If the universities wanted to do that, they could do it right now. They could do it for free, or do it for a contracted payment—no patents required.
The real goal seems to be finding alleged infringers, accusing them, and raising money. The targets will know that they’re not being offered an opportunity—they’ll be under attack. That’s why the lawyers working with UC have promised the Regents that when it comes time to launch lawsuits against one of the “pre-determined targets,” they will steer clear of small businesses.
The university LLC isn’t going to license their best patents. Rather, the UC Regents memo admits that they’re planning to license the worst of them—technologies that have not been successfully licensed via a “one patent, one license” transaction by either UCLA or UC Berkeley.
To be clear, universities aren’t patent trolls. Universities are centers for teaching, research, and community. But that broader social mission is exactly why universities shouldn’t go off and form a patent-holding company that is designed to operate similarly to a patent troll.
Patents aren’t needed to share knowledge, and dealing with them has been a net loss for U.S. universities. Universities need to re-think their tech transfer offices more broadly. In the meantime, the UC Regents should withdraw from this licensing deal as soon as possible. Other universities should consider doing the same. The people who will benefit the most from this aren’t the public or even the universities, but the lawyers. For the public interest and innovation, having the nation’s best universities supply a patent-trolling operation is a disaster in the making.
The fifteen members of the University Technology Licensing Program are expected to be:
- Brown University
- California Institute of Technology (Caltech)
- Columbia University
- Cornell University
- Harvard University
- Northwestern University
- Princeton University
- State University of New York at Binghamton
- University of California, Berkeley
- University of California, Los Angeles
- University of Illinois
- University of Michigan
- University of Pennsylvania
- University of Southern California
- Yale University
As the world stays home to slow the spread of COVID-19, communities are rapidly transitioning to digital meeting spaces. This highlights a trend EFF has tracked for years: discussions in virtual spaces shape and reflect societal freedoms, and censorship online replicates repression offline. As most of us spend increasing amounts of time in digital spaces, the impact of censorship on individuals around the world is acute.
Tracking Global Online Censorship is a new project to record and combat international speech restrictions, especially where censorship policies are exported from Europe and the United States to the rest of the world. Headed by EFF Director for International Freedom of Expression Jillian York, the project will seek accountability for powerful online censors—in particular, social media platforms such as Facebook and Google—and hold them to just, inclusive standards of expressive discourse, transparency, and due process in a way that protects marginalized voices, dissent, and disparate communities.
“Social media companies make mistakes at scale that catch a range of vital expression in their content moderation net. And as companies grapple with moderating new types of content during a pandemic, these error rates will have new, dangerous consequences,” said Jillian York. “Misapplication of content moderation systems results in the systemic silencing of marginalized communities. It is vital that we protect the free flow of information online and ensure that platforms provide users with transparency and a path to remedy.”
Support for Tracking Global Online Censorship is provided by the Swedish Postcode Foundation (Svenska Postkodstiftelsen). Established in 2003, the Swedish Postcode Foundation receives part of the Swedish Postcode Lottery’s surplus, which it then uses to provide financial support to non-governmental organizations creating positive changes through concrete efforts. The Foundation’s goal is to create a better world through projects that challenge, inspire, and promote change.
“Social media is a huge part of our daily life and a primary source of information. Social media companies enjoy an unprecedented power and control and the lack of transparency that these companies exercise does not run parallel to the vision that these same companies were established for. It is time to question, create awareness, and change this. We are therefore proud to support the Electronic Frontier Foundation in their work to do so,” says Marie Dahllöf, Secretary General of the Swedish Postcode Foundation.
We are at a pivotal moment for free expression. A dizzying array of actors have recognized the current challenges posed by intermediary corporations in an increasingly global world, but a large number of solutions seek to restrict—rather than promote—the free exchange of ideas. At the same time, as COVID-19 results in greater isolation, online expression has become more important than ever, and the impact of censorship greater. The Tracking Global Online Censorship project will draw attention to the myriad issues surrounding online speech, develop new and existing coalitions to strengthen the effort, and offer policy solutions that protect freedom of expression. In the long term, our hope is for corporations to stop chilling expression, to promote free access to time-sensitive news, foster engagement, and to usher in a new era of online expression in which marginalized communities will be more strongly represented within democratic society.
This week, EFF joined with several prominent right-to-repair groups to file an amicus brief in the United States District Court of Massachusetts defending the state’s recent right-to-repair law. This law, which gives users and independent repair shops access to critical information about the cars they drive and service, passed by ballot initiative with an overwhelming 74.9% majority.
Almost immediately, automakers asked to delay the law. In November, the Alliance for Automotive Innovation, a group that includes Honda, Ford, General Motors, Toyota, and other major carmakers, sued the state over the law. The suit claims that allowing people to have access to the information generated by their own cars poses serious security risks.
This argument is nonsense, and we have no problem joining our fellow repair advocates—iFixit, The Repair Association, US PIRG, SecuRepairs.org, and Founder/Director of the Brooklyn Law Incubator and Policy Clinic Professor Jonathan Askin—in saying so.

Access Is Not a Threat
The Massachusetts law requires vehicles with a telematics platform—software that collects and transmits diagnostic information about your car—to install an open data platform. The Alliance for Automotive Innovation argues that the law makes it “impossible” to comply with both the state’s data access rules and federal standards.
Nonsense. Companies in many industries must balance data access and cybersecurity rules, including for electronic health records, credit reporting, and telephone call records. In all cases, regulators have recognized the importance of giving consumers access to their own information as well as the need to protect even the most sensitive information.
In fact, in cases such as the Equifax breach, consumer access to information was key to fighting fraud, the main consequence of the data breach. Locking consumers out of accessing their own information does nothing to decrease cybersecurity risks.

Secrecy Is Not Security
Automakers are also arguing that restricting access to telematics data is necessary if carmakers are to protect against malicious intrusions.
Cybersecurity experts strongly disagree. “Security through obscurity”—systems that rely primarily on secrecy of certain information to prevent illicit access or use—simply does not work. It offers no real deterrent to would-be thieves, and it can give engineers a false sense of safety that can stop them from putting real protections in place.
Furthermore, there is no evidence that expanding access to telematics data would change much about the security of information. In fact, independent repair shops aren't any more or less likely than authorized shops to leak or misuse data, according to a recent report from the Federal Trade Commission. This should not be accepted as an excuse for carmakers to further restrict competition in the repair market.

The Right to Repair Enhances Consumer Protection and Competition
Throughout the debate over the Massachusetts ballot initiative, the automotive industry has resorted to scare tactics to stop this law. But the people of Massachusetts didn’t fall for the industry’s version of reality, and we urge the court not to either.
The right to repair gives consumers more control over the things they own. It also supports a healthier marketplace by allowing smaller and independent repair shops to offer their services, competition that lowers prices and raises quality.
Time and time again, people have made it clear that they want the right to repair their cars. They’ve made that clear at the ballot box, as in Massachusetts, as well as in statehouses across the country.
That’s why EFF continues to stand behind the right to repair: If you bought it, you own it. It’s your right to fix it yourself or to take it to the repair shop of your choosing. Manufacturers want the benefits that come with locking consumers into a relationship with their companies long after a sale. But their efforts to stop the right to repair stand against a healthy marketplace, consumer protection, and common sense.
EFF and FSFP to Court: When Flawed Electronic Voting Systems Disenfranchise Voters, They Should Be Able to Challenge That with Access to the Courts
Atlanta, Georgia—The Electronic Frontier Foundation (EFF) and Free Speech For People (FSFP) urged a federal appeals court today to hold that a group of Georgia voters, and the organization that supports them, have standing to sue the Georgia Secretary of State over the implementation of defective voting systems that they say deprive them of their right to vote and have their votes counted.
EFF and FSFP filed an amicus brief siding with the plaintiffs in Curling v. Raffensperger to defend Americans’ right to challenge in court any policy or action that disenfranchises voters.
The voters in the Curling lawsuit, originally filed in 2017, are seeking to block, or otherwise require protective measures for, Georgia’s new electronic voting system, which has been found to have flaws that could block or deter some voters from exercising their right to vote and cause some votes to not be counted.
After reviewing a tremendous amount of evidence and testimony, a federal judge found that problems with the system’s scanners violate Georgians’ fundamental right to vote, and flaws in electronic pollbooks impose a severe burden on the rights of voters. The court ordered the state to take specific steps to fix the problems.
Lawyers for the Georgia Secretary of State’s office are appealing the orders and seeking to have the case thrown out. They argue that the voters lack standing to sue because they can’t show that they would be personally and individually harmed by the voting system and are merely speculating about potential harms to their voting rights. The Secretary of State went so far as to say the plaintiffs are, at best, “bystanders” making merely a general grievance about alleged harms that is not sufficient for standing.
In a brief filed in the U.S. Court of Appeals for the Eleventh Circuit, EFF and FSFP said the Supreme Court has long recognized that the right to vote is personal and individual. Directly depriving people of the right to vote is a concrete and particularized injury, and the Curling plaintiffs showed how the system’s flaws both blocked voters from voting and prevented scanned ballots from being counted.
“The plaintiffs in this case are seeking to vindicate their own rights to vote and have their votes counted,” said EFF Executive Director Cindy Cohn. “The fact that many other people would be harmed in a similar way doesn’t change that or negate the fact that the state’s choice of voting system can disenfranchise the voters in this case.”
EFF urged the court to look at the Ninth Circuit Court decision in EFF’s landmark case Jewel v. NSA, which challenged dragnet government surveillance of Americans’ phone records. The court found that AT&T customers alleging the NSA’s spying program violated their individual rights to privacy had standing to sue the government, despite the fact that the program also impacted millions of other people, reversing a lower court’s ruling that their assertions of harm were just generalized grievances.
“When government or state policies cause personal harm, that meets the standard for standing even if many others are subject to the same harms,” said Houston Davidson, EFF public interest legal fellow.
EFF and FSFP also pushed back on Georgia’s attempt to minimize the problems with its systems as mere “glitches,” akin to a snowstorm or traffic jam on Election Day that do not require deep court review. They noted that the problems proven in the case are not minor, but serious, preventable, and fundamental flaws that place unacceptable burdens on voters and jeopardize the accurate counting of their votes. Dismissing these problems as minor, with trivial consequences for voters, could make it easier for Georgia to convince the court that it should not intervene.
Don’t fall for it, EFF and FSFP urged the court.
“It’s outrageous and wrong for Georgia to try to dismiss the flaws in the voting system as ‘glitches,’” said Davidson. “Technology problems in Georgia’s electronic pollbooks, scanners, and overall security are systematic and predictable. These issues jeopardize voters’ rights to cast their votes and have them counted. We hope the court recognizes the flaws for what they are and confirms the plaintiffs’ rights to hold the state accountable.”
For the EFF/FSFP brief:
For more on election security:
“Black lives matter on the streets. Black lives matter on the internet.” A year ago, EFF’s Executive Director, Cindy Cohn, shared these words in EFF's statement about the police killings of Breonna Taylor and George Floyd. Cindy spoke for all of us in committing EFF to redouble its efforts to support the movement for Black lives. She promised we would continue providing guides and resources for protesters and journalists on the front lines; support our allies as they navigate the complexities of technology and the law; and resist surveillance and other high-tech abuses while protecting the rights to organize, assemble, and speak securely and freely.
Like many of you, the anniversary of George Floyd's murder has inspired us to reflect on these commitments and the work of so many courageous people who stood up to demand justice. Our world has been irrevocably changed. While there is still an immeasurably long way to go toward becoming a truly just society, EFF is inspired by this leaderful movement and humbled as we reflect on the ways in which we have been able to support its critical work.
EFF believes that people engaged in the Black-led movement against police violence deserve to hold those in power accountable and inspire others through the act of protest, without fear of police surveillance of our faces, bodies, electronic devices, and other digital assets. So, as protests began to spread throughout the nation, we worked quickly to publish a guide to cell phone surveillance at protests, including steps protesters can take to protect themselves.
We also worked with the National Lawyers Guild (NLG) to develop a guide to observing visible, and invisible, surveillance at protests—in video and blog form. The published guide and accompanying training materials were made available to participants in the NLG’s Legal Observer program. The 25-minute videos—available in English and Spanish—explain how protesters and legal observers can identify various police surveillance technologies, like body-worn cameras, drones, and automated license plate readers. Knowing what technologies the police use at a protest can help defense attorneys understand what types of evidence police agencies may hold, find exculpatory evidence, and potentially provide avenues for discovery in litigation to enforce police accountability.
We also significantly updated our Surveillance Self-Defense guide to attending protests. We elaborated on our guidance on documenting protests, in order to minimize the risk of exposing other protesters to harmful action by law enforcement or vigilantes; gave practical tips for maintaining anonymity and physical safety in transit to and at protests; and recommended options for anonymizing images and scrubbing metadata. Documenting police brutality during protest is necessary. Our aim is to provide options to mitigate risk when fighting for a better world.

Protecting the Right to Record the Police
Using our phones to record on-duty police action is a powerful way to expose and end police brutality and racism. In the words of Darnella Frazier: "My video didn't save George Floyd, but it put his murderer away and off the streets." Many have followed in her courageous footsteps. For example, Caron Nazario used his phone to film excessive police force against him during a traffic stop. Likewise, countless protesters against police abuse have used their phones to document police abuse against other protesters. As demonstrations heated up last spring, EFF published advice on how to safely and legally record police.
EFF also has filed many amicus briefs in support of your right to record on-duty police. Earlier this year, one of these cases expanded First Amendment protection of this vital tool for social change. Unfortunately, another court proceeded to dodge the issue by hiding under "qualified immunity," which is one reason EFF calls on Congress to repeal this dangerous doctrine. Fortunately, six federal appellate courts have squarely vindicated your right to film police. We'll keep fighting until every court does so.

Revealing Police Surveillance of Protesters
As we learned after Occupy Wall Street, the #NoDAPL movement, and the 2014-2015 Movement for Black Lives uprisings, sometimes it takes years to learn about all the police surveillance measures used against protest movements. EFF has helped expose the local, state, federal, and private surveillance that the government unleashed on activists, organizers, and protestors during last summer’s Black-led protests against police violence.
In July 2020, public records requests we sent to the semi-public Union Square Business Improvement District (USBID) in San Francisco revealed that the USBID collaborated with the San Francisco Police Department (SFPD) to spy on protesters. Specifically, they gave the SFPD a large “data dump” of footage (USBID’s phrase). They also granted police live access to their cameras for a week in order to surveil protests.
In February 2021, public records we obtained from the Los Angeles Police Department (LAPD) revealed that LAPD detectives had requested footage of protests from residents’ Ring surveillance doorbell cameras. The requests, from detective squads allegedly investigating illegal activity in proximity to the First Amendment-protected protest, sought an undisclosed number of hours of footage. The LAPD’s use of Ring doorbell cameras for political surveillance, and the SFPD’s use of USBID cameras for the same purpose, demonstrate how police are increasingly reliant on non-city and privately-owned, highly-networked security cameras, thus blurring the lines between private and public surveillance.

Enforcing Legal Limits on Police Spying
In October 2020, EFF and the ACLU of Northern California filed a lawsuit against the City of San Francisco regarding its illegal video surveillance of protestors against police violence and racism, revealed through our public records requests discussed above. SFPD's real-time monitoring of dissidents violated the City's Surveillance Technology Ordinance, enacted in 2019, which bars city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, unless they first obtain approval from the Board of Supervisors following a public process with ample opportunity for community members to make their voices heard.
The lawsuit was filed on behalf of three activists of color who participated in and organized protests against police violence in May and June of 2020. They seek a court order requiring San Francisco and its police to stop using surveillance technologies in violation of the Ordinance.

Helping Communities Say “No” to Surveillance Technology
Around the country, EFF is working with local activists to ban government use of face recognition technology—a particularly pernicious form of biometric surveillance. Since 2019, when San Francisco became the first city to adopt such a ban, more than a dozen communities across the country have followed San Francisco's lead. In each city, residents stood up to say “no,” and their elected representatives answered that call. In the weeks and months following the nationwide protests against police violence, we continued to work closely with our fellow Electronic Frontier Alliance members, local ACLU chapters, and other dedicated organizers to support new bans on government face surveillance across the United States, including in Boston, MA, Portland, OR, Minneapolis, MN, and Kings County, WA.
Last year’s protests for police accountability made a big difference in New York City, where we actively supported the work of local advocates for three years to pass a surveillance transparency ordinance. The city’s long-overdue POST Act was passed as part of a three-bill package that many had considered longshots before the protests. However, amid calls to defund the police, many of the bill's detractors, including New York City Mayor Bill de Blasio, came to see the measure as appropriate and balanced.
EFF also aided our allies in St. Louis and Baltimore, who put the brakes on a panopticon-like aerial surveillance system, developed by a vendor ominously named Persistent Surveillance Systems. The spy plane program first invaded the privacy of Baltimore residents in the wake of the in-custody killing of Freddie Gray by police. EFF submitted a friend-of-the-court brief in a federal civil rights lawsuit, filed by the ACLU, challenging Baltimore's aerial surveillance program. We were joined by the Brennan Center for Justice, the Electronic Privacy Information Center, FreedomWorks, the National Association of Criminal Defense Lawyers, and the Rutherford Institute. In St. Louis, EFF and local advocates—including the ACLU of Missouri and Electronic Frontier Alliance member Privacy Watch STL—worked to educate lawmakers and their constituents about the dangers and unconstitutionality of a bill that would have forced the City to enter into a contract to replicate the Baltimore spying program over St. Louis.
Protesters compelled companies around the country to reconcile their relationship to a deadly system of policing with their press releases in support of Black lives. Some companies heeded the calls from activists to stop their sale of face recognition technology to police departments. In June 2020, IBM, Microsoft, and Amazon paused these sales. Amazon said its pause would continue until such time as the government could "place stronger regulations to govern the ethical use of facial recognition."
This was, in many ways, an admission of guilt: companies recognized how harmful face recognition is in the hands of police departments. One year later, the regulatory landscape at the federal level has hardly moved. Following increased pressure by a coalition of civil rights and racial justice organizations, Amazon recently announced it was indefinitely extending its moratorium on selling Rekognition, its face recognition product, to police.
These are significant victories for activists, but the fight is not over. With companies like Clearview AI continuing to sell their face surveillance products to police, we still need a federal ban on government use of face recognition.
The Fight Is Far From Over
Throughout the last year of historic protests for Black lives, it has been more apparent than ever that longstanding EFF concerns, such as law enforcement surveillance and freedom of expression, are part of our nation’s long-needed reckoning with racial injustice.
EFF will continue to stand with our neighbors, communities mourning the victims of police homicide, and the Black-led movement against police violence. We stand with the protesters demanding true and lasting justice. We stand with the journalists facing arrest and other forms of violence for exposing these atrocities. And we will stand with all those using their cameras, phones, and other digital tools to lift up the voices of the survivors, those we’ve lost, and all who demand a truly safe and just future.
Related Cases: Williams v. San Francisco
Civil Society Groups Seek More Time to Review, Comment on Rushed Global Treaty for Intrusive Cross Border Police Powers
Electronic Frontier Foundation (EFF), European Digital Rights (EDRi), and 40 other civil society organizations urged the Council of Europe’s Parliamentary Assembly and Committee of Ministers to allow more time to provide much-needed analysis and feedback on the flawed cross-border police surveillance treaty its cybercrime committee rushed to approve without adequate privacy safeguards.
Digital and human rights groups were largely sidelined during the drafting of the Second Additional Protocol to the Budapest Convention, an international treaty that will establish global procedures for law enforcement in one country to access personal user data from technology companies in other countries. In 2017, as work on the police powers treaty began, the CoE Cybercrime Committee (T-CY), which oversees the Budapest Convention, adopted internal rules that narrowed the range of participants in the drafting of the new Protocol.
The process has been largely opaque, led by public safety and law enforcement officials. And T-CY’s periodic consultations with civil society and the public have been criticized for their lack of detail, their short response timelines, and the lack of insight into countries’ deliberations on these issues. The T-CY rushed approval of the text on May 28th, signing off on provisions that place few limitations on, and provide little oversight of, police access to sensitive user data held by Internet companies around the world.
The Protocol now heads to the Council of Europe Parliamentary Assembly (PACE) Committee on Legal Affairs and Human Rights, which can recommend further amendments. We hope the PACE will hear civil society’s privacy concerns and issue an opinion addressing the lack of adequate data protection safeguards.
In a letter, dated March 31st, to PACE President Rik Daems and Chair of the Committee of Ministers Péter Szijjártó, digital and human rights groups said the treaty will likely be used extensively, with far-reaching implications on the security and privacy of people everywhere. It is imperative that fundamental rights guaranteed in the European Convention on Human Rights and in other agreements are not sidestepped in favor of law enforcement access to user data that is free of judicial oversight and strong privacy protections. The CoE’s plan is to finalize the Protocol's adoption by November and begin accepting signatures from countries sometime before 2022.
“We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement,” EFF and its allies said in the letter. “The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments.”
In 2018, EFF, along with 93 civil society organizations from across the globe, asked the T-CY to invite civil society as experts to the drafting plenary meetings, as is customary in other Council of Europe committee sessions. The goal was for the experts to listen to Member States’ opinions and build on those discussions. But we could not work toward this goal because we were not invited to observe the drafting process. While EFF has participated in every public consultation of the T-CY process since our 2018 coalition letter, the level of participation allowed has failed to meet meaningful multi-stakeholder principles of transparency, inclusion, and accountability. As Tamir Israel (CIPPIC) and Katitza Rodriguez (EFF) pointed out in their analysis of the Protocol:
With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.
The full text of the letter:
Re: Ensuring Meaningful Consultation in Cybercrime Negotiations
We, the undersigned individuals and organizations, write to ask for a meaningful opportunity to give the final draft text of the proposed second additional protocol to Convention 185, the Budapest Cybercrime Convention, the full and detailed consideration which it deserves. We specifically ask that you provide external stakeholders further opportunity to comment on the significant changes introduced to the text on the eve of the final consultation round ending on 6th May, 2021.
The Second Additional Protocol aims to standardise cross-border access by law enforcement authorities to electronic personal data. While competing initiatives are also underway at the United Nations and the OECD, the draft Protocol has the potential to become the global standard for such cross-border access, not least because of the large number of states which have already ratified the principal Convention. In these circumstances, it is imperative that the Protocol should lay down adequate standards for the protection of fundamental rights.
Furthermore, the initiative comes at a time when even routine criminal investigations increasingly include cross-border investigative elements and, in consequence, the protocol is likely to be used widely. The protocol therefore assumes great significance in setting international standards, and is likely to be used extensively, with far-reaching implications for privacy and human rights around the world. It is important that its terms are carefully considered and ensure a proportionate balance between the objective of securing or recovering data for the purposes of law enforcement and the protection of fundamental rights guaranteed in the European Convention on Human Rights and in other relevant national and international instruments.
In light of the importance of this initiative, many of us have been following this process closely and have participated actively, including at the Octopus Conference in Strasbourg in November, 2019 and the most recent and final consultation round which ended on 6th May, 2021.
Although many of us were able to engage meaningfully with the text as it stood in past consultation rounds, it is significant that these earlier iterations of the text were incomplete and lacked provisions to protect the privacy of personal data. In the event, the complete text of the draft Protocol was not publicly available before 12th April, 2021. The complete draft text introduces a number of significant alterations, most notably the inclusion of Article 14, which added for the first time proposed minimum standards for privacy and data protection. While external stakeholders were previously notified that these provisions were under active consideration and would be published in due course, the publication of the revised draft on 12th April offered the first opportunity to examine these provisions and consider other elements of the Protocol in the full light of these promised protections.
We were particularly pleased to see the addition of Article 14, and welcome its important underlying intent—to balance law enforcement objectives with fundamental rights. However, the manner in which this is done is, of necessity, complex and intricate, and, even on a cursory preliminary examination, it is apparent that there are elements of the article which require careful and thoughtful scrutiny, in the light of which they might be capable of improvement.
As a number of stakeholders have noted, the latest (and final) consultation window was too short. It is essential that adequate time be afforded to allow a meaningful analysis of this provision and that all interested parties be given a proper chance to comment. We believe that such continued engagement can serve only to improve the text.
The introduction of Article 14 is particularly detailed and transformative in its impact on the entirety of the draft Protocol. Keeping in mind the multiple national systems potentially impacted by the draft Protocol, providing meaningful feedback on this long anticipated set of safeguards within the comment window has proven extremely difficult for civil society groups, data protection authorities and a wide range of other concerned experts.
Complicating our analysis further are gaps in the Explanatory Report accompanying the draft Protocol. We acknowledge that the Explanatory Report might continue to evolve, even after the Protocol itself is finalised, but the absence of elaboration on a pivotal provision such as Article 14 poses challenges to our understanding of its implications and our resulting ability meaningfully to engage in this important treaty process.
We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement. The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments. Misalignments between Article 14 and existing legal frameworks on data protection such as Convention 108/108+ similarly demand careful scrutiny so that their implications are fully understood.
In these circumstances, we anticipate that the Council will wish to accord the highest priority to ensuring that fundamental rights are adequately safeguarded and that the consultation process is sufficiently robust to instill public confidence in the Protocol across the myriad jurisdictions which are to consider its adoption. The Council will, of course, appreciate that these objectives cannot be achieved without meaningful stakeholder input.
We are anxious to assist the Council in this process. In that regard, constructive stakeholder engagement requires a proper opportunity fully to assess the draft protocol in its entirety, including the many and extensive changes introduced in April 2021. We anticipate that the Council will share this concern, and to that end we respectfully suggest that the proposed text (inclusive of a completed explanatory report) be widely disseminated and that a minimum period of 45 days be set aside for interested stakeholders to submit comments.
We do realise that the T-CY Committee had hoped for an imminent conclusion to the drafting process. That said, adding a few months to a treaty process that has already spanned several years of internal drafting is both necessary and proportionate, particularly when the benefits of doing so will include improved public accountability and legitimacy, a more effective framework for balancing law enforcement objectives with fundamental rights, and a finalised text that reflects the considered input of civil society.
We very much look forward to continuing our engagement with the Council both on this and on future matters.
With best regards,
- Electronic Frontier Foundation (international)
- European Digital Rights (European Union)
- The Council of Bars and Law Societies of Europe (CCBE) (European Union)
- Access Now (International)
- ARTICLE19 (Global)
- ARTICLE19 Brazil and South America
- Association for Progressive Communications (APC)
- Association of Technology, Education, Development, Research and Communication - TEDIC (Paraguay)
- Asociación Colombiana de Usuarios de Internet (Colombia)
- Asociación por los Derechos Civiles (ADC) (Argentina)
- British Columbia Civil Liberties Association (Canada)
- Chaos Computer Club e.V. (Germany)
- Content Development & Intellectual Property (CODE-IP) Trust (Kenya)
- Dataskydd.net (Sweden)
- Derechos Digitales (Latinoamérica)
- Digitale Gesellschaft (Germany)
- Digital Rights Ireland (Ireland)
- Danilo Doneda, Director of Cedis/IDP and member of the National Council for Data Protection and Privacy (Brazil)
- Electronic Frontier Finland (Finland)
- epicenter.works (Austria)
- Fundación Acceso (Centroamérica)
- Fundacion Karisma (Colombia)
- Fundación Huaira (Ecuador)
- Fundación InternetBolivia.org (Bolivia)
- Hiperderecho (Peru)
- Homo Digitalis (Greece)
- Human Rights Watch (international)
- Instituto Panameño de Derecho y Nuevas Tecnologías - IPANDETEC (Central America)
- Instituto Beta: Internet e Democracia - IBIDEM (Brazil)
- Institute for Technology and Society - ITS Rio (Brazil)
- International Civil Liberties Monitoring Group (ICLMG)
- Iuridicium Remedium z.s. (Czech Republic)
- IT-Pol Denmark (Denmark)
- Douwe Korff, Emeritus Professor of International Law, London Metropolitan University
- Laboratório de Políticas Públicas e Internet - LAPIN (Brazil)
- Laura Schertel Mendes, Professor, Brasilia University and Director of Cedis/IDP (Brazil)
- Open Net Korea (Korea)
- OpenMedia (Canada)
- Privacy International (international)
- R3D: Red en Defensa de los Derechos Digitales (México)
- Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic - CIPPIC (Canada)
- Usuarios Digitales (Ecuador)
- org (Netherlands)
- Xnet (Spain)
See, for example, Access Now, comments on the draft 2nd Additional Protocol to the Budapest Convention on Cybercrime, available at: https://rm.coe.int/0900001680a25783; EDPB, contribution to the 6th round of consultations on the draft Second Additional Protocol to the Council of Europe Budapest Convention on Cybercrime, available at: https://edpb.europa.eu/system/files/2021-05/edpb_contribution052021_6throundconsultations_budapestconvention_en.pdf.
 Alessandra Pierucci, Correspondence to Ms. Chloé Berthélémy, dated 17 May 2021; Consultative Committee of the Convention for the Protection of Individuals with Regard to Automated Processing of Personal Data, Directorate General Human Rights and Rule of Law, Opinion on Draft Second Additional Protocol, May 7, 2021, https://rm.coe.int/opinion-of-the-committee-of-convention-108-on-the-draft-second-additio/1680a26489; EDPB, see footnote 1; Joint Civil Society letter, 2 May: available at https://edri.org/wp-content/uploads/2021/05/20210420_LetterCoECyberCrimeProtocol_6thRound.pdf.
It took two and a half years and one national security incident, but Venmo did it, folks: users now have privacy settings to hide their friends lists.
EFF first pointed out the problem with Venmo friends lists in early 2019 with our "Fix It Already" campaign. While Venmo offered a setting to make your payments and transactions private, there was no option to hide your friends list. No matter how many settings you tinkered with, Venmo would show your full friends list to anyone else with a Venmo account. That meant an effectively public record of the people you exchange money with regularly, along with whoever the app might have automatically imported from your phone contact list or even your Facebook friends list. The only way to make a friends list “private” was to manually delete friends one at a time; turn off auto-syncing; and, when the app wouldn’t even let users do that, monitor for auto-populated friends and remove them one by one, too.
This public-no-matter-what friends list design was a privacy disaster waiting to happen, and it happened to the President of the United States. Using the app’s search tool and all those public friends lists, Buzzfeed News found President Biden’s account in less than 10 minutes, as well as those of members of the Biden family, senior staffers, and members of Congress. This appears to have been the last straw for Venmo: after more than two years of effectively ignoring calls from EFF, Mozilla, and others, the company has finally started to roll out privacy settings for friends lists.
As we’ve noted before, this is the bare minimum. Providing more privacy settings options so users can opt-out of the publication of their friends list is a step in the right direction. But what Venmo—and any other payment app—must do next is make privacy the default for transactions and friends lists, not just an option buried in the settings.
In the meantime, follow these steps to lock down your Venmo account:
- Tap the three lines in the top right corner of your home screen and select Settings near the bottom. From the settings screen, select Privacy and then Friends List. (If the Friends List option does not appear, try updating your app, restarting it, or restarting your phone.)
- By default, your friends list is set to Public.
- Change the privacy setting to Private. If you do not wish to appear in your friends’ own friends lists—after all, they may not set theirs to private—click the toggle off at the bottom.
- Back on the Privacy settings page, set your default privacy option for all future payments to Private.
- Now select Past Transactions.
- Select Change All to Private.
- Confirm the change and click Change to Private.
- Now go all the way back to the main settings page, and select Friends & social.
- From here, you may see options to unlink your Venmo account from your Facebook account, Facebook friends list, and phone contact list. (Venmo may not give you all of these options if, for example, you originally signed up for Venmo with your Facebook account.) Click all the toggles off if possible.
Obviously your specific privacy preferences are up to you, but following the steps above should protect you from the most egregious snafus that the company has caused over the years with its public-by-default—or entirely missing—privacy settings. Although it shouldn't take a national security risk to force a company to focus on privacy, we're glad that Venmo has finally provided friends list privacy options.
Amazon Ring has announced that it will change the way police can request footage from millions of doorbell cameras in communities across the country. Rather than the current system, in which police can send automatic bulk email requests to individual Ring users in an area of interest of up to half a square mile, police will now publicly post their requests to Ring’s accompanying Neighbors app. Users of that app will see a “Request for Assistance” on their feed, unless they opt out of seeing such requests, and then Ring customers in the area of interest (still up to half a square mile) can respond by reviewing and providing their footage.
Because only a portion of Ring users are also Neighbors users, and some of them may opt out of receiving police requests, this new system may reduce the number of people who receive police requests, though we wonder whether Ring will now push more of its users to register for the app.
This new model also may increase transparency over how police officers use and abuse the Ring system, especially as to people of color, immigrants, and protesters. Previously, in order to learn about police requests to Ring users, investigative reporters and civil liberties groups had to file public records requests with police departments—which consumed significant time and often yielded little information from recalcitrant agencies. Through this labor-intensive process, EFF revealed that the Los Angeles Police Department targeted Black Lives Matter protests in May and June 2020 with bulk Ring requests for doorbell camera footage that likely included First Amendment protected activities. Now, users will be able to see every digital request a police department has made to residents for Ring footage by scrolling through a department’s public page on the app.
But making it easier to monitor historical requests can only do so much. It certainly does not address the larger problem with Ring and Neighbors: the network is predicated on perpetuating irrational fear of neighborhood crime, often yielding disproportionate scrutiny against people of color, all for the purposes of selling more cameras. Ring does so through police partnerships, which now encompass 1 in every 10 police departments in the United States. At their core, these partnerships facilitate bulk requests from police officers to Ring customers for their camera footage, built on a growing Ring surveillance network of millions of public-facing cameras. EFF adamantly opposes these Ring-police partnerships and advocates for their dissolution.
Nor does new transparency about bulk officer-to-resident requests through Ring erase the long history of secrecy about these shady partnerships. For example, Amazon has provided free Ring cameras to police, and limited what police were allowed to say about Ring, even including the existence of the partnership.
Notably, Amazon has moved Ring functionality to its Neighbors app. Neighbors is a problematic technology. Like its peers Nextdoor and Citizen, it encourages its users to report supposedly suspicious people—often resulting in racially biased posts that endanger innocent residents and passersby.
Ring’s small reforms invite bigger questions: Why does a customer-focused technology company need to develop and maintain a feature for law enforcement in the first place? Why must Ring and other technology companies continue to offer police free features to facilitate surveillance and the transfer of information from users to the government?
Here’s some free advice for Ring: Want to make your product less harmful to vulnerable populations? Stop facilitating their surveillance and harassment at the hands of police.
Maryland and Montana Pass the Nation’s First Laws Restricting Law Enforcement Access to Genetic Genealogy Databases
Last week, Maryland and Montana passed laws requiring judicial authorization to search consumer DNA databases in criminal investigations. These are welcome and important restrictions on forensic genetic genealogy searching (FGGS)—a law enforcement technique that has become increasingly common and impacts the genetic privacy of millions of Americans.
Consumer personal genetics companies like Ancestry, 23andMe, GEDMatch, and FamilyTreeDNA host the DNA data of millions of Americans. The data users share with consumer DNA databases is extensive and revealing. The genetic profiles stored in those databases are made up of more than half a million single nucleotide polymorphisms (“SNPs”) that span the entirety of the human genome. These profiles not only can reveal family members and distant ancestors, they can divulge a person’s propensity for various diseases like breast cancer or Alzheimer’s and can even predict addiction and drug response. Some researchers have even claimed that human behaviors such as aggression can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces; to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply. Companies can even create images of what they think a person looks like based just on their genetic data.
Law enforcement regularly accesses this intensely private and sensitive data too, using FGGS. Just like consumers, officers take advantage of the genetics companies’ powerful algorithms to try to identify familial relationships between an unknown forensic sample and existing site users. These familial relationships can then lead law enforcement to possible suspects. However, in using FGGS, officers are rifling through the genetic data of millions of Americans who are not suspects in the investigation and have no connection to the crime whatsoever. This is not how criminal investigations are supposed to work. As we have argued before, the language of the Fourth Amendment, which requires probable cause for every search and particularity for every warrant, precludes dragnet warrantless searches like these. A technique’s usefulness for law enforcement does not outweigh people’s privacy interests in their genetic data.
Up until now, nothing has prevented law enforcement from rifling through the genetic data of millions of unsuspecting and innocent Americans. The new laws in Maryland and Montana should change that.
Here’s What the New Laws Require:
Maryland:
Maryland’s law is very broad and covers much more than FGGS. It requires judicial authorization for FGGS and places strict limits on when and under what conditions law enforcement officers may conduct FGGS. For example, FGGS may only be used in cases of rape, murder, felony sexual offenses, and criminal acts that present “a substantial and ongoing threat to public safety or national security.” Before officers can pursue FGGS, they must certify to the court that they have already tried searching existing, state-run criminal DNA databases like CODIS, that they have pursued other reasonable investigative leads, and that those searches have failed to identify anyone. And FGGS may only be used with consumer databases that have provided explicit notice to users about law enforcement searches and sought consent from those users. These meaningful restrictions ensure that FGGS does not become the default first search conducted by law enforcement and limits its use to crimes that society has already determined are the most serious.
The Maryland law regulates other important aspects of genetic investigations as well. For example, it places strict limits on and requires judicial oversight for the covert collection of DNA samples from both potential suspects and their genetic relatives, something we have challenged several times in the courts. This is a necessary protection because officers frequently and secretly collect and search DNA from free people in criminal investigations involving FGGS. We cannot avoid shedding carbon copies of our DNA, and we leave it behind on items in our trash, an envelope we lick to seal, or even the chairs we sit on, making it easy for law enforcement to collect our DNA without our knowledge. We have argued that the Fourth Amendment precludes covert collection, but until courts have a chance to address this issue, statutory protections are an important way to reinforce our constitutional rights.
The new Maryland law also mandates informed consent in writing before officers can collect DNA samples from third parties and precludes covert collection from someone who has refused to provide a sample. It requires destruction of DNA samples and data when an investigation ends. It also requires licensing for labs that conduct DNA sequencing used for FGGS and for individuals who perform genetic genealogy. It creates criminal penalties for violating the statute and a private right of action with liquidated damages so that people can enforce the law through the courts. It requires the governor’s office to report annually and publicly on law enforcement use of FGGS and covert collection. Finally, it states explicitly that criminal defendants may use the technique as well to support their defense (but places similar restrictions on use). All of these requirements will help to rein in the unregulated use of FGGS.
Montana:
In contrast to Maryland’s 16-page comprehensive statute, Montana’s is only two pages and less clearly drafted. However, it still offers important protections for people identified through FGGS.
Montana’s statute requires a warrant before government entities can use familial DNA or partial match search techniques on either consumer DNA databases or the state’s criminal DNA identification index.1 The statute defines a “familial DNA search” broadly as a search that uses “specialized software to detect and statistically rank a list of potential candidates in the DNA database who may be a close biological relative to the unknown individual contributing the evidence DNA profile.” This is exactly what the software of consumer genetic genealogy sites like GEDmatch and FamilyTreeDNA does. The statute also applies to companies like Ancestry and 23andMe that do their own genotyping in-house, because it covers “lineage testing,” which it defines as “[SNP] genotyping to generate results related to a person's ancestry and genetic predisposition to health-related topics.”
The statute also requires a warrant for other kinds of searches of consumer DNA databases, like when law enforcement is looking for a direct user of the consumer DNA database. Unfortunately, though, the statute includes a carve-out to this warrant requirement if “the consumer whose information is sought previously waived the consumer’s right to privacy,” but does not explain how an individual consumer may waive their privacy rights. There is no carve out for familial searches.
By creating stronger protections for people who are identified through familial searches but who haven’t uploaded their own data, Montana’s statute recognizes an important point that we and others have been making for a few years—you cannot waive your privacy rights in your genetic information when someone else has control over whether your shared DNA ends up in a consumer database.
It is unfortunate, though, that this seems to come at the expense of existing users of consumer genetics services. Montana should have extended warrant protections to everyone whose DNA data ends up in a consumer DNA database. A bright line rule would have been better for privacy and perhaps easier for law enforcement to implement since it is unclear how law enforcement will determine whether someone waived their privacy rights in advance of a search.
We need more states—and the federal government—to pass restrictions on genetic genealogy searches. Some companies, like Ancestry and 23andMe, prevent direct access to their databases and have fought law enforcement demands for data. However, other companies, like GEDmatch and FamilyTreeDNA, have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country. By 2018, FGGS had already been used in at least 200 cases. Officers never sought a warrant or any legal process at all in any of those cases because there were no state or federal laws explicitly requiring them to do so.
While EFF has argued that FGG searches are dragnets and should never be allowed, even with a warrant, Montana’s and Maryland’s laws are still a step in the right direction, especially where, as in Maryland, an outright ban previously failed. Our genetic data is too sensitive and important to leave it up to the whims of private companies to protect it and the unbridled discretion of law enforcement to search it.
1. The restriction on warrantless familial and partial match searching of government-run criminal DNA databases is particularly welcome. Most states do not explicitly limit these searches (Maryland is an exception and explicitly bans this practice), even though many, including a federal government working group, have questioned their efficacy.
Dating is risky. Aside from the typical worries of possible rejection or lack of romantic chemistry, LGBTQIA+ people often have added safety considerations to keep in mind. Sometimes staying in the proverbial closet is a matter of personal security. Even if someone is open with their community about being LGBTQ+, they can be harmed by oppressive governments, bigoted law enforcement, and individuals with hateful beliefs. So here’s some advice for staying safe while online dating as an LGBTQIA+ person.
Step One: Threat Modeling
The first step is making a personal digital security plan. You should start with looking at your own security situation from a high level. This is often called threat modeling and risk assessment. Simply put, this is taking inventory of the things you want to protect and what adversaries or risks you might be facing. In the context of online dating, your protected assets might include details about your sexuality, gender identity, contacts of friends and family, HIV status, political affiliation, etc.
Let's say that you want to join a dating app, chat over the app, exchange pictures, meet someone safely, and avoid stalking and harassment. Threat modeling is how you assess what you want to protect and from whom.
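Threat modeling needs no special tools, but it can help to write the inventory down. Here is a minimal, purely illustrative sketch in Python (every asset, adversary, and mitigation listed is a hypothetical example, not a recommendation):

```python
# An informal threat-model inventory written as plain data.
# All entries are hypothetical examples for illustration.
threat_model = {
    "assets": [        # what you want to protect
        "sexual orientation / gender identity",
        "HIV status",
        "contacts of friends and family",
        "physical location",
    ],
    "adversaries": [   # who you want to protect it from
        "hostile strangers on the app",
        "data brokers and advertisers",
        "hostile local authorities",
    ],
    "mitigations": [   # protective steps, matched to the risks above
        "unique email address per dating app",
        "secondary phone number",
        "location services turned off",
        "metadata stripped from shared photos",
    ],
}

# Reviewing the plan is just reading it back.
for category, items in threat_model.items():
    print(f"{category}: {len(items)} item(s)")
```

The point is not the code; it’s that naming your assets and adversaries explicitly makes the rest of the decisions in this guide easier.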
We touch in this post on a few considerations for people in countries where homosexuality is criminalized, which may include targeted harassment by law enforcement. But this guide is by no means comprehensive. Refer to materials by LGBTQ+ organizations in those countries for specific tips on your threat model.
Securely Setting Up Dating Profiles
When making a new dating account, make sure to use a unique email address to register. Often you will need to verify the registration process through that email account, so it’s likely you’ll need to provide a legitimate address. Consider creating an email address strictly for that dating app. Oftentimes there are ways to discover if an email address is associated with an account on a given platform, so using a unique one can prevent others from potentially knowing you’re on that app. Alternatively, you might use a disposable temporary email address service. But if you do so, keep in mind that you won’t be able to access it in the future, such as if you need to recover a locked account.
The same logic applies to using phone numbers when registering for a new dating account. Consider using a temporary or disposable phone number. While this can be more difficult than using your regular phone number, there are plenty of free and paid virtual telephone services available that offer secondary phone numbers. For example, Google Voice is a service that offers a secondary phone number attached to your normal one, registered through a Google account. If your higher security priority is to abstain from giving data to a privacy-invasive company like Google, a “burner” pay-as-you-go phone service like Mint Mobile is worth checking out.
When choosing profile photos, be mindful of images that might accidentally give away your location or identity. Even the smallest clues in an image can expose its location. Some people use pictures with relatively empty backgrounds, or taken in places they don’t go to regularly.
Make sure to check out the privacy and security sections in your profile settings menu. You can usually configure how others can find you, whether you’re visible to others, whether location services are on (that is, when an app is able to track your location through your phone), and more. Turn off anything that gives away your location or other information, and later you can selectively decide which features to reactivate, if any. More mobile phone privacy information can be found in this Surveillance Self-Defense guide.
Communicating via Phone, Email, or In-App Messaging
Generally speaking, using an end-to-end encrypted messaging service is the best way to go for secure texting. For some options, like Signal or WhatsApp, you may be able to use a secondary phone number to keep your “real” phone number private.
For phone calls, you may want to use a virtual phone service that allows you to screen calls, use secondary phone numbers, block numbers, and more. These aren’t always free, but research can bring up “freemium” versions that give you free access to limited features.
Be wary of messaging features within apps that offer deletion options or disappearing messages, like Snapchat. Many images and messages sent through these apps are never truly deleted, and may still exist on the company’s servers. And even if you send someone a message that self-deletes or notifies you if they take a screenshot, that person can still take a picture of it with another device, bypassing any notifications. Also, Snapchat has a map feature that shows live public posts around the world as they go up. With diligence, someone could determine your location by tracing any public posts you make through this feature.
Sharing Photos
If the person you’re chatting with has earned a bit of your trust and you want to share pictures with them, consider not just what they can see about you in the image itself, as described above, but also what they can learn about you by examining data embedded in the file.
EXIF metadata lives inside an image file and describes where the photo was taken, the device it was made with, the date, and more. Although some apps have gotten better at automatically withholding EXIF data from uploaded images, you still should manually remove it from any images you share with others, especially if you send them directly over phone messaging.
One quick way is to send the image to yourself on Signal messenger, which automatically strips EXIF data. When you search for your own name in contacts, a “Note to Self” option will come up, giving you a chat screen where you can send things to yourself.
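For readers comfortable with code, EXIF data lives in a JPEG’s APP1 marker segment, so it can also be removed programmatically. Below is a rough sketch in Python using only the standard library; strip_exif is a hypothetical helper written for illustration, and for real use a maintained tool such as exiftool is a better choice:

```python
import struct

def strip_exif(jpeg_bytes):
    """Return a copy of raw JPEG bytes with APP1 (EXIF) segments removed."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")  # keep the Start of Image marker
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            # Unexpected byte outside a marker; copy the rest verbatim.
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            # Start of Scan: entropy-coded image data follows; copy it all.
            out += jpeg_bytes[i:]
            break
        # A segment's length field includes its own two length bytes.
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker != 0xE1:  # drop APP1 (EXIF) segments, keep everything else
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

This drops all APP1 segments, which is where EXIF data (including any GPS coordinates) is stored; other metadata blocks, such as APP13 or comment segments, would need similar handling.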
For some people, it might be valuable to use a watermarking app to add your username or some kind of signature to images. This can verify who you are to others and prevent anyone from using your images to impersonate you. There are many free and mostly-free options in iPhone and Android app stores. Consider a lightweight version that allows you to easily place text on an image and lets you screenshot the result. Keep in mind that watermarking a picture is a quick way to identify yourself, which in itself is a trade-off.
Sexting Safely
Much of what we’ve already gone over will step up your security when it comes to sexting, but here are some extra precautions:
Seek clearly communicated consent between you and romantic partners about how intimate pictures can be shared or saved. This is great non-technical security at work. If anyone else is in an image you want to share, make sure you have their consent as well. Also, be thoughtful as to whether or not to include your face in any images you share.
As we mentioned above, your location can be determined from public posts you make, such as through Snapchat’s map feature.
For video chatting with a partner, consider a service like Jitsi, which allows temporary rooms, requires no registration, and is designed with privacy in mind. Many other services require account registration and are not built with privacy in mind.
Meeting Someone AFK
Say you’ve taken all the above precautions, someone online has gained your trust, and you want to meet them away-from-keyboard and in real life. Always meet first somewhere public and occupied with other people. Even better, meet in an area more likely to be accepting of LGBTQIA+ people. Tell a friend beforehand all the details about where you’re going and who you are meeting, and agree on a time when you promise to check back in to confirm you’re OK.
If you’re living in one of the 69 countries where homosexuality is illegal and criminalized, make sure to check in with local advocacy groups about your area. Knowing your rights as a citizen will help keep you safe if you’re stopped by law enforcement.
Privacy and Security is a Group Effort
Although the world is often hostile to non-normative expressions of love and identity, your personal security, online and off, is much better supported when you include the help of others that you trust. Keeping each other safe, accountable, and cared for gets easier when you have more people involved. A network is always stronger when every node on it is fortified against potential threats.
Happy Pride Month—keep each other safe.
The Council of Europe Cybercrime Committee's (T-CY) recent decision to approve new international rules for law enforcement access to user data without strong privacy protections is a blow for global human rights in the digital age. The final version of the draft Second Additional Protocol to the Council of Europe’s (CoE) widely adopted Budapest Cybercrime Convention, approved by the T-CY drafting committee on May 28th, places few limits on law enforcement data collection. As such, the Protocol can endanger technology users, journalists, activists, and vulnerable populations in countries with flimsy privacy protections and weaken everyone's right to privacy and free expression across the globe.
The Protocol now heads to members of CoE's Parliamentary Committee (PACE) for their opinion. PACE’s Committee on Legal Affairs and Human Rights can recommend further amendments, and decide which ones will be adopted by the Standing Committee or the Plenary. Then, the Council of Ministers will vote on whether to integrate PACE's recommendations into the final text. The CoE’s plan is to finalize the Protocol's adoption by November. If adopted, the Protocol will be open for signatures to any country that has signed the Budapest Convention sometime before 2022.
The next step for countries comes at the signature stage, when they can reserve the right not to abide by certain provisions in the Protocol, especially Article 7 on direct cooperation between law enforcement and companies holding user data.
If countries sign the Protocol as it stands and in its entirety, it will reshape how state police access digital data from Internet companies based in other countries by prioritizing law enforcement demands, sidestepping judicial oversight, and lowering the bar for privacy safeguards.
CoE’s Historical Commitment to Transparency Conspicuously Absent
While transparency and a strong commitment to engaging with external stakeholders have been hallmarks of CoE treaty development, the new Protocol’s drafting process lacked robust engagement with civil society. The T-CY adopted internal rules that have fostered a largely opaque process, led by public safety and law enforcement officials. T-CY’s periodic consultations with external stakeholders and the public have lacked important details, offered short response timelines, and failed to meaningfully address criticisms.
In 2018, nearly 100 public interest groups called on the CoE to allow for expert civil society input on the Protocol’s development. In 2019, the European Data Protection Board (EDPB) similarly called on T-CY to ensure “early and more proactive involvement of data protection authorities” in the drafting process, a call it felt the need to reiterate earlier this year. And when presenting the Protocol’s draft text for final public comment, T-CY provided only 2.5 weeks, a timeframe that the EDPB noted “does not allow for a timely and in-depth analysis” from stakeholders. That version of the Protocol also failed to include the explanatory text for the data protection safeguards, which was only published later, in the final version of May 28, without public consultation. Even other branches of the CoE, such as its data protection committee, have found it difficult to provide meaningful input under these conditions.
Last week, over 40 civil society organizations called on CoE to provide an additional opportunity to comment on the final text of the Protocol. The Protocol aims to set a new global standard across countries with widely varying commitments to privacy and human rights. Meaningful input from external stakeholders including digital rights organizations and privacy regulators is essential. Unfortunately, CoE refused and will likely vote to open the Protocol for state signatures starting in November.
With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.
Eroding Global Protection for Online Anonymity
The new Protocol provides few safeguards for online anonymity, posing a threat to the safety of activists, dissidents, journalists, and the free expression rights of everyday people who go online to comment on and criticize politicians and governments. When Internet companies turn subscriber information over to law enforcement, the real-world consequences can be dire. Anonymity also plays an important role in facilitating opinion and expression online and is necessary for activists and protestors around the world. Yet the new Protocol fails to acknowledge the important privacy interests it places in jeopardy and, by ensuring most of its safeguards are optional, permits police access to sensitive personal data without systematic judicial supervision.
As a starting point, the new Protocol’s explanatory text claims that “subscriber information … does not allow precise conclusions concerning the private lives and daily habits of individuals concerned,” deeming it less intrusive than other categories of data.
This characterization is directly at odds with growing recognition that police frequently use subscriber data access to identify deeply private anonymous communications and activity. Indeed, the Court of Justice of the European Union (CJEU) recently held that letting states associate subscriber data with anonymous digital activity can constitute a ‘serious’ interference with privacy. The Protocol’s attempt to paint identification capabilities as non-intrusive even conflicts with CoE’s own European Court of Human Rights (ECtHR). By encoding the opposite conclusion in an international protocol, the new explanatory text can deter future courts from properly recognizing the importance of online anonymity. As the ECtHR has held, doing so would “deny the necessary protection to information which might reveal a good deal about the online activity of an individual, including sensitive details of his or her interests, beliefs and intimate lifestyle.”
Articles 7 and 8 of the Protocol in particular adopt intrusive police powers while requiring few safeguards. Under Article 7, states must clear all legal obstacles to “direct cooperation” between local companies and law enforcement. Any privacy laws that prevent Internet companies from voluntarily identifying customers to foreign police without a court order are incompatible with Article 7 and must be amended. “Direct cooperation” is intended to be the primary means of accessing subscriber data, but Article 8 provides a supplementary power to force disclosure from companies that refuse to cooperate. While Article 8 does not require judicial supervision of police, countries with strong privacy protections may continue relying on their own courts when forcing a local service provider to identify customers. Both Articles 7 and 8 also allow countries to screen and refuse any subscriber data demands that might threaten a state’s essential interests. But these screening mechanisms also remain optional, and refusals are to be “strictly limited,” with the need to protect private data invoked only in “exceptional cases.”
By leaving most privacy and human rights protections to each state’s discretion, Articles 7 and 8 permit access to sensitive identification data under conditions that the ECtHR described as “offer[ing] virtually no protection from arbitrary interference … and no safeguards against abuse by State officials.”
The Protocol’s drafters have resisted calls from civil society and privacy regulators to require some form of judicial supervision in Articles 7 and 8. Some police agencies object to reliance on the courts, arguing that judicial supervision leads to slower results. But systemic involvement of the courts is a critical safeguard when access to sensitive personal data is at stake. The Office of the Privacy Commissioner of Canada put it cogently: “Independent judicial oversight may take time, but it’s indispensable in the specific context of law enforcement investigations.” Incorporating judicial supervision as a minimum threshold for cross-border access is also feasible. Indeed, a majority of states in T-CY’s own survey require prior judicial authorization for at least some forms of subscriber data in their respective national laws.
At a minimum, the new Protocol text is flawed for its failure to recognize the deeply private nature of anonymous online activity and the serious threat posed to human rights when State officials are allowed open-ended access to identification data. Granting states this access makes the world less free and seriously threatens free expression. Article 7’s emphasis on non-judicial ‘cooperation’ between police and Internet companies poses a particularly insidious risk, and must not form part of the final adopted Convention.
Imposing Optional Privacy Standards
Article 14, which was only recently made public, is intended to provide detailed safeguards for personal information. Many of these protections are important, imposing limits on the treatment of sensitive data, the retention of personal data, and the use of personal data in automated decision-making, particularly in countries without data protection laws. The detailed protections are complex, and civil society groups continue to unpack their full legal impact. That being said, some shortcomings are immediately evident.
Some of Article 14’s protections actively undermine privacy—for example, paragraph 14.2.a prohibits signatories from imposing any additional “generic data protection conditions” when limiting the use of personal data. Paragraph 14.1.d also strictly limits when a country’s data protection laws can prevent law enforcement-driven personal data transfers to another country.
More generally, and in stark contrast to the Protocol’s lawful access obligations, the detailed privacy safeguards encoded in Article 14 are not mandatory and can be ignored if countries have other arrangements in place (Article 14.1). States can rely on a wide variety of agreements to bypass the Article 14 protections. The OECD is currently negotiating an agreement that might systematically displace the Article 14 protections and, under the United States Clarifying Lawful Overseas Use of Data (CLOUD) Act, the U.S. executive branch can enter into “agreements” with other states to facilitate law enforcement transfers. Paragraph 14.1.c even contemplates informal agreements that are neither binding, nor even public, meaning that countries can secretly and systematically bypass the Article 14 safeguards. No real obligations are put in place to ensure these alternative arrangements provide an adequate or even sufficient level of privacy protection. States can therefore rely on the Protocol’s law enforcement powers while using side agreements to bypass its privacy protections, a particularly troubling development given the low data protection standards of many anticipated signatories.
The Article 14 protections are also problematic because they appear to fall short of the minimum data protection that the CJEU has required. The full list of protections in Article 14, for example, resembles that inserted by the European Commission into its ‘Privacy Shield’ agreement. Internet companies relied upon the Privacy Shield to facilitate economic transfers of personal data from the European Union (EU) to the United States until the CJEU invalidated the agreement in 2020, finding its privacy protections and remedies insufficient. Similarly, clause 14.6 limits the use of personal data in purely automated decision-making systems that will have significant adverse effects on relevant individual interests. But the CJEU has also found that an international agreement for transferring air passenger data to Canada for public safety objectives was inconsistent with EU data protection guarantees despite the inclusion of a similar provision.
Conclusion
These and other substantive problems with the Protocol are concerning. Cross-border data access is rapidly becoming common in even routine criminal investigations, as every aspect of our lives continues its steady migration to the digital world. Instead of baking robust human rights and privacy protections into cross-border investigations, the Protocol discourages court oversight, renders most of its safeguards optional, and generally weakens privacy and freedom of expression.
We are happy to see the news that Facebook is putting an end to a policy that has long privileged the speech of politicians over that of ordinary users. The policy change, which was reported on Friday by The Verge, is something that EFF has been pushing for since as early as 2019.
Back then, Facebook executive Nick Clegg, a former politician himself, famously pondered: "Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be."
Perhaps Clegg had a point—we’ve long said that companies are ineffective arbiters of what the world says—but that hardly justifies holding politicians to a lower standard than the average person. International standards consider the speaker, but only as one of many factors. For example, the United Nations’ Rabat Plan of Action outlines a six-part threshold test that takes into account “(1) the social and political context, (2) status of the speaker, (3) intent to incite the audience against a target group, (4) content and form of the speech, (5) extent of its dissemination and (6) likelihood of harm, including imminence.” Facebook’s Oversight Board recently endorsed the Plan as a framework for assessing the removal of posts that may incite hostility or violence.
Facebook has deviated very far from the Rabat standard thanks, in part, to the policy it is finally repudiating. For example, it has banned elected officials from parties disfavored by the U.S. government, such as Hezbollah, Hamas, and the Kurdistan Workers Party (PKK), all of which appear on the government's list of designated terrorist organizations—despite not being legally obligated to do so. And in 2018, the company deleted the account of Chechen leader Ramzan Kadyrov, claiming that it was legally obligated to do so after the leader was placed on a sanctions list. Legal experts familiar with the law of international sanctions have disagreed, on the grounds that the sanctions are economic in nature and do not apply to speech.
So this decision is a good step in the right direction. But Facebook has many steps to go, including finally—and publicly—endorsing and implementing the Santa Clara Principles.
But ultimately, the real problem is that Facebook’s policy choices have so much power in the first place. It’s worth noting that this move coincides with a massive effort to persuade the U.S. Congress to impose new regulations that are likely to entrench Facebook power over free expression in the U.S. and around the world. If users, activists and, yes, politicians want real progress in defending free expression, we must fight for a world where changes in Facebook’s community standards don’t merit headlines at all—because they just don’t matter that much.
Our friends at Access Now are once again hosting RightsCon online next week, June 7-11th. This summit provides an opportunity for human rights experts, technologists, government representatives, and activists to discuss pressing human rights challenges and their potential solutions. This year we will have several EFF staff in attendance and leading sessions throughout this five-day conference.
We hope you have an opportunity to connect with us at the following:
Monday, June 7th
7:45-8:45 am PDT – Taking stock of the Facebook Oversight Board’s first year
Director of International Freedom of Expression, Jillian York
This panel will take stock of the work of the Oversight Board since the announcement of its first members in May 2020. Panelists will critically reflect on the development of the Board over its first year and consider its future evolution.
9:00-10:00 am PDT– Upload filters for copyright enforcement take over the world: connecting struggles in EU, US, and Latin America
Associate Director of Policy and Activism, Katharine Trendacosta
The EU copyright directive risks making upload filters mandatory for large and small platforms to prevent copyright infringements by their users. Its adoption has led to large street protests over the risk that legal expression will be curtailed in the process. Although the new EU rules will only be applied from the summer, similar proposals have already emerged in the US and Latin America, citing the EU copyright directive as a role model.
11:30 am-12:30 pm PDT– What's past is prologue: safeguarding rights in international efforts to fight cybercrime
Note: This strategy session is limited to 25 participants.
Policy Director for Global Privacy, Katitza Rodriguez
Some efforts by governments to fight cybercrime can fail to respect international human rights law and standards, undermining people’s fundamental rights. At the UN, negotiations on a new cybercrime treaty are set to begin later this year. This would be the first global treaty on cybercrime and has the potential to significantly shape government cooperation on cybercrime and respect for rights.
Wednesday, June 9th
8:30-9:30 am PDT– Contemplating content moderation in Africa: disinformation and hate speech in focus
Director of International Freedom of Expression, Jillian York.
Note: This community lab is limited to 60 participants.
While social media platforms have been applauded for being a place of self-expression, they are often inattentive to local contexts in many ethnically and linguistically diverse African countries. This panel brings together academics and civil society members to highlight and discuss the obstacles facing Africa when it comes to content moderation.
Thursday, June 10th
5:30-6:30 am PDT– Designing for language accessibility: making usable technologies for non-left-to-right languages
Designer and Education Manager, Shirin Mori.
Note: This community lab is limited to 60 participants.
The internet was built from the ground up for ASCII characters, leaving billions of speakers of languages that do not use the Latin alphabet underserved. We’d like to introduce a number of distinct problems for non-Left-to-Right (LTR) language users that are prevalent in existing security tools and workflows.
8:00-9:00 am PDT– “But we're on the same side!”: how tools to fight extremism can harm counterspeech
Director of International Freedom of Expression, Jillian York.
Bad actors coordinate across platforms to spread content that is linked to offline violence but not deemed TVEC by platforms; centralized responses have proved prone to error and make it too easy to propagate those errors across the Internet. Can actually dangerous speech be addressed without encouraging such dangerous centralization?
12:30-1:30 pm PDT– As AR/VR becomes a reality, it needs a human rights framework
Policy Director for Global Privacy, Katitza Rodriguez; Grassroots Advocacy Organizer, Rory Mir; Deputy Executive Director and General Counsel, Kurt Opsahl.
Note: This community lab is limited to 60 participants.
Virtual reality and augmented reality technologies (VR/AR) are rapidly reaching a wider audience. This technology promises to entertain and educate, to connect and enhance our lives, and even to help advocate for our rights. But it also raises the risk of eroding them online.
Friday, June 11th
9:15-10:15 am PDT– Must-carry? The pros and cons of requiring online services to carry all user speech: litigation strategies and outcomes
Civil Liberties Director, David Greene.
Note: This community lab is limited to 60 participants.
The legal issue of whether online services must carry user speech is a complicated one, with free speech values on both sides, and different results arising from different national legal systems. Digital rights litigators from around the world will meet to map out the legal issue across various national legal systems and discuss ongoing efforts as well as how the issue may be decided doctrinally under our various legal regimes.
In addition to these events, the EFF staff will be attending many other events at RightsCon and we look forward to meeting you there. You can view the full programming, as well as many other useful resources, on the RightsCon site.
We all know that, in the 21st century, it is difficult to lead a life without a cell phone. It is also difficult to change your number, the one all your friends, family, doctors, children’s schools, and so on have for you. It’s especially difficult to do these things if you are trying to leave an abusive situation where your abuser is in control of your family plan and therefore has access to your phone records. Thankfully, Congress has a bill that will change that.
In August 2020, EFF joined with the Clinic to End Tech Abuse and other groups dedicated to protecting survivors of domestic violence to send a letter to Congress, calling on it to pass a federal law that gives survivors the right to leave a family mobile phone plan they share with their abuser.
This January, Senators Brian Schatz, Deb Fischer, Richard Blumenthal, Rick Scott, and Jacky Rosen responded to the letter by introducing The Safe Connections Act (S. 120), which would make it easier for survivors to separate their phone line from a family plan while keeping their own phone number. It would also require the FCC to create rules to protect the privacy of the people seeking this protection. EFF is supportive of this bill.
The bill got bipartisan support and passed unanimously out of the U.S. Senate Committee on Commerce, Science, & Transportation on April 28, 2021. While there is still a long way to go, EFF is pleased to see this bill get past the first critical step. There is little reason that telecommunications carriers, who are already required to make numbers portable when users want to change carriers, cannot replicate such a seamless process when a paying customer wants to move an account within the same carrier.
Our cell phones contain a vast amount of information about our lives, including the calls and texts we make and receive. The account holder of a family plan has access to all of that data, including if someone in the plan is calling a domestic violence hotline. Giving survivors more tools to protect their privacy, leave abusive situations, and get on with their lives are worthy endeavors. The Safe Connections Act provides a framework to serve these ends.
We would prefer a bill that did not require survivors to provide paperwork to “prove” their abuse—for many survivors, obtaining documentation of their abuse from a third party is burdensome and traumatic, especially at the very moment when they are trying to free themselves from their abusers. The bill also requires the FCC to create new regulations to protect the privacy of people seeking help to leave abusive situations, though it still needs stronger safeguards and remedies to ensure these protections are effective. EFF will continue to advocate for these improvements as the legislation moves forward.
Van Buren is a Victory Against Overbroad Interpretations of the CFAA, and Protects Security Researchers
The Supreme Court’s Van Buren decision today overturned a dangerous precedent and clarified the notoriously ambiguous meaning of “exceeding authorized access” in the Computer Fraud and Abuse Act, the federal computer crime law that’s been misused to prosecute beneficial and important online activity.
The decision is a victory for all Internet users, as it affirmed that online services cannot use the CFAA’s criminal provisions to enforce limitations on how or why you use their service, including for purposes such as collecting evidence of discrimination or identifying security vulnerabilities. It also rejected the use of troubling physical-world analogies and legal theories to interpret the law, which in the past have resulted in some of its most dangerous abuses.
The Van Buren decision is especially good news for security researchers, whose work discovering security vulnerabilities is vital to the public interest but often requires accessing computers in ways that contravene terms of service. Under the Department of Justice’s reading of the law, the CFAA allowed criminal charges against individuals for any website terms of service violation. But a majority of the Supreme Court rejected the DOJ’s interpretation. And although the high court did not narrow the CFAA as much as EFF would have liked, leaving open the question of whether the law requires circumvention of a technological access barrier, it provided good language that should help protect researchers, investigative journalists, and others.
The CFAA makes it a crime to “intentionally access a computer without authorization or exceed authorized access, and thereby obtain . . . information from any protected computer,” but does not define what authorization means for purposes of exceeding authorized access. In Van Buren, a former Georgia police officer was accused of taking money in exchange for looking up a license plate in a law enforcement database. This was a database he was otherwise entitled to access, and Van Buren was charged with exceeding authorized access under the CFAA. The Eleventh Circuit’s analysis had turned on the computer owner’s unilateral policies regarding use of its networks, allowing private parties to make EULA, TOS, or other use policies criminally enforceable.
The Supreme Court rightly overturned the Eleventh Circuit, and held that exceeding authorized access under the CFAA does not encompass “violations of circumstance-based access restrictions on employers’ computers.” Rather, the statute’s prohibition is limited to someone who “accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” The Court adopted a “gates-up-or-down” approach: either you are entitled to access the information or you are not. If you need to break through a digital gate to get in, entry is a crime, but if you are allowed through an open gateway, it’s not a crime to be inside.
This means that private parties’ terms-of-service limitations on how you can use information, or the purposes for which you can access it, are not criminally enforceable under the CFAA. For example, if you can view housing ads as an ordinary user, it is not a hacking crime to collect them for a bias-in-housing research project, even if the TOS forbids it. Van Buren is also really good news for port scanning: so long as the computer is open to the public, you need not worry that scanning a port violates the owner’s conditions of use.
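To make the port-scanning point concrete, here is a minimal sketch of a TCP connect scan (not from the original post; the function name and parameters are illustrative). It does nothing more than ask whether a publicly reachable machine accepts a connection on a given port—the kind of benign probing that Van Buren’s reading of the CFAA helps protect:

```python
import socket

def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a plain TCP connection; True means the port accepted it."""
    try:
        # A successful connect (then an immediate close) is all a connect
        # scan does -- no data is sent and no access barrier is bypassed.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: treat as closed.
        return False
```

A researcher might call `scan_port("example.com", 443)` to check whether a public web server is listening; under the gates-up-or-down approach, knocking on an open, public gate like this is not “exceeding authorized access.”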
While the decision was centered around the interpretation of the statute’s text, the Court bolstered its conclusion with the policy concerns raised by the amici, including a brief EFF filed on behalf of computer security researchers and organizations that employ and support them. The Court’s explanation is worth quoting in depth:
If the “exceeds authorized access” clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals. Take the workplace. Employers commonly state that computers and electronic devices can be used only for business purposes. So on the Government’s reading of the statute, an employee who sends a personal e-mail or reads the news using her work computer has violated the CFAA. Or consider the Internet. Many websites, services, and databases …. authorize a user’s access only upon his agreement to follow specified terms of service. If the “exceeds authorized access” clause encompasses violations of circumstance-based access restrictions on employers’ computers, it is difficult to see why it would not also encompass violations of such restrictions on website providers’ computers. And indeed, numerous amici explain why the Government’s reading [would] criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook.
This analysis shows the Court recognized the tremendous danger of an overly broad CFAA, and explicitly rejected the Government’s arguments for retaining wide powers, tempered only by its prosecutorial discretion.
Left Unresolved: Whether CFAA Violations Require Technical Access Limitations
The Court’s decision was limited in one important respect. In a footnote, the Court left open the question of whether an enforceable access restriction means only “technological (or ‘code-based’) limitations on access, or instead also looks to limits contained in contracts or policies,” meaning that the opinion neither adopted nor rejected either path. EFF has argued in courts and in legislative reform efforts for many years that it’s not a computer hacking crime without hacking through a technological defense.
This footnote is a bit odd, as the bulk of the majority opinion seems to point toward the law requiring someone to defeat a technological limitation on access, and throws shade at criminalizing TOS violations. In most cases, the scope of your access once on a computer is defined by technology, such as an access control list or a requirement to reenter a password. Professor Orin Kerr suggested that this limitation may have been necessary to build the six-justice majority.
Later in the Van Buren opinion, the Court rejected a Government argument that a rule against “using a confidential database for a non-law-enforcement purpose” should be treated as a criminally enforceable access restriction, different from “using information from the database for a non-law-enforcement purpose” (emphasis original). This makes sense under the “gates-up-or-down” approach adopted by the Court. Together with the policy issues the Court acknowledged regarding enforcing terms of service quoted above, this helps us understand the limitation footnote, suggesting cleverly writing a TOS will not easily turn a conditional rule on why you can access, or what you can do with information later, into a criminally enforceable access restriction.
Nevertheless, leaving the question open means that we will have to litigate whether and under what circumstances a contract or written policy can amount to an access restriction in the years to come. For example, in Facebook v. Power Ventures, the Ninth Circuit found that a cease and desist letter removing authorization was sufficient to create a CFAA violation for later access, even though a violation of the Facebook terms alone was not. Service providers will likely argue that this is the sort of non-technical access restriction that was left unresolved by Van Buren.
Court’s Narrow CFAA Interpretation Should Help Security Researchers
Even though the majority opinion left this important CFAA question unresolved, the decision still offers plenty of language that will be helpful in later cases on the scope of the statute. That’s because the Van Buren majority’s focus on the CFAA’s technical definitions, and on the types of computer access the law restricts, should guide lower courts toward narrowing the law’s reach.
This is a win because broad CFAA interpretations have in the past often deterred or chilled important security research and investigative journalism. The CFAA put these activities in legal jeopardy in part because courts often struggle when they rely on non-digital legal concepts and physical analogies to interpret the statute. Indeed, one of the principal disagreements between the Van Buren majority and dissent is whether the CFAA should be interpreted based on physical property law doctrines, such as trespass and theft.
The majority opinion ruled that, in principle, computer access is different from the physical world precisely because the CFAA contains so many technical terms and definitions. “When interpreting statutes, courts take note of terms that carry ‘technical meaning[s],’” the majority wrote.
This holds particularly true for the CFAA because the statute focuses on malicious computer use and intrusions, the majority wrote. For example, the term “access” in the context of computer use has its own specific, well-established meaning: “In the computing context, ‘access’ references the act of entering a computer ‘system itself’ or a particular ‘part of a computer system,’ such as files, folders, or databases.” Based on that definition, the CFAA’s “exceeding authorized access” restriction should be limited to prohibiting “the act of entering a part of the system to which a computer user lacks access privileges.”
The majority also recognized that the portions of the CFAA that define damage and loss are premised on harm to computer files and data, rather than general non-digital harm such as trespassing on another person’s property: “The statutory definitions of ‘damage’ and ‘loss’ thus focus on technological harms—such as the corruption of files—of the type unauthorized users cause to computer systems and data,” the Court wrote. This is important because loss and damage are prerequisites to civil CFAA claims, and the ability of private entities to enforce the CFAA has deterred security research when companies would rather their vulnerabilities remain unknown to the public.
Because the CFAA’s definitions of loss and damages focus on harm to computer files, systems, or data, the majority wrote that they “are ill fitted, however, to remediating ‘misuse’ of sensitive information that employees may permissibly access using their computers.”
The Supreme Court’s Van Buren decision rightly limits the CFAA’s prohibition on “exceeding authorized access” to accessing particular computer files, services, or other parts of a computer that are otherwise off-limits to the user. And by overturning the Eleventh Circuit decision that permitted CFAA liability based on someone violating a website’s terms of service or an employer’s computer-use restrictions, the Court ensured that a great deal of important, legitimate computer use is not a crime.
But there is still more work to be done to ensure that computer crime laws are not misused against researchers, journalists, activists, and everyday internet users. As longtime advocates against overbroad interpretations of the CFAA, EFF will continue to lead efforts to push courts and lawmakers to further narrow the CFAA and similar state computer crime laws so they can no longer be misused.
Related Cases: Van Buren v. United States